id | title | abstract | authors | published_date | link | markdown | abstract_ja
---|---|---|---|---|---|---|---
2304.05657 | Towards pore-scale simulation of combustion in porous media using a
low-Mach hybrid lattice Boltzmann/finite difference solver | A hybrid numerical model previously developed for combustion simulations is
extended in this article to describe flame propagation and stabilization in
porous media. The model, with a special focus on flame/wall interaction
processes, is validated via corresponding benchmarks involving flame
propagation in channels with both adiabatic and constant-temperature walls.
Simulations with different channel widths show that the model can correctly
capture the changes in flame shape and propagation speed as well as the dead
zone and quenching limit, as found in channels with cold walls. The model is
further assessed considering a pseudo 2-D porous burner involving an array of
cylindrical obstacles at constant temperature, investigated in a companion
experimental study. Furthermore, the model is used to simulate pore-scale flame
dynamics in a randomly-generated 3-D porous medium. Results are promising,
opening the door for future simulations of flame propagation in realistic
porous media. | S. A. Hosseini, Dominique Thevenin | 2023-04-12T07:23:01 | http://arxiv.org/abs/2304.05657v1 | Towards pore-scale simulation of combustion in porous media using a low-Mach hybrid lattice Boltzmann/finite difference solver
###### Abstract
A hybrid numerical model previously developed for combustion simulations is extended in this article to describe flame propagation and stabilization in porous media. The model, with a special focus on flame/wall interaction processes, is validated via corresponding benchmarks involving flame propagation in channels with both adiabatic and constant-temperature walls. Simulations with different channel widths show that the model can correctly capture the changes in flame shape and propagation speed as well as the dead zone and quenching limit, as found in channels with cold walls. The model is further assessed considering a pseudo 2-D porous burner involving an array of cylindrical obstacles at constant temperature, investigated in a companion experimental study. Furthermore, the model is used to simulate pore-scale flame dynamics in a randomly-generated 3-D porous medium. Results are promising, opening the door for future simulations of flame propagation in realistic porous media.
## I Introduction
Rapid depletion of fossil fuel resources and related pollutant emissions are a consequence of their widespread and abundant use in most areas of industry and technology [1; 2; 3]. Motivated by these two issues, the search for more efficient and eco-friendly energy production technologies and their implementation at the industrial level is growing by the day. Combustion in porous media has been proven to be one promising route to tackle some of the previously-cited challenges. For burners, the concept of porous media can result in high power densities, increased power dynamic range, and low emissions of NO and CO\({}_{2}\)[4]. This is, for the most part, the consequence of the presence of a solid porous matrix which has higher levels of heat capacity, conductivity, and emissivity as compared to the gaseous phase. The concept of combustion in porous media is also present in other eco-friendly technologies, for instance in packed bed reactors with Chemical Looping Combustion that allow for efficient separation of CO\({}_{2}\)[5; 6]. Similar challenges involving intense flame/wall interactions are faced in meso- and micro-combustion found in corresponding burners developed within the context of micro electro-mechanical systems [7; 8]. Given the pronounced impact of flame/solid interactions, the further development of such technologies requires a better understanding of flame/wall interaction dynamics. For this purpose, it is essential to develop numerical models that are able to properly capture such physics with a sufficient level of accuracy.
The topic of flame/wall interaction has been tackled in a variety of articles in the past decades, starting with investigations of head-on quenching [9], mostly to quantify wall heat flux [10]. Such interesting investigations have been going on up to now, involving additional configurations and aspects as well as a variety of fuels [11; 12]. Even more relevant for the present investigations are flames propagating in narrow channels. Corresponding publications and results presented therein point to the very rich physics of the flame front when propagating in such a channel, see for instance [13; 14; 15; 16]. Depending on the ratio of the channel diameter to the flame thickness and on the type of thermal boundary condition at the wall the flame front can take on a wide variety of shapes, most notably, the so-called tulip shape [16]. Extending further this line of research, flame propagation within porous media has also been studied with different levels of complexity, starting with academic configurations in [17]. These preliminary studies led the authors to the conclusion that in the context of flame propagation in porous media, different flame propagation speeds exist, which is in agreement with the different propagation modes observed for flame propagation in channels. While volume-averaged approaches appear to be a cost-efficient tool for simulations of large-size, realistic systems, these observations clearly show the necessity of direct pore-scale simulations for a better understanding of the interaction process.
To the authors' knowledge, apart from [18] where the authors model flame propagation in straight channels and [19] where authors discuss specifically coal combustion, all studies targeting combustion applications in porous media and configurations dominated by flame/wall interactions have been carried out using classical, discrete solvers for the Navier-Stokes-Fourier equations, coupled to balance equations for the individual species. In the low-Mach number limit, to alleviate the limitation in time-step resulting from the presence of acoustic modes, most such solvers rely on the so-called zero-Mach approximation [20], which by virtue of the Helmholtz decomposition of the velocity field brings the Poisson equation into the scheme, see for instance [21]. The elliptic Poisson equation is well-known to be the computational bottleneck of incompressible Navier-Stokes models. To solve this issue, different approaches such as Chorin's artificial compressibility method (ACM) [22] replacing the Poisson equation with a hyperbolic equation for the pressure have been proposed for incompressible flows.
The lattice Boltzmann method (LBM), which emerged in the literature in the late 80's [23], has now achieved widespread success. This is in particular due to the fully hyperbolic nature
of all involved equations. In addition, and as an advantage over ACM, normal acoustic modes are also subject to dissipation and, therefore, are governed by a parabolic partial differential equation allowing the LBM to efficiently tackle unsteady flows. Following up on the same idea, we recently proposed an algorithm for low-Mach thermo-compressible flows based on the lattice Boltzmann method [24; 25; 26]. Different from other LBM approaches proposed in recent years for combustion simulation [18; 19; 27], this scheme is specifically tailored for the low-Mach regime. While this model has been successfully used for large-eddy simulations (LES) of flames in complex geometries, in particular swirl burners [28], detailed interactions between flame fronts and walls have not been considered in detail up to now, since they did not play a central role for the considered systems.
In this study a corresponding validation of the solver is proposed, including boundary conditions for curved walls. Configurations of increasing complexity are considered, such as flame propagation in narrow channels of different widths involving different thermal boundary conditions, as well as combustion in a reference 2-D packed bed reactor corresponding to a companion experimental study. Note that the so-called pores considered in the present study are large, being indeed inter-particle spaces at the millimeter or centimeter scale, and not restricted to a few micrometers, as found in many other applications. In this article, the terms pore and inter-particle space are used interchangeably to designate the same configuration.
After a brief refresher of the model itself, along with its multiple-relaxation-time (MRT) cumulant realization, a discussion of the boundary conditions is proposed for both the lattice Boltzmann and the finite-difference (FD) solvers. Afterwards, results from the different validation cases are presented and discussed, before the conclusions.
## II Theoretical background
### Governing equations
The model used here and detailed in the next subsections targets the low-Mach approximation to describe thermo-compressible reacting flows [29]. The species mass balance equation reads in non-conservative form:
\[\partial_{t}Y_{k}+\mathbf{u}\cdot\mathbf{\nabla}Y_{k}+\frac{1}{\rho}\mathbf{\nabla}\cdot \rho\mathbf{V}_{k}Y_{k}=\frac{\omega_{k}}{\rho}, \tag{1}\]
where \(Y_{k}\) is the \(k^{\text{th}}\) species mass fraction, \(\rho\) the local density, \(\mathbf{u}\) the mixture velocity, and \(\omega_{k}\) the source term due to chemical reactions. The mass flux due to diffusion, \(Y_{k}\mathbf{V}_{k}\), is given by:
\[Y_{k}\mathbf{V}_{k}=-\frac{D_{k}W_{k}}{W}\mathbf{\nabla}X_{k}+Y_{k}\sum_{k^{\prime}=1} ^{N_{\text{sp}}}\frac{D_{k^{\prime}}W_{k^{\prime}}}{W}\mathbf{\nabla}X_{k^{\prime}} \tag{2}\]
where \(X_{k}\), \(W_{k}\) and \(D_{k}\) are respectively the \(k^{\text{th}}\) species mole fraction, molar mass and mixture-averaged diffusion coefficient. \(W\) is the mixture molar mass. The second term corresponds to the correction velocity ensuring local conservation of total mass (i.e., \(\sum_{k=1}^{N_{\text{sp}}}Y_{k}V_{k}=0\)).
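As an illustration, a minimal numerical sketch of Eq. (2) at a single grid point could look as follows; the array names and values are hypothetical, and in an actual solver the mixture-averaged coefficients would be provided by a thermochemistry library such as Cantera.

```python
import numpy as np

def diffusion_fluxes(Y, D, W_k, W, dXdx):
    """Mixture-averaged diffusion fluxes Y_k V_k at one grid point (Eq. 2), 1-D sketch."""
    fick = -D * W_k / W * dXdx                       # Fickian part: -D_k W_k/W * dX_k/dx
    correction = Y * np.sum(D * W_k / W * dXdx)      # correction-velocity term
    return fick + correction

# Hypothetical 3-species state (mass fractions sum to one)
Y = np.array([0.15, 0.35, 0.50])
D = np.array([2.1e-5, 1.8e-5, 2.4e-5])               # mixture-averaged diffusivities, m^2/s
W_k = np.array([0.016, 0.032, 0.028])                # molar masses, kg/mol
W = 1.0 / np.sum(Y / W_k)                            # mixture molar mass
dXdx = np.array([10.0, -4.0, -6.0])                  # mole-fraction gradients, 1/m

flux = diffusion_fluxes(Y, D, W_k, W, dXdx)
print(flux, flux.sum())                              # the fluxes sum to (numerically) zero
```

Because of the correction term, the fluxes sum to zero exactly whenever \(\sum_{k}Y_{k}=1\), which is the local mass-conservation property mentioned above.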
The momentum balance equation (Navier-Stokes) reads:
\[\partial_{t}(\rho\mathbf{u})+\mathbf{\nabla}\cdot(\rho\mathbf{u}\otimes\mathbf{u})+\mathbf{\nabla }\cdot\mathbf{S}=0, \tag{3}\]
where the stress is:
\[\mathbf{S}=P_{h}\mathbf{I}-\mu\left(\mathbf{\nabla}\mathbf{u}+\mathbf{\nabla}\mathbf{u}^{t}-\frac{2} {D}\mathbf{\nabla}\cdot\mathbf{u}\mathbf{I}\right)-\eta\mathbf{\nabla}\cdot\mathbf{u}\mathbf{I}. \tag{4}\]
in which \(\mu\) and \(\eta\) are the mixture-averaged dynamic and bulk viscosity coefficients and \(P_{h}\) is the hydrodynamic pressure tied to the total pressure as \(P=P_{0}+P_{h}\), with \(P_{0}\) the uniform thermodynamic pressure. The employed closure for the hydrodynamic pressure \(P_{h}\) reads:
\[\frac{1}{\rho c_{s}^{2}}\partial_{t}P_{h}+\mathbf{\nabla}\cdot\mathbf{u}=\Lambda, \tag{5}\]
where \(c_{s}\) is the characteristic propagation speed of normal modes, also known as the sound speed. Unlike in a truly compressible model, \(c_{s}\) here is not necessarily the physical sound speed. Using the continuity equation and the ideal-gas mixture equation of state, one gets:
\[\Lambda=\frac{\partial_{t}T+\mathbf{u}\cdot\mathbf{\nabla}T}{T}+\sum_{k=1}^{N_{\text{ sp}}}\frac{W}{W_{k}}\left(\partial_{t}Y_{k}+\mathbf{u}\cdot\mathbf{\nabla}Y_{k}\right). \tag{6}\]
Finally the energy balance equation is given by
\[\rho c_{p}\left(\partial_{t}T+\mathbf{u}\cdot\mathbf{\nabla}T\right)-\mathbf{\nabla}\cdot\left(\lambda\mathbf{\nabla}T\right)+\rho\left(\sum_{k=1}^{N_{\text{sp}}}c_{p_{k}}Y_{k}\mathbf{V}_{k}\right)\cdot\mathbf{\nabla}T=\omega_{T}, \tag{7}\]
where \(c_{p_{k}}\) and \(c_{p}\) are respectively the \(k^{\text{th}}\) species and the mixture specific heat capacities, and \(\lambda\) is the thermal conductivity.
One point to be noted is the difference between the current low-Mach set of equations and the zero-Mach model of Majda and the low-Mach model of Toutant: setting \(c_{s}\) to be the real sound speed in Eq. 5 reduces it to that of [30], but now for a multi-species reacting system. On the other hand, in the limit \(c_{s}\rightarrow\infty\) one ends up with Majda's zero-Mach limit [20], i.e. \(\mathbf{\nabla}\cdot\mathbf{u}=\Lambda\). A detailed perturbation analysis of this system would be interesting but is left for future publications. In the next section the lattice Boltzmann model used to recover the corresponding hydrodynamic limit is briefly introduced.
### Lattice Boltzmann model
To solve the low-Mach aerodynamic equations, we use a lattice Boltzmann model that we have developed in previous works [24; 25; 26]:
\[g_{i}(\mathbf{r}+c_{i}\delta\mathbf{r},t+\delta t)-g_{i}(\mathbf{r},t)=\Omega_{i}+\delta t \Xi_{i}, \tag{8}\]
where \(g_{i}\) are discrete populations, \(\mathbf{c}_{i}\) corresponding discrete velocities, \(\mathbf{r}\) and \(t\) the position in space and time, \(\delta t\) the time-step size and
\[\Xi_{i}=c_{s}^{2}\left(f_{i}^{\text{eq}}/\rho-w_{i}\right)\left(\mathbf{c}_{i}-\mathbf{ u}\right)\cdot\mathbf{\nabla}\rho+w_{i}\rho c_{s}^{2}\Lambda. \tag{9}\]
Here, \(W_{k}\) is the molar mass of species \(k\) and \(W\) the average molar mass, \(N_{\text{sp}}\) the number of species, \(w_{i}\) the weights associated to each discrete velocity in the lattice Boltzmann solver and \(c_{s}\) the lattice sound speed tied to the time-step and grid size \(\delta r\) as \(c_{s}=\delta r/(\sqrt{3}\,\delta t)\). The equilibrium distribution function, \(f_{i}^{\text{eq}}\), is given by:
\[f_{i}^{\text{eq}}=w_{i}\rho\left(1+\frac{\mathbf{c}_{i}\cdot\mathbf{u}}{c_{s}^{2}}+ \frac{(\mathbf{c}_{i}\cdot\mathbf{u})^{2}}{2c_{s}^{4}}-\frac{\mathbf{u}^{2}}{2c_{s}^{2}} \right). \tag{10}\]
The collision term \(\Omega_{i}\) is defined as:
\[\Omega_{i}=-\omega_{s}\left(g_{i}-g_{i}^{\text{eq}}\right), \tag{11}\]
where
\[g_{i}^{\text{eq}}=w_{i}(P_{h}-\rho c_{s}^{2})+c_{s}^{2}f_{i}^{\text{eq}}, \tag{12}\]
and \(P_{h}\) is the hydrodynamic pressure. In the present study first-neighbour stencils based on third-order quadratures are used, i.e. D2Q9 and D3Q27. The hydrodynamic pressure and momentum are computed as moments of the distribution function \(g_{i}\):
\[P_{h} =\sum_{i=1}^{Q}g_{i}+\frac{\delta t}{2}\rho c_{s}^{2}\Lambda, \tag{13a}\] \[\rho\,\mathbf{u} =\frac{1}{c_{s}^{2}}\sum_{i=1}^{Q}c_{i}g_{i}. \tag{13b}\]
This lattice Boltzmann model recovers the previously introduced pressure evolution equation along with the Navier-Stokes equation. In the viscous stress tensor deviations from Galilean invariance are limited to third order.
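To make the structure of Eqs. (10), (12) and (13) concrete, a minimal D2Q9 sketch is given below; all names and test values are illustrative and are not taken from the ALBORZ implementation, and lattice units (\(\delta r=\delta t=1\), \(c_{s}^{2}=1/3\)) are assumed.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights (lattice units, c_s^2 = 1/3)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0

def f_eq(rho, u):
    """Second-order equilibrium f_i^eq of Eq. (10) at a single point."""
    cu = c @ u
    return w * rho * (1.0 + cu/cs2 + cu**2/(2*cs2**2) - (u @ u)/(2*cs2))

def g_eq(rho, u, Ph):
    """Modified equilibrium g_i^eq of Eq. (12), carrying the hydrodynamic pressure."""
    return w * (Ph - rho*cs2) + cs2 * f_eq(rho, u)

def moments(g, rho, Lam, dt=1.0):
    """Hydrodynamic pressure and momentum recovered as moments of g_i (Eq. 13)."""
    Ph = g.sum() + 0.5*dt*rho*cs2*Lam
    rho_u = (c.T @ g) / cs2
    return Ph, rho_u

# Consistency check: moments of the equilibrium give back P_h and rho*u (Lambda = 0 here)
rho, u, Ph = 1.2, np.array([0.05, -0.02]), 0.01
print(moments(g_eq(rho, u, Ph), rho, Lam=0.0))
```

Evaluating the moments of the equilibrium populations returns the prescribed \(P_{h}\) and \(\rho\mathbf{u}\), which is the consistency property exploited by the scheme.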
### Implementation of the Multiple Relaxation Times (MRT) collision operator
In the context of the present study, following our proposals for both multi-phase and multi-species flows [31; 28], the Cumulants-based operator is used [32]. The post-collision populations \(g_{i}^{*}\) are computed as:
\[g_{i}^{*}=\rho c_{s}^{2}{f_{i}^{{}^{\prime}}}^{*}+\frac{\delta t}{2}\Xi_{i}, \tag{14}\]
where the post-collision pre-conditioned populations \({f_{i}^{{}^{\prime}}}^{*}\) are:
\[{f_{i}^{{}^{\prime}}}^{*}=\mathcal{M}^{-1}\left(\mathcal{I}-\mathcal{W}\right)\mathcal{K}^{{}^{\prime}}+\mathcal{M}^{-1}\mathcal{W}\mathcal{K}^{{}^{\prime}\,\text{eq}}, \tag{15}\]
In this equation, \(\mathcal{M}\) is the moments transform matrix from pre-conditioned populations to the target momentum space, \(\mathcal{I}\) the identity matrix and \(\mathcal{W}\) the diagonal relaxation frequencies matrix
\[\mathcal{W}=\text{diag}(\omega_{0},\omega_{x},\omega_{y},...,\omega_{\text{ xxyzz}}), \tag{16}\]
where the operator diag is defined as:
\[\text{diag}(\mathbf{A})=(\mathbf{A}\otimes\mathbf{1})\circ\mathcal{I}, \tag{17}\]
with \(\mathbf{A}\) a given vector and \(\mathbf{1}\) a vector with all elements equal to 1. The relaxation frequencies of the second-order shear moments, e.g. \(xy\) (here denoted \(\omega_{s}\) for the sake of readability), are defined as:
\[\omega_{s}=\left(\frac{\nu}{c_{s}^{2}\delta t}+\frac{1}{2}\right)^{-1}, \tag{18}\]
where \(\nu\) is the local effective kinematic viscosity. Prior to transformation to momentum space the populations are preconditioned as:
\[f_{i}^{\prime}=\frac{1}{\rho c_{s}^{2}}g_{i}+\frac{\delta t}{2\rho c_{s}^{2}} \Xi_{i}. \tag{19}\]
This pre-conditioning accomplishes two tasks: (1) normalizing the populations with the density, thus eliminating the density-dependence of the moments, and (2) introducing the first half of the source term. The moments \(\mathcal{K}\) are then computed as:
\[\mathcal{K}_{j}^{{}^{\prime}}=\mathcal{M}_{ij}f_{i}^{{}^{\prime}}. \tag{20}\]
The Cumulants \(\mathcal{K}_{j}\) are computed from the central moments of the distribution function, these central moments being defined as:
\[\widetilde{\Pi}_{x^{p}y^{q}z^{r}}^{{}^{\prime}}=\sum_{i}\left(c_{i,x}-u_{x}\right)^{p}\left(c_{i,y}-u_{y}\right)^{q}\left(c_{i,z}-u_{z}\right)^{r}f_{i}^{{}^{\prime}}. \tag{21}\]
As noted in [32], up to order three the Cumulants are identical to their central-moment counterparts. At higher orders they are computed as:
\[\mathcal{K}_{xxyz}^{{}^{\prime}}=\widetilde{\Pi}_{xxyz}^{{}^{\prime}}-\widetilde{\Pi}_{xx}^{{}^{\prime}}\widetilde{\Pi}_{yz}^{{}^{\prime}}-2\widetilde{\Pi}_{xy}^{{}^{\prime}}\widetilde{\Pi}_{xz}^{{}^{\prime}}, \tag{22a}\]
\[\mathcal{K}_{xxyy}^{{}^{\prime}}=\widetilde{\Pi}_{xxyy}^{{}^{\prime}}-\widetilde{\Pi}_{xx}^{{}^{\prime}}\widetilde{\Pi}_{yy}^{{}^{\prime}}-2\widetilde{\Pi}_{xy}^{{}^{\prime}2}, \tag{22b}\]
\[\mathcal{K}_{xyyz}^{{}^{\prime}}=\widetilde{\Pi}_{xyyz}^{{}^{\prime}}-\widetilde{\Pi}_{yy}^{{}^{\prime}}\widetilde{\Pi}_{xz}^{{}^{\prime}}-2\widetilde{\Pi}_{xy}^{{}^{\prime}}\widetilde{\Pi}_{yz}^{{}^{\prime}}, \tag{22c}\]
together with the corresponding permutations and the fifth- and sixth-order expressions detailed in [32]. Each Cumulant is then relaxed toward its equilibrium value,
\[\mathcal{K}_{j}^{{}^{\prime}*}=\left(1-\omega_{j}\right)\mathcal{K}_{j}^{{}^{\prime}}+\omega_{j}\mathcal{K}_{j}^{{}^{\prime}\,\text{eq}}, \tag{23}\]
with \(\omega_{j}\) the relaxation frequency of Cumulant \(j\). After collision, the Cumulants \(\mathcal{K}_{j}^{{}^{\prime}*}\) have to be transformed back into populations \(f_{i}^{{}^{\prime}*}\). The first step, as for the forward transformation, is to get the corresponding central moments. Given that up to order three central moments and Cumulants are the same, we only give here the backward transformation of the higher-order moments:
\[\widetilde{\Pi}_{xxyz}^{{}^{\prime}*}=\mathcal{K}_{xxyz}^{{}^{\prime}*}+\widetilde{\Pi}_{xx}^{{}^{\prime}*}\widetilde{\Pi}_{yz}^{{}^{\prime}*}+2\widetilde{\Pi}_{xy}^{{}^{\prime}*}\widetilde{\Pi}_{xz}^{{}^{\prime}*}, \tag{24a}\]
\[\widetilde{\Pi}_{xxyy}^{{}^{\prime}*}=\mathcal{K}_{xxyy}^{{}^{\prime}*}+\widetilde{\Pi}_{xx}^{{}^{\prime}*}\widetilde{\Pi}_{yy}^{{}^{\prime}*}+2\widetilde{\Pi}_{xy}^{{}^{\prime}*2}, \tag{24b}\]
\[\widetilde{\Pi}_{xyyz}^{{}^{\prime}*}=\mathcal{K}_{xyyz}^{{}^{\prime}*}+\widetilde{\Pi}_{yy}^{{}^{\prime}*}\widetilde{\Pi}_{xz}^{{}^{\prime}*}+2\widetilde{\Pi}_{xy}^{{}^{\prime}*}\widetilde{\Pi}_{yz}^{{}^{\prime}*}, \tag{24c}\]
with analogous expressions, detailed in [32], for the remaining fourth-, fifth- and sixth-order moments.
Once central moments have been obtained the inverse of the central moments transform tensor is used to compute the corresponding populations.
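As an illustration of the forward transformation, the raw central moments of Eq. (21) can be evaluated directly from the populations; the short sketch below (hypothetical names, D3Q27 velocities generated as a tensor product, lattice units) shows this for a single grid point.

```python
import numpy as np
from itertools import product

# D3Q27 first-neighbour velocity set
c = np.array(list(product((-1, 0, 1), repeat=3)), dtype=float)

def central_moment(f, u, p, q, r):
    """Raw central moment Pi~_{x^p y^q z^r} of the populations f (Eq. 21)."""
    d = c - u                                  # peculiar velocities c_i - u
    return np.sum(d[:, 0]**p * d[:, 1]**q * d[:, 2]**r * f)

# Hypothetical, normalized populations at one node
rng = np.random.default_rng(0)
f = rng.random(27)
f /= f.sum()
u = c.T @ f                                    # with sum(f) = 1, u is simply sum_i c_i f_i

print(central_moment(f, u, 1, 0, 0))           # first-order central moments vanish (round-off)
print(central_moment(f, u, 2, 0, 0))           # second-order central moment Pi~_xx
```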
### Solver for species and energy balance equations
In the context of the present study the species and energy balance laws (Eqs. 1 and 7) are solved using finite differences. To prevent the formation of Gibbs oscillations at sharp interfaces, convective terms are discretized using a third-order weighted essentially non-oscillatory (WENO) scheme while diffusion terms are treated via a fourth-order central scheme. Near boundary nodes, to prevent any nonphysical interaction of the smoothness indicator with ghost nodes, a centered second-order scheme is used to discretize the convection term. Global mass conservation of the species balance equation, i.e. \(\sum_{k}Y_{k}=1\), while naturally satisfied for classical discretizations of the convection term, for instance in 1-D:
\[\frac{u_{x}}{2\delta r}\sum_{k}\left[Y_{k}(x+\delta r)-Y_{k}(x-\delta r) \right]=0. \tag{25}\]
is not necessarily satisfied for WENO schemes, as the coefficients weighing the contributions of each stencil are not the same for all species. To guarantee conservation of the overall mass, the concept of a correction speed is used as for the diffusion model: representing the discretization via an operator \(\mathcal{L}\), the discrete convection term is computed as:
\[\mathbf{u}\cdot\mathbf{\nabla}Y_{k}=\mathbf{u}\cdot\left[\mathcal{L}(\mathbf{\nabla}Y_{k})-Y _{k}\sum_{k^{\prime}}\mathcal{L}(\mathbf{\nabla}Y_{k^{\prime}})\right] \tag{26}\]
which - once summed up over all species - gives:
\[\sum_{k}\mathbf{u}\cdot\mathbf{\nabla}Y_{k}=\mathbf{u}\cdot\left[\sum_{k}\mathcal{L}(\mathbf{ \nabla}Y_{k})-\sum_{k^{\prime}}\mathcal{L}(\mathbf{\nabla}Y_{k^{\prime}})\right]=0. \tag{27}\]
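A minimal sketch of this correction, with a simple first-order upwind stencil standing in for the actual WENO operator \(\mathcal{L}\) (all names and values are hypothetical), illustrates that the corrected convection terms sum to zero whenever \(\sum_{k}Y_{k}=1\):

```python
import numpy as np

def upwind_gradient(Y, u, dx):
    """Stand-in for the discretization operator L applied to every species (1-D)."""
    if u >= 0.0:
        return (Y[:, 1] - Y[:, 0]) / dx        # backward difference at the central point
    return (Y[:, 2] - Y[:, 1]) / dx            # forward difference

def corrected_convection(Y, u, dx):
    """Convection term of Eq. (26): u * [L(Y_k) - Y_k * sum_k' L(Y_k')]."""
    L = upwind_gradient(Y, u, dx)              # one approximate gradient per species
    return u * (L - Y[:, 1] * L.sum())

# Hypothetical three-species stencil (columns: x - dr, x, x + dr); columns sum to one
Y = np.array([[0.10, 0.12, 0.15],
              [0.30, 0.28, 0.25],
              [0.60, 0.60, 0.60]])
conv = corrected_convection(Y, u=0.8, dx=1e-4)
print(conv, conv.sum())                        # the corrected terms sum to zero
```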
All equations are discretized in time using a first-order Euler approach. Transport and thermodynamic properties of the mixture along with the kinetic scheme are taken into account via the open-source library Cantera, coupled to our in-house solver ALBORZ [25]. Details of the coupling can be found in [33].
## III Boundary conditions
### Lattice Boltzmann solver
In the context of the present study three types of boundary conditions are needed for the lattice Boltzmann solver, namely wall, inflow, and outflow boundary conditions. A brief overview of these boundary conditions is given in what follows.
Solid boundaries are modeled using the half-way bounce-back scheme. For this purpose, missing populations are computed as [34]:
\[f_{\bar{i}}\left(\mathbf{r},t+\delta t\right)=f_{i}^{*}\left(\mathbf{r},t\right), \tag{28}\]
where \(f_{i}^{*}\) is the post-collision population (prior to streaming) and \(\bar{i}\) is the index of the particle velocity opposite that of \(i\). To take into account wall curvature the interpolated half-way bounce back approach is used [35]. At a given boundary node \(\mathbf{r}_{f}\), the missing incoming populations are computed as:
\[f_{\bar{i}}(\mathbf{r}_{f},t+\delta t)=2qf_{i}(\mathbf{r}_{f}+\mathbf{c}_{i},t+\delta t)+(1-2q)f_{i}(\mathbf{r}_{f},t+\delta t),\quad\forall q<\frac{1}{2}, \tag{29a}\]
\[f_{\bar{i}}(\mathbf{r}_{f},t+\delta t)=\frac{1}{2q}f_{i}(\mathbf{r}_{f}+\mathbf{c}_{i},t+\delta t)+\frac{2q-1}{2q}f_{i}(\mathbf{r}_{f},t+\delta t),\quad\forall q\geq\frac{1}{2}, \tag{29b}\]
where \(\bar{i}\) designates the direction opposite \(i\) and \(q\) reads:
\[q=\frac{||\mathbf{r}_{f}-\mathbf{r}_{s}||}{||\mathbf{c}_{i}||}, \tag{30}\]
with \(\mathbf{r}_{s}\) denoting the wall position along direction \(i\).
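For a single boundary link, a literal transcription of Eqs. (29)-(30) is shown below; the function and variable names are hypothetical, and obtaining the populations at \(\mathbf{r}_{f}+\mathbf{c}_{i}\) (which lie on the solid side and correspond to post-collision values prior to streaming) is handled by the solver and not reproduced here.

```python
import numpy as np

def wall_distance_fraction(r_f, r_s, c_i):
    """q of Eq. (30): normalized distance from the boundary node to the wall along c_i."""
    return np.linalg.norm(r_f - r_s) / np.linalg.norm(c_i)

def interpolated_bounce_back(q, f_at_rf, f_at_rf_plus_ci):
    """Missing population at the boundary node for one link, following Eq. (29)."""
    if q < 0.5:
        return 2.0*q*f_at_rf_plus_ci + (1.0 - 2.0*q)*f_at_rf
    return f_at_rf_plus_ci/(2.0*q) + (2.0*q - 1.0)/(2.0*q)*f_at_rf

# Hypothetical link: the wall sits 30% of the way toward the neighbouring (solid) node
r_f, r_s, c_i = np.array([10.0, 5.0]), np.array([10.3, 5.0]), np.array([1.0, 0.0])
q = wall_distance_fraction(r_f, r_s, c_i)
print(q, interpolated_bounce_back(q, f_at_rf=0.11, f_at_rf_plus_ci=0.13))
```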
For inlet boundary conditions a modified version of the half-way bounce-back scheme is used to impose a target inlet velocity vector \(\mathbf{u}_{\text{in}}\). To that end the missing populations are computed as:
\[f_{\bar{i}}\left(\mathbf{r},t+\delta t\right)=f_{i}^{*}\left(\mathbf{r},t\right)+\left(w_{i}+w_{\bar{i}}\right)\rho_{\text{in}}\mathbf{u}_{\text{in}}\cdot\mathbf{c}_{i}. \tag{31}\]
In addition to velocity boundary conditions, a modified non-reflecting version of the zero-gradient boundary condition is also employed [34] at the outlet, as first introduced in [32].
The missing populations at the outflow boundary are defined as:
\[f_{i}\left(\mathbf{r},t+\delta t\right)=f_{i}\left(\mathbf{r}-\mathbf{n}\delta r,t\right)\left(c_{s}-\mathbf{u}(\mathbf{r},t)\cdot\mathbf{n}\right)+f_{i}\left(\mathbf{r},t\right)\left(\frac{\delta r}{\delta t}-c_{s}+\mathbf{u}(\mathbf{r},t)\cdot\mathbf{n}\right), \tag{32}\]
where \(\mathbf{n}\) is the outward-pointing unit vector normal to the boundary surface.
### Energy and Species fields
In addition to the boundary conditions applied to the discrete populations, appropriate measures also have to be taken for the macroscopic fields, since the model involves derivatives of macroscopic properties such as density.
For the finite-difference solver and all terms involving this approximation, the boundary conditions are implemented via the image/ghost node method [36; 37; 38]. Representing the macroscopic parameter of interest with the generic variable \(\phi\), for a Dirichlet boundary condition for instance, one would have:
\[\phi(B)=\phi_{B}, \tag{33}\]
where \(B\) refers to the position of the boundary, shown in Fig. 1. The _virtual_ field value \(\phi(G)\) in the ghost node, the discrete grid-point outside the fluid domain neighboring the boundary (Fig. 1) is computed as:
\[\phi(G)=2\phi_{B}-\phi(I), \tag{34}\]
where \(I\) is the image point in the fluid domain placed such that \(\overline{GB}=\overline{BI}\) with both line segments perpendicular to the boundary interface. Since the image node does not necessarily fall on a grid-point it is reconstructed using data from neighboring grid points. For the reconstruction process to be robust with respect to the wall geometry, Shepard's inverse distance weighting is used [39]:
\[\phi(r_{I})=\sum_{j=1}^{N}w_{j}\phi(r_{j}), \tag{35}\]
with:
\[w_{j}=\frac{d(r_{I},r_{j})^{-p}}{\sum_{j^{\prime}=1}^{N}d(r_{I},r_{j^{\prime}})^{-p}}, \tag{36}\]
where \(d(r_{I},r_{j})\) is the distance between points \(I\) and \(j\) and \(p\) is a free parameter typically set to \(p=2\). Note that:
\[\sum_{j}w_{j}=1. \tag{37}\]
In order to obtain good precision, the field reconstruction at image points considers all fluid nodes neighboring \(I\) such that:
\[d(r_{I},r_{j})\leq 4\delta r, \tag{38}\]
which comes at the additional cost of a wider data exchange layer between cores during parallelization.
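A compact sketch of the ghost-node reconstruction of Eqs. (34)-(38) for a Dirichlet condition is given below; the geometry handling (locating \(B\), \(G\) and \(I\)) is reduced to its simplest expression and all numerical values are purely illustrative.

```python
import numpy as np

def shepard_value(r_I, nodes, values, p=2, radius=4.0):
    """Inverse-distance-weighted reconstruction at the image point I (Eqs. 35-38)."""
    d = np.linalg.norm(nodes - r_I, axis=1)
    mask = d <= radius                 # only fluid nodes within 4*dr are used (Eq. 38)
    wgt = d[mask]**(-p)
    wgt /= wgt.sum()                   # weights sum to one (Eq. 37)
    return np.dot(wgt, values[mask])

def ghost_value(phi_B, phi_I):
    """Dirichlet ghost-node value of Eq. (34): phi(G) = 2*phi_B - phi(I)."""
    return 2.0*phi_B - phi_I

# Hypothetical 2-D configuration, grid spacing dr = 1 in these units
r_I = np.array([2.3, 1.7])                                   # image point in the fluid
fluid_nodes = np.array([[2., 1.], [3., 1.], [2., 2.], [3., 2.], [4., 2.]])
fluid_T = np.array([310., 420., 335., 465., 510.])           # e.g. temperatures, K

phi_I = shepard_value(r_I, fluid_nodes, fluid_T)
print(ghost_value(phi_B=300.0, phi_I=phi_I))                 # virtual value at the ghost node
```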
Note that terms involving second-order derivatives such as the diffusion term in the energy and species balance equations also require an interpolation/reconstruction process on the diffusion coefficient. To avoid non-physical values, instead of using the previously computed properties, the coefficients at the ghost nodes are computed by applying the interpolation/reconstruction procedure directly to the transport properties.
## IV Validations and results
### Premixed laminar flame acceleration in 2-D channels
The proper interaction of flames with different wall boundary conditions (isothermal, adiabatic, known heat flux) while enforcing the no-slip condition for the flow is probably the most important step when extending a combustion solver to porous media applications. To that end, the propagation of premixed flames in narrow 2-D channels is first considered to verify that the proposed solver correctly captures the different flame front regimes.
Two configurations are considered: (a) adiabatic and (b) constant-temperature channel walls. Given that the width of the channel, here written \(H\), plays an important role in controlling the flame front shape, heat exchange, as well as propagation speed, different cases with different channel widths have been computed. All configurations involve 2-D channels of height \(H\) and length \(L=20H\). At the inflow (left end of the domain) a stoichiometric mixture of methane/air at temperature \(T_{\text{in}}=300\) K is injected. The flow rate is dynamically set throughout all simulations to match the flame propagation speed, so as to ensure a globally static flame front within the numerical domain. The top and bottom boundaries are set to no-slip walls with either constant temperature, i.e. \(T_{w}=T_{\text{in}}\), or adiabatic boundary conditions for the temperature field. At the outlet a constant-pressure boundary condition is used. Note that for the inlet a 2-D Poiseuille distribution satisfying the target mass flow rate is implemented. All simulations are initialized with profiles from the steady solution of a 1-D methane/air flame, with the flame placed half-way in the domain, supplemented with the velocity distribution at the inlet.
Figure 1: Illustration of the ghost/image node approach for boundary conditions. The red point \(G\) is outside the flow domain while green points are inside.
_1-D free flame properties._ As a first step, pseudo 1-D free flame simulations were run using either ALBORZ (coupled to Cantera) or Cantera (as a standalone tool) with the BFER-2 two-step kinetic mechanism [7]. The results obtained with both codes have been compared, as illustrated in Fig. 2; the agreement is perfect for all species and all quantities. For this case, experimental measurements led to a flame propagation speed of \(S_{F}=0.404\) m/s [7], in excellent agreement with both solvers; ALBORZ predicts a laminar flame speed of 0.408 m/s.
Furthermore, to have a clear indication regarding resolution requirements, the thermal thickness \(\delta_{T}\) defined as:
\[\delta_{T}=\frac{T_{\mathrm{ad}}-T_{\mathrm{in}}}{\max\left|\mathrm{d}T/\mathrm{d}x\right|}\,, \tag{39}\]
where \(T_{\mathrm{ad}}\) is the adiabatic flame temperature, was also computed. Simulations with ALBORZ led to \(\delta_{T}=328\)\(\mu\)m, which is in very good agreement with the value reported in [40]. This indicates that for fully resolved simulations one should use \(\delta r<35\)\(\mu\)m, in order to get 10 grid points within the flame front. For all channel simulations conducted in the present section \(\delta r=20\)\(\mu\)m has been set. While larger grid sizes would be sufficient for resolved simulations, as will be seen in the next section, here we use a smaller grid size to properly resolve the width of the smallest channel. Considering additionally the characteristic speed in the system, the time-step size was then fixed to \(\delta t=7.5\times 10^{-8}\) s, also satisfying all stability conditions regarding Fourier and CFL numbers for the hybrid solver.
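For reference, the resolution estimate above amounts to a few lines of post-processing on a 1-D flame profile; in the sketch below a hyperbolic-tangent profile merely stands in for an actual flame solution.

```python
import numpy as np

# Stand-in 1-D temperature profile across a flame front (tanh shape, illustrative values)
x = np.linspace(-3e-3, 3e-3, 2001)                        # m
T_in, T_ad = 300.0, 2230.0                                 # K
T = T_in + 0.5*(T_ad - T_in)*(1.0 + np.tanh(x/1.5e-4))

# Thermal thickness of Eq. (39), based on the maximum temperature gradient
delta_T = (T_ad - T_in) / np.max(np.gradient(T, x))
print(delta_T, delta_T/10.0)   # flame thickness and the grid size giving ~10 points in the front
```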
_Adiabatic walls._ For the first set of simulations the walls are set to be adiabatic. Three different channel widths are considered, i.e. \(H\in\{0.4,\,1,\,3\}\) mm. Simulations were conducted until the system reached steady state. Then, flame propagation speeds computed from the mass flow rate as well as flame shapes were extracted. The results are compared to those from [40] for validation.
Starting from channels with widths comparable to the flame thickness (top part of Fig. 3), deformations of the flame front due to the Poiseuille velocity profile are minimal. As the channel grows in width (from top to bottom in Fig. 3) one observes more and more pronounced deformations at the center of the channel, effectively increasing the surface of the flame front. With more elongated flame surfaces one would expect changes in the propagation speed of the flame. The flame propagation speeds as a function of channel width are shown in Fig. 4 and again compared to reference data from [40]. As a first observation it is seen that the present solver matches the reference data very well. Furthermore, as expected from the changes in flame shape, the flame propagation speed also increases with increased channel width, reaching speeds up to three times the laminar flame speed for \(H=3\) mm.
_Isothermal walls._ A second set of simulations was then carried out while setting the wall boundary conditions to isothermal at \(T_{w}=300\) K. As for adiabatic walls, three different channel widths were considered, i.e. \(H\in\{2.47,\,3,\,6\}\) mm. These channel widths were selected to cover the main flame shapes occurring for this configuration as expected from the literature, i.e. the parabolic and tulip profiles. The results obtained with ALBORZ are compared to simulations reported in [40] in Fig. 5.
Figure 3: Comparison of flame shape obtained for adiabatic walls from simulations with ALBORZ (top half of each subfigure) to results from [40] (bottom half of each subfigure), with increasing channel height from top to bottom. The colors show the heat release rate, while the iso-contours (black in the top part, red in the bottom part) represent the following isotherms: \(\theta=\frac{T-T_{\mathrm{in}}}{T_{\mathrm{ad}}-T_{\mathrm{in}}}\in\{0.1,0.3,0.5,0.7,0.9\}\). Reference images (bottom half of each subfigure) are reproduced from [40]. Channel widths are set to be true to scale.
Figure 2: Validation of stoichiometric methane/air flame against reference solver: The dashed lines are from Cantera, while the markers have been computed using ALBORZ.
The results show good agreement with each other. Minor differences between results from ALBORZ and from [40] can, at least in part, be attributed to the fact that a two-step chemical mechanism is employed here, while [40] rely on a single-step, global mechanism. The propagation speeds were also extracted and compared to [40], as shown in Fig. 6.
The agreement is observed to be very good for this quantity. In contrast to the adiabatic case, where the flame propagation speed converged to the free flame speed as the channel width decreased, here the propagation speed falls below the free flame speed as the channel width decreases. This can be explained by the fact that lowering the channel width increases the energy loss toward the cold walls, compared to the energy released by the flame. It is also observed that for \(H\) below 3 mm the flame propagation speed drops sharply; this corresponds to the onset of flame quenching discussed in the next paragraph.
_Dead space and onset of quenching._ A closer look at Figs. 3 and 5 shows that the flame front remains attached to the walls for the adiabatic cases; on the other hand, for the isothermal cases there is a layer close to the walls where the flame is extinguished due to excessive heat losses and through which fresh gases flow; this zone is referred to as the dead zone [40], as illustrated in Fig. 7. Here, the quantity \(\delta_{\text{dead}}\) is introduced as the minimum thickness of the dead zone, obtained by monitoring the peak of heat release. To do that, the position along the \(x\)-axis where the distance between the reaction front (marked by the maximum of heat release) and the wall is minimum is found, and the corresponding distance along the normal to the wall is extracted. These values have been computed for four different cases (for the same widths as in the previous paragraph, and additionally for \(H=2.1\) mm). The results obtained with ALBORZ once again agree well with data from [40]. It is observed that for large channel widths the dead zone thickness reaches a lower plateau at a value of \(\delta_{\text{dead}}\approx 0.4\) mm. As the channel width goes down the dead zone thickness experiences a rapid
Figure 4: Comparison of flame propagation speed obtained for adiabatic walls from simulations with ALBORZ to results from [40] for different channel widths. Red circular markers are ALBORZ results while the black dashed line is data from [40].
Figure 5: Comparison of flame shape obtained for isothermal walls from simulations with ALBORZ (right half of the figure) to results from [40] (left half of the figure), with increasing channel height from top to bottom. The colors show the heat release rate on the right side, while the iso-contours in black in the right part represent the following isotherms: \(\theta=\frac{T-T_{\text{in}}}{T_{\text{ad}}-T_{\text{in}}}\in\{0.1,0.3,0.5,0.7,0.9\}\). Reference images showing iso-contours on the left (top half: heat release; bottom half: temperature) are reproduced from [40]. Channel widths are set to be true to scale.
Figure 6: Comparison of flame propagation speed for isothermal walls obtained from simulations with ALBORZ to results from [40] for different channel widths. Red circular markers are ALBORZ results while the black dashed line is data from [40].
growth until the point where it becomes comparable to the channel width, so that the flame cannot maintain itself anymore; this is called the quenching channel width. Calculations with ALBORZ led to a value of \(H\) between 2 and 2.1 mm for the quenching width, while [40] reported \(H=2.4\) mm. The difference between the two results can probably be attributed to the different chemical schemes employed and to the grid resolution, as the reference uses an adaptive grid-refinement procedure leading to grid sizes of 12.5 \(\mu\)m in the diffusion and reaction layers; at such scales the slightest differences in laminar flame speed and thickness can have a pronounced effect on the flame/wall interaction dynamics.
### Methane/air premixed flame in pseudo 2-D reactor with cylindrical obstacles
The next case considered in this work is that of a pseudo-2D packed bed burner presented in [41]. It has been designed by colleagues at the University of Magdeburg in the Thermodynamics Group with the aim to replicate flow physics found in industrial packed beds by incorporating relevant size, geometry, and boundary conditions. For all details regarding design and measurement apparatus the interested readers are referred to [41]. The overall geometry of the reactor, as initially intended, is illustrated in Fig. 9 in a vertical cut-plane through the center of the cylinders; it consists of a slit burner placed below a bed of cylindrical "particles". The rows of cylinders are arranged in an alternating pattern, with each consecutive row offset by precisely half the center-to-center distance. Most of the injected fuel/air mixture enters the packing between the two central cylinders of the first row, which are aligned with the slit burner.
The configuration considered involves a premixed methane/air mixture at an equivalence ratio of one (stoichiometry) coming in from the central inlet at a speed of 0.3 m/s, and air coming in from the two side inlets at the same speed to reduce the possible impact of external perturbations. All incoming fluxes are at a temperature of \(25^{\circ}\)C. All cylinders except three, namely the two central cylinders in the bottom-most row and the central cylinder in the middle row (i.e., the three cylinders directly above the fuel inlet), are associated with adiabatic no-slip walls as boundary conditions. The three remaining, central cylinders (the ones shown for instance in Fig. 10) are set to constant-temperature no-slip walls at \(T_{\rm w}=373.15\) K \(=100^{\circ}\)C, since they are thermostated at this particular temperature in the experiments. It should be noted that the measured temperatures in the experiment actually led to temperatures of \(105\pm 1^{\circ}\)C for the side cylinders and \(120^{\circ}\)C for the top central cylinder, which might also explain some of the differences between simulation and experimental results. The simulations are conducted with resolutions \(\delta r=0.05\) mm and \(\delta t=0.1\)\(\mu\)s.
Before looking at the steady position/shape of the flame and comparing it to experimental measurements, it is interesting to look at the unsteady evolution of the flame front and interpret
Figure 8: Comparison of dead zone minimum thickness for isothermal walls obtained from simulations with ALBORZ to results from [40] for different channel widths. Red circular markers are ALBORZ results while the black dashed line is data from [40].
Figure 7: Illustration of dead zone minimum thickness for case with isothermal walls at \(T_{\rm w}=300\) K for \(H=2.5\) mm. The right figure shows the flame in the channel (temperature field). The left figure corresponds to a cut through it along the dashed black line, plotting heat release (in black) and temperature (in red), together with the dead-zone limit (dashed blue line).
Figure 9: Geometry of the pseudo 2-D burner with cylindrical obstacles of [41].
these results based on the flame shapes discussed in the previous section. The flame evolution in the simulations is shown in Fig. 10.
The sequence of images presents the flame front (described here by the temperature field) retracting along the positive \(y\)-direction, going upward from the narrow gap between the two central cylinders in the bottom row toward the wider inter-particle space located in-between the three isothermal cylinders facing the injection. In the narrowest cross-section (top-left image in Fig. 10) the flame shows a parabolic shape. As it moves further downstream, the center flattens and eventually goes toward a tulip shape (even better visible in Fig. 11, left, showing heat release). Noting that at the narrowest section the equivalent channel width is 2.3 mm, it can be seen that the behavior of the flame agrees qualitatively with that shown in Fig. 5(top) for the straight channel. At the widest section, i.e. for the bottom right snapshot in Fig. 10, \(H\approx 3.5\) mm. Referring again to the channel results discussed in the previous section, the flame front should be between a flattened parabola and a tulip (between the middle and bottom rows of Fig. 5), which is in good agreement with Figs. 10 and 11 - keeping in mind that the wall geometries are different in the channel and in the 2-D burner configurations.
Furthermore, as for the channel with isothermal cold walls, the flame front exhibits a clear dead zone in regions neighboring the walls in Fig. 10, perhaps even better visible in Fig. 11(left). The flame front, as obtained from simulation, has been compared to experimental observations reported in [42] in Fig. 11. In the experiments, the flame front is located at about 3.5 mm above the center of the first row of cylinders along the central vertical line, while in the simulations it stabilizes at approximately 2.8 mm. Furthermore, experimental measurements point to an asymmetrical flame front. This missing symmetry, as noted in [42], might possibly be explained by small inaccuracies in the actual geometry of the burner compared to the design shown in Fig. 9. To verify this point another simulation was carried out considering the finally _measured_ geometry of the real set-up as reported in [42]. The resulting flow field is illustrated via streamlines in Fig. 12. The streamlines at steady state show indeed a slightly asymmetrical flow configuration, especially in the region above the first row of cylindrical obstacles. Note that while the flow is unsteady above the bed, it reaches a steady configuration within it.
The distribution of velocity and temperature in the full burner geometry is shown in Fig. 13.
The effect of the asymmetry in the flow field is better visible when looking at the flame front, shown in Fig. 14. Figure 14 shows that the asymmetrical flame shape observed in the experiments is better reproduced in the hybrid simulation when taking into account the really measured geometry. In particular, the flame becomes tilted, from top left to bottom right. Furthermore, the flame stabilizes at a higher position, at 3.1 mm, matching the experimental observations better.
Figure 11: Illustration of flame front (right) reported in [42] from experiments compared to (left) simulations with ALBORZ for the geometry shown in Fig. 9. The numerical image on the left shows the heat release rate, while the experimental image on the right captures all spontaneous emissions from species below 550 nm.
Figure 12: Flow structures illustrated by streamlines at steady state as obtained with ALBORZ based on the really measured geometry of the burner with cylindrical obstacles [42].
Figure 10: Evolution of the flame front as a function of time from top left to bottom right (corresponding to final steady state) in the configuration of Fig. 9, illustrated via the temperature field.
The remaining discrepancy can be explained by different factors: minor differences in the temperatures of the isothermal cylinders as used in the simulation and as measured in the experiments; non-homogeneous velocity and turbulence profiles at the inlet; and, regarding the simulations, the simplicity of the chosen chemical scheme BFER-2, as opposed to a complete reaction mechanism. On top of this, while for the simulations heat release was used to track the position of the flame front, the experimental images contain spontaneous emissions from all species radiating below 550 nm, which is known to lead to a thicker flame front with deviations of the order of 0.1-1 mm regarding flame position toward the burnt gas region, i.e., here in the streamwise direction, toward the top. Defining exactly the flame front has always been a challenge, since many different definitions are possible [43]; this is even more true in experiments, considering that heat release can generally not be measured directly [44]. Keeping these points in mind, the agreement between experimental measurements and numerical results appears to be good. The obtained results already show a reasonable agreement between ALBORZ and measurement data, demonstrating that the numerical solver can well capture flow/flame/wall interactions. More detailed comparisons between experimental and numerical data will be the topic of future studies involving systematic parameter variations, and relying on additional quantities for the comparisons as soon as they have been measured experimentally.
### Pore-resolved flame simulation in randomly generated porous media
As a final configuration and to illustrate the applicability of the solver to more complex configurations, a geometry generated in the Porous Microstructure Analysis (PuMA) software [7] composed of randomly placed non-overlapping spheres with a diameter of 1.6 mm, a global porosity of 0.7 and a physical domain size \(L\times H\times H\) with \(L=0.08\) m and \(H=0.005\) m is considered. The geometry is illustrated in Fig. 15.
Here \(L_{1}=0.01\) m and \(L_{2}=0.02\) m. For this simulation the grid- and time-step sizes are set to the same values as in the previous configuration. Periodic boundary conditions are used for the top and bottom of the simulation domain. A constant mass flow rate boundary condition is used for the inflow (on the right), where the pressure and temperature are set to 1 atm and 298.15 K. At the inflow, the species mass fractions are set to those of the fresh gas at equivalence ratio 1. At the other end of the domain a constant hydrodynamic pressure along with zero-gradient boundary conditions for the species and temperature fields are used. During the simulation the total consumption speed of methane is monitored via:
\[S_{c}=\frac{\int_{V}\dot{\omega}_{\text{CH}_{4}}\,dV}{\int_{V}\dot{\omega}_{\text{CH}_{4}}^{\text{1-D}}\,dV}, \tag{40}\]
where the consumption speed is normalized by that of a flat flame front (superscript 1-D), without any interaction with a porous medium. The results are displayed in Fig. 16.
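Schematically, the monitored quantity of Eq. (40) is a volume integral of the fuel reaction rate normalized by its flat-flame counterpart over the same cross-section; the short sketch below uses made-up arrays in place of the actual reaction-rate fields.

```python
import numpy as np

def normalized_consumption_speed(omega_3d, dV, omega_1d, dx, cross_section):
    """Eq. (40): CH4 consumption integrated over the porous domain, normalized by the
    consumption of an unstretched flat flame spanning the same cross-section."""
    consumption_porous = np.sum(omega_3d) * dV
    consumption_flat = np.sum(omega_1d) * dx * cross_section
    return consumption_porous / consumption_flat

# Made-up reaction-rate fields, standing in for the actual solver output
rng = np.random.default_rng(1)
omega_3d = 50.0 * rng.random((160, 10, 10))     # kg/(m^3 s) in the 3-D porous domain
omega_1d = 50.0 * rng.random(160)               # reference 1-D flat flame
print(normalized_consumption_speed(omega_3d, dV=(5e-5)**3, omega_1d=omega_1d,
                                   dx=5e-5, cross_section=(5e-4)**2))
```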
The average normalized propagation speed for this configuration is 1.797, with a large standard deviation of 0.6875. This larger propagation speed as
Figure 14: Flame shape and position illustrated via heat release as obtained from ALBORZ simulations for the really measured geometry.
Figure 13: (Left half of each subfigure) Velocity magnitude and (right half of each subfigure) temperature fields in the full burner geometry obtained with ALBORZ based on the really measured geometry of the burner with cylindrical obstacles [42]. Iso-contours are for the temperature field dividing \(T\in[300\ 2300]\) K into 10 equally-spaced intervals.
Figure 15: Illustration of randomly-generated porous media geometry.
compared to the laminar flame propagation speed is not unexpected. The flame dynamics in a porous medium with adiabatic solid boundaries is mainly governed by the contortion of the flame as it goes over the solid obstacles. The consumption speed, in a process similar to that found for turbulent flames, is directly impacted by the increased flame surface. The evolution of the flame shape as the flame goes through the porous medium is illustrated in Fig. 17.
## V Conclusions and discussion
In this work a numerical model previously developed for gas-phase combustion has been extended and applied to reacting flows in porous media. Benchmark cases of increasing complexity in which flame/wall interactions dominate the dynamics of the system have been considered. It was shown that the model is able to capture the different flame/wall interaction regimes for both Dirichlet (constant temperature) and Neumann (adiabatic) boundary conditions. The suitability of the proposed solver for combustion simulations within a regular particle packing was discussed in connection with a pseudo 2-D burner involving cylindrical obstacles. First comparisons to experimental data point to a good agreement. Finally, for the first time to the authors' knowledge, a lattice Boltzmann-based pore-scale simulation of combustion in a complex 3-D porous medium is presented. These results open the door for future studies considering flame propagation in realistic porous media and parametric studies of reacting gas flows in packed bed configurations.
## Acknowledgement
The authors acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in TRR 287 (Project-ID 422037413), as well as the Gauss centre for providing computation time under grant "pn73ta" on the GCS supercomputer SuperMUC-NG at Leibniz Supercomputing Centre, Munich, Germany. Additionally, the authors thank Mohammadassan Khodsiani, Benoit Fond and Frank Beyrau for interesting discussions regarding experimental measurements in the 2-D burner.
| A hybrid numerical model previously developed for combustion simulations is extended in this article to describe flame propagation and stabilization in porous media. With a particular focus on flame/wall interaction processes, the model is validated through corresponding benchmarks involving flame propagation in channels with adiabatic and constant-temperature walls. Simulations with different channel widths show that the model correctly captures the changes in flame shape and propagation speed, as well as the dead zone and quenching limit found in channels with cold walls. The model is further assessed on a pseudo 2-D porous burner involving an array of cylindrical obstacles at constant temperature, investigated in a companion experimental study. Furthermore, the model is used to simulate pore-scale flame dynamics in a randomly generated 3-D porous medium. |
2304.13035 | Tuning the separability in noncommutative space | We study the separability of the noncommutative (NC) space coordinate degrees
of freedom with the generalized Peres-Horodecki separability criterion (Simon's
condition) for a bipartite Gaussian state. Non-symplectic nature of the
transformation between the usual commutative space and NC space restricts the
use of Simon's condition in NCS. We transform the NCS system to an equivalent
Hamiltonian in commutative space through Bopp shift, which enables the
utilization of the separability criterion in NC space. For a fairly general
study, we consider a bilinear Hamiltonian with time-dependent (TD) parameters,
along with a TD external interaction, which is linear in field modes. The
system is transformed into canonical form keeping the intrinsic symplectic
structure ($Sp(4,\mathbb{R})$) intact. The solution of the TD-Schr\"{o}dinger
equation is obtained with the help of Lewis-Riesenfeld invariant method (LRIM).
Expectation values of the observables (thus the covariance matrix) are
constructed from the states obtained from LRIM. It turns out that the existence
of the NC parameters in the oscillator determines the separability of the
states. In particular, for isotropic oscillators, the separability condition
for the bipartite Gaussian states depends on NC parameters. Moreover,
anisotropic parameter values for the oscillator affects the separability. In
other words, both the deformation parameters ($\theta,\;\eta$) and parameter
values of the oscillator are important for the separability of bipartite
states. Thus tuning the parameter values, one can destroy or recreate the
separability of states. With the help of toy models, we have demonstrated the effect of TD NC-space parameters on separability. | Pinaki Patra | 2023-04-24T20:40:55 | http://arxiv.org/abs/2304.13035v2 | # Tuning the separability in noncommutative space
###### Abstract
With the help of the generalized Peres-Horodecki separability criterion (Simon's condition) for a bipartite Gaussian state, we have studied the separability of the noncommutative space (NCS) coordinate degrees of freedom. Non-symplectic nature of the transformation between the usual commutative space and NCS restricts the straightforward use of Simon's condition in NCS. We have transformed the NCS system to an equivalent Hamiltonian in commutative space through the Bopp shift, which enables the utilization of the separability criterion.
To make our study fairly general and to analyze the effect of parameters on the separability of bipartite state in NC-space, we have considered a bilinear Hamiltonian with time-dependent (TD) parameters, along with a TD external interaction, which is linear in field modes. The system is transformed (\(Sp(4,\mathbb{R})\)) into canonical form keeping the intrinsic symplectic structure intact. The solution of the TD-Schrodinger equation is obtained with the help of the Lewis-Riesenfeld invariant method (LRIM). Expectation values of the observables (thus the covariance matrix ) are constructed from the states obtained from LRIM.
It turns out that the existence of the anisotropy in the oscillator determines the separability of the states. In particular, for an isotropic oscillator, the bipartite states are always separable, whereas particular anisotropic parameter values may cease the separability. Thus tuning the parameter values, one can destroy or recreate the separability of states. With the help of a toy model, we have demonstrated how the tuning of a TD- NCS parameter affects the separability.
Entanglement; Separability criterion; Lewis-Riesenfeld method; Time-dependent system; Noncommutative space
Introduction
Perhaps the most "genuine"quantum property that a physical system may possess is "entanglement", which underlines the intrinsic order of statistical relations between subsystems of a compound quantum system [1; 2; 3]. It occurs in composite systems as a consequence of the quantum superposition principle of states, and of the fact that the Hilbert space that describes a composite quantum system is the tensor product of the Hilbert spaces associated with each subsystem [2; 3; 4; 5]. In particular, if the entangled subsystems are spatially separated nonlocality properties may arise, showing a very deep departure from classical physics [6]. In other words, entangled states are inseparable. Initial successful study of necessary and sufficient conditions for separability in the \(2\times 2\) and \(2\times 3\) dimensional Hilbert space was done by Peres and Horodecki [7; 8]. Subsequently, a flood of works in separability conditions for finite-dimensional Hilbert space, more specifically the qubits, have enriched the literature of quantum physics (for a pedagogical survey, see [9]). Simon provided the generalization of the Peres-Horodecki criterion for the separability of a given state of a bipartite canonical continuous system through the variance (correlation) matrix [10]. The generalized Peres-Horodecki criterion relies on the basic differences between classical and quantum covariance/noise matrices [10; 11]. One of the advantages of Simon's approach is that it is easy to implement in present-day experimental capacity, in particular, the experimental realization of quantum teleportation of coherent states [12; 13; 14].
For a classical probability distribution over a classical \(2n\)-dimensional phase space, any \(2n\times 2n\) real symmetric positive-definite matrix is a bona fide, that is, physically realizable, covariance matrix [15]. For a quantum system, however, the covariance matrix (\(\hat{\mathcal{V}}_{c}\)) has to satisfy an additional condition, namely the Robertson-Schrodinger uncertainty principle (RSUP) (\(\hat{\mathcal{V}}_{c}+\frac{i}{2}\hbar\hat{J}\geq 0\)) in the Heisenberg sense, where the symplectic matrix \(\hat{J}\) encodes the fundamental commutation relations between co-ordinates (\(\hat{q}_{k}\)) and conjugate momenta (\(\hat{p}_{k}\)) [15; 16; 17; 18; 19; 20; 21]. We shall refer to this background space (in which position operators commute with each other) as the usual commutative space.
There is a consensus that the space-time structure is deformed in such a manner that the usual notion of commutative space ceases to hold at energy scales on the order of \(10^{19}\) GeV, i.e., at the energy scale at which theories of quantum gravity appear to predict departures from classical gravity [22; 23; 24; 25]. It is widely believed that the fundamental concept of space-time is most naturally compatible with quantum theory in NC-space [26; 27]. Therefore, extending ideas of commutative-space quantum mechanics to noncommutative (NC) space is interesting in its own right [28; 29; 30; 31; 32; 33; 34]. In this paper, we study entanglement between coordinate degrees of freedom induced by both the position-position and momentum-momentum noncommutativity parameters, as well as the anisotropy due to the mass and frequency of a quantum mechanical oscillator. In this connection, we would like to mention that studies of noncommutative-parameter-induced entanglement for anisotropic oscillators are available in the literature [35; 36; 37; 38]. The present study is distinct from earlier work in its consideration of TD parameters, including TD NCS parameters, so that one can envisage the evolution of NC-space by tuning TD parameters (e.g., a TD magnetic field) in an equivalent laboratory experiment.
Since the connection (Darboux transformation) between the usual commutative space and NC-space is not symplectic, the direct use of the generalized RSUP (\(\hat{\bar{\mathcal{V}}}_{nc}+\frac{i}{2}\hbar_{e}\hat{\bar{J}}\geq 0\)) to predict the separability of a bipartite Gaussian state in NC-space is not immediate [39; 40]. Nonetheless, we can transform the NC-space system into an equivalent system with commutative space operators through Bopp's shift (Darboux transformation) and use the formalism of usual quantum mechanics, including the RSUP [41; 42; 43; 44].
One of the major goals of the present paper is to analyze the effect of the parameters on the separability criterion of a bipartite state in NC-space. To make our study fairly general, as well as to investigate the experimental possibility of tuning the parameters, we have considered time-dependent (TD) parameters. In particular, we have considered a TD anisotropic harmonic oscillator, placed in an external TD interaction, which is linear in field modes. From a practical point of view, this is similar to an anisotropic oscillator with a linear external perturbation. For instance, an anisotropic oscillator placed in a TD field in two-dimensional noncommutative space, the dynamics of a charged particle inside a TD magnetic field, and the Chern-Simons model all share a structural similarity in their Hamiltonians [45; 46; 47; 48; 49; 50; 51]. The efficacious application of the TD harmonic oscillator (TDHO) in various domains of physics makes the study of the TDHO a topical one [46; 47; 48; 49; 50; 51].
The time-dependent system is solved with the help of the Lewis-Riesenfeld (LR) phase-space invariant method [52; 53; 54; 55; 56; 57; 58; 59; 60; 61], which states that for a system described by a TD Hamiltonian \(H(t)\), a particular solution of the associated Schrodinger equation (SE) is given by the eigenstate (\(|n,t\rangle\)) of a TD-invariant operator \(\mathcal{I}(t)\) defined by \(\partial_{t}\mathcal{I}+(i\hbar)^{-1}[\mathcal{I},\hat{H}]=0\), apart from a TD-phase factor \(\exp(i\phi_{n}(t))\), where \(\phi_{n}(t)=\hbar^{-1}\int_{0}^{t}\langle n,\tau|[i\hbar\partial_{t}-\hat{H}(t)]|n,\tau\rangle d\tau.\) The general solution of the SE is given by the superposition state \(|\psi(t)\rangle=\sum_{n}c_{n}\exp(i\phi_{n}(t))|n,t\rangle\), with \(c_{n}=\langle n|\psi(0)\rangle\). It turns out that the states of the system are displaced number states [62; 63; 64] along with the TD-phase factor. The exact forms of the dynamical as well as geometrical phases are then determined. The variance (noise/correlation) matrix is then constructed through the expectation values. Since the separability of Gaussian states may be well characterized through the variance matrix, we have utilized the separability criterion for a bona fide variance matrix. This leads to some restrictions on the TD parameters for the bipartite state to be separable. We have shown that the anisotropy of the system is responsible for the entanglement. Thus the separability of the NC-space coordinate degrees of freedom can be destroyed by suitable tuning of the time-dependent parameters. We have analyzed this for a toy model, in which the position-position noncommutativity parameter is taken to be TD, along with a TD effective mass with anisotropy.
The organization of the paper is the following. Section II corresponds to a general study of the separability criterion in NC-space. The Hamiltonian is put into a canonical form maintaining the \(Sp(4,\mathbb{R})\) invariance in section III. The time-dependent system is solved with the help of the Lewis-Riesenfeld invariant method in section IV. The separability criterion through the variance matrix is studied in section V, which is followed by a toy model in section VI. Finally, the conclusions of our study are drawn.
## II Separability criterion in noncommutative space
If we define a coordinate vector as
\[\tilde{X}=(\hat{\tilde{X}}_{1},\hat{\tilde{X}}_{2},\hat{\tilde{X}}_{3},\hat{ \tilde{X}}_{4})^{T}=(\hat{\tilde{x}}_{1},\hat{\tilde{p}}_{1},\hat{\tilde{x}}_ {2},\hat{\tilde{p}}_{2})^{T}, \tag{1}\]
then the commutation relations of the observables in NC-space may be represented with a deformed symplectic matrix \(\tilde{\Sigma_{y}}\), given by
\[[\hat{\tilde{X}}_{\alpha},\hat{\tilde{X}}_{\beta}]=i\hbar_{e}\tilde{J}_{ \alpha\beta}=-(\tilde{\Sigma}_{y})_{\alpha\beta}, \tag{2}\]
where \(\tilde{J}_{\alpha\beta}\) is the \(\alpha\beta^{th}\) element of
\[\hat{\tilde{J}}=\left(\begin{array}{cc}\hat{J}_{2}&\frac{1}{\hbar_{e}}\hat{\Pi}_{\theta\eta}\\ -\frac{1}{\hbar_{e}}\hat{\Pi}_{\theta\eta}&\hat{J}_{2}\end{array}\right),\ \mbox{with}\ \hat{\Pi}_{\theta\eta}=\left(\begin{array}{cc}\theta&0\\ 0&\eta\end{array}\right),\ \hat{J}_{2}=\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right). \tag{3}\]
The effective Planck constant \(\hbar_{e}\) is related to the position-position NC-parameter \(\theta\) and the momentum-momentum NC-parameter \(\eta\) through the Planck constant \(\hbar\) as
\[\hbar_{e}=\hbar(1+\frac{\theta\eta}{4\hbar^{2}}). \tag{4}\]
In (1) \(X^{T}\) represents the matrix transposition of \(X\). The commutation relations for the usual commutative space
\[X=(\hat{X}_{1},\hat{X}_{2},\hat{X}_{3},\hat{X}_{4})^{T}=(\hat{x}_{1},\hat{p}_{ 1},\hat{x}_{2},\hat{p}_{2})^{T} \tag{5}\]
are given by
\[[\hat{X}_{\alpha},\hat{X}_{\beta}]=i\hbar\hat{J}_{\alpha\beta}=-\hbar(\hat{ \Sigma}_{y})_{\alpha\beta}, \tag{6}\]
with
\[\hat{J}=\mbox{diag}(\hat{J}_{2},\hat{J}_{2}),\;\;\hat{\Sigma}_{y}=\mbox{diag} (\hat{\sigma}_{y},\hat{\sigma}_{y}). \tag{7}\]
Unless otherwise specified, in the present paper, we shall represent Pauli matrices as follows.
\[\hat{\sigma}_{x}=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\;\hat{\sigma}_{y}=\left(\begin{array}{cc}0&-i\\ i&0\end{array}\right),\;\hat{\sigma}_{z}=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right). \tag{8}\]
The NC-space co-ordinates (\(\tilde{X}\)) are connected to the usual commutative space co-ordinates (\(X\)) through the Darboux transformation (\(\hat{\Upsilon}_{D}\)) given by the Bopp's shift
\[\tilde{X}=\hat{\Upsilon}_{D}X, \tag{9}\]
with
\[\hat{\Upsilon}_{D}=\left(\begin{array}{cc}\hat{\mathbb{I}}_{2}&-\frac{1}{2 \hbar}\hat{\Pi}_{\theta\eta}\hat{J}_{2}\\ \frac{1}{2\hbar}\hat{\Pi}_{\theta\eta}\hat{J}_{2}&\hat{\mathbb{I}}_{2}\end{array} \right), \tag{10}\]
where \(\hat{\mathbb{I}}_{n}\) stands for the \(n\times n\) identity matrix. On physical grounds, since the position-position and momentum-momentum non-commutativity appear only at much higher energy scales, we have \(\theta<\hbar\) and \(\eta<\hbar\); hence the determinant of \(\hat{\Upsilon}_{D}\) is nonzero (\(\Delta_{\Upsilon_{D}}\neq 0\)), i.e., \(\hat{\Upsilon}_{D}\in GL(4,\mathbb{R})\). One can see that the symplectic matrix \(\hat{J}\) is connected with the deformed symplectic matrix \(\hat{\hat{J}}\) through the following transformation via \(\hat{\Upsilon}_{D}\).
\[\hbar_{e}\hat{\hat{J}}=\hbar\hat{\Upsilon}_{D}\hat{J}\hat{\Upsilon}_{D}^{T}. \tag{11}\]
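The relation (11) is straightforward to verify numerically. The following Python sketch (not part of the original paper; the values of \(\theta\) and \(\eta\) are arbitrary illustrations with \(\theta,\eta<\hbar\)) builds \(\hat{\Upsilon}_{D}\) from (10) and checks (11) for the ordering \((\hat{x}_{1},\hat{p}_{1},\hat{x}_{2},\hat{p}_{2})\).

```python
import numpy as np

# Illustrative (assumed) parameter values; any theta, eta < hbar will do.
hbar, theta, eta = 1.0, 0.3, 0.2
hbar_e = hbar * (1.0 + theta * eta / (4.0 * hbar**2))   # Eq. (4)

J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Pi = np.diag([theta, eta])                              # Pi_{theta eta} of Eq. (3)
I2 = np.eye(2)

# Commutative-space symplectic matrix J of Eq. (7) and deformed matrix of Eq. (3)
J = np.block([[J2, np.zeros((2, 2))], [np.zeros((2, 2)), J2]])
J_tilde = np.block([[J2, Pi / hbar_e], [-Pi / hbar_e, J2]])

# Darboux (Bopp-shift) matrix Upsilon_D of Eq. (10)
Upsilon = np.block([[I2, -(Pi @ J2) / (2.0 * hbar)],
                    [(Pi @ J2) / (2.0 * hbar), I2]])

# Check Eq. (11): hbar_e * J_tilde = hbar * Upsilon_D J Upsilon_D^T
assert np.allclose(hbar_e * J_tilde, hbar * Upsilon @ J @ Upsilon.T)
print("Eq. (11) verified for theta = %.2f, eta = %.2f" % (theta, eta))
```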
Since the quantum mechanical formalism is well established in commutative space, it is customary to convert the NC-space system into a usual commutative-space system through (9) for computational purposes. For instance, one can define the following matrix elements of a covariance matrix \((\hat{\hat{\mathcal{V}}}_{nc})\) for the NC-space co-ordinates.
\[\tilde{\mathcal{V}}_{\alpha\beta}=\frac{1}{2}\langle\{\hat{\tilde{X}}_{\alpha},\hat{\tilde{X}}_{\beta}\}\rangle-\langle\hat{\tilde{X}}_{\alpha}\rangle\langle\hat{\tilde{X}}_{\beta}\rangle;\;\alpha,\beta\in\{1,2,3,4\}, \tag{12}\]
where the expectation value \(\langle\hat{\chi}\rangle\) of the operator \(\hat{\chi}\) is evaluated over the state in the usual commutative space variables. One can see that the commutative space covariance matrix \(\hat{\mathcal{V}}_{c}\), defined by
\[\mathcal{V}_{\alpha\beta}=\frac{1}{2}\langle\{\hat{X}_{\alpha},\hat{X}_{\beta }\}\rangle-\langle\hat{X}_{\alpha}\rangle\langle\hat{X}_{\beta}\rangle;\; \alpha,\beta\in\{1,2,3,4\}, \tag{13}\]
is connected with \(\hat{\hat{\mathcal{V}}}_{nc}\) through the following transformation via \(\hat{\Upsilon}_{D}\).
\[\hat{\hat{\mathcal{V}}}_{nc}=\hat{\Upsilon}_{D}\hat{\mathcal{V}}_{c}\hat{ \Upsilon}_{D}^{T}. \tag{14}\]
Due to the fundamental commutation relations (6) of commutative space, all the physically realizable variance matrix must satisfy the Robertson-Schrodinger uncertainty principle (RSUP), which states that \(\hat{\mathcal{V}}_{c}+\frac{i}{2}\hbar\hat{J}\) has to be positive definite; i.e.,
\[\hat{\mathcal{V}}_{c}+\frac{i}{2}\hbar\hat{J}\geq 0. \tag{15}\]
Using (11) and (14), one can see that \(\hat{\hat{\mathcal{V}}}_{nc}+\frac{i}{2}\hbar_{e}\hat{\hat{J}}\) is connected with \(\hat{\mathcal{V}}_{c}+\frac{i}{2}\hbar\hat{J}\) through \(\hat{\Upsilon}_{D}\) as
\[\hat{\hat{\mathcal{V}}}_{nc}+\frac{i}{2}\hbar_{e}\hat{\hat{J}}=\hat{\Upsilon} _{D}\left(\hat{\mathcal{V}}_{c}+\frac{i}{2}\hbar\hat{J}\right)\hat{\Upsilon} _{D}^{T}. \tag{16}\]
A bonafide covariance matrix in NC-space has to satisfy the RSUP
\[\hat{\hat{\mathcal{V}}}_{nc}+\frac{i}{2}\hbar_{e}\hat{\hat{J}}\geq 0. \tag{17}\]
From (13), one can note that \(\hat{\mathcal{V}}_{c}\) can be written in the block form
\[\hat{\mathcal{V}}_{c}=\left(\begin{array}{cc}V_{11}&V_{12}\\ V_{12}^{T}&V_{22}\end{array}\right), \tag{18}\]
A generic local transformation \(\hat{S}_{1}\bigoplus\hat{S}_{2}\) acts on \(\hat{\mathcal{V}}_{c}\) as
\[\hat{V}_{jj}\rightarrow\hat{S}_{j}\hat{V}_{jj}\hat{S}_{j}^{T},\;\hat{V}_{12} \rightarrow\hat{S}_{1}\hat{V}_{12}\hat{S}_{2}^{T};\;\mbox{with}\;\hat{S}_{j} \in Sp(2,\mathbb{R}),\;j=1,2. \tag{19}\]
One can identify that the following four quantities are local invariants with respect to transformations belonging to \(Sp(2,\mathbb{R})\bigotimes Sp(2,\mathbb{R})\subset Sp(4,\mathbb{R})\).
\[\Delta_{j}=Det(V_{jj}),\;\Delta_{12}=Det(V_{12}),\;\Delta_{\mathcal{ V}_{c}}=Det(\mathcal{V}_{c}) \tag{20}\] \[\tau_{v}=Trace(V_{11}J_{2}V_{12}J_{2}V_{22}J_{2}V_{12}^{T}J_{2}). \tag{21}\]
Williamson's theorem [65, 66] allows us to choose \(\hat{S}_{j}\) such that
\[\hat{\mathcal{V}}_{c}\rightarrow\left(\begin{array}{cc}W_{11}&E_{12}\\ E_{12}^{T}&W_{22}\end{array}\right), \tag{22}\]
where
\[W_{jj}=\hat{S}_{j}\hat{V}_{jj}\hat{S}_{j}^{T}=d_{j}\hat{\mathbb{I}}_{2},\;E_{1 2}=\hat{S}_{1}\hat{V}_{12}\hat{S}_{2}^{T}. \tag{23}\]
The eigenvalues \((d_{j})\) of \(i\hat{J}_{2}\hat{V}_{jj}\) are the symplectic eigenvalues of \(\hat{V}_{jj}\). In particular, for our present discussion
\[d_{j}=|\sqrt{\Delta_{j}}|. \tag{24}\]
What follows from the above computation is that the variance matrix can be brought into the normal form
\[\hat{\mathcal{V}}_{nor}=\left(\begin{array}{cc}\sqrt{\Delta_{1}}\hat{ \mathbb{I}}_{2}&\text{diag}(\kappa_{1},\kappa_{2})\\ \text{diag}(\kappa_{1},\kappa_{2})&\sqrt{\Delta_{2}}\hat{\mathbb{I}}_{2}\end{array} \right), \tag{25}\]
where
\[\Delta_{12}=\kappa_{1}\kappa_{2},\;\Delta_{\mathcal{V}_{c}}=(\sqrt{\Delta_{1 }\Delta_{2}}-\kappa_{1}^{2})(\sqrt{\Delta_{1}\Delta_{2}}-\kappa_{2}^{2}). \tag{26}\]
Hence, one can conclude that RSUP (15) can be written as
\[\Delta_{1}\Delta_{2}+(\frac{\hbar^{2}}{4}-\Delta_{12})^{2}-\tau_{v}\geq\frac{ \hbar^{2}}{4}(\Delta_{1}+\Delta_{2}). \tag{27}\]
Under mirror reflection (the Peres-Horodecki partial transpose), \(\Delta_{1}\), \(\Delta_{2}\) and \(\tau_{v}\) remain invariant, whereas \(\Delta_{12}\) flips sign. Therefore, the covariance matrix of a separable state has to obey the following necessary condition.
\[Ps=\Delta_{1}\Delta_{2}+(\frac{\hbar^{2}}{4}-|\Delta_{12}|)^{2}-\tau_{v}-\frac {\hbar^{2}}{4}(\Delta_{1}+\Delta_{2})\geq 0, \tag{28}\]
which turns out to be sufficient for all bipartite Gaussian states.
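As a quick illustration (not part of the original paper), the invariants and the quantity \(Ps\) of (28) are easy to evaluate numerically from a given \(4\times 4\) covariance matrix. The test matrices below (a two-mode squeezed vacuum and the vacuum itself) are standard textbook examples used only to exercise the formula.

```python
import numpy as np

def simon_Ps(V, hbar=1.0):
    """Left-hand side of condition (28) for a 4x4 covariance matrix V
    in the ordering (x1, p1, x2, p2).  Ps >= 0 is necessary for
    separability, and sufficient for bipartite Gaussian states."""
    J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
    V11, V12, V22 = V[:2, :2], V[:2, 2:], V[2:, 2:]
    d1, d2, d12 = np.linalg.det(V11), np.linalg.det(V22), np.linalg.det(V12)
    tau = np.trace(V11 @ J2 @ V12 @ J2 @ V22 @ J2 @ V12.T @ J2)
    return d1 * d2 + (hbar**2 / 4 - abs(d12))**2 - tau - hbar**2 / 4 * (d1 + d2)

# Two-mode squeezed vacuum (squeezing r > 0): entangled, so Ps < 0.
r = 0.5
c, s = np.cosh(2 * r) / 2, np.sinh(2 * r) / 2
V_tmsv = np.block([[c * np.eye(2), s * np.diag([1.0, -1.0])],
                   [s * np.diag([1.0, -1.0]), c * np.eye(2)]])
print(simon_Ps(V_tmsv))        # negative -> entangled
print(simon_Ps(np.eye(4) / 2)) # vacuum: Ps = 0 -> separable (boundary case)
```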
Since \(\hat{\hat{J}}\) is not symplectic and \(\hat{\Upsilon}_{D}\) is not unitary, a straightforward generalization of (27) for
NC-space is not immediate, in spite of the structural similarity of (6) and (2). However, with the help of (9), we can transform the NC-space system to commutative space, for which the general separability criterion (28) is at our disposal. One can observe that the Schur complement
\[\hat{C}_{\Upsilon}=\hat{\mathbb{I}}_{2}-\frac{1}{2\hbar}\hat{\Pi}_{\theta \eta}\hat{J}_{2}\hat{\mathbb{I}}^{-1}(-\frac{1}{2\hbar})\hat{\Pi}_{\theta\eta} \hat{J}_{2}=(1-\frac{\theta\eta}{4\hbar^{2}})\hat{\mathbb{I}}_{2} \tag{29}\]
of \(\mathbb{I}_{2}\) is nonsingular for all physically acceptable parameter values. In particular,
\[\Delta_{C_{\Upsilon}}=\text{Det}(\hat{C}_{\Upsilon})=(1-\frac{\theta\eta}{4 \hbar^{2}})^{2}\geq\frac{9}{16};\;\forall\theta\leq\hbar,\eta\leq\hbar. \tag{30}\]
In other words, \(\hat{\Upsilon}_{D}^{-1}\) exists, and
\[\hat{\Upsilon}_{D}^{-1}=\frac{1}{\sqrt{\Delta_{C_{\Upsilon}}}}\left(\begin{array} []{cc}\hat{\mathbb{I}}_{2}&\frac{1}{2\hbar}\hat{\Pi}_{\theta\eta}\hat{J}_{2} \\ -\frac{1}{2\hbar}\hat{\Pi}_{\theta\eta}\hat{J}_{2}&\hat{\mathbb{I}}_{2}\end{array} \right). \tag{31}\]
What follows is that for NC-space covariance matrix
\[\hat{\tilde{\mathcal{V}}}_{nc}=\left(\begin{array}{cc}\tilde{V}_{11}&\tilde {V}_{12}\\ \tilde{V}_{12}^{T}&\tilde{V}_{22}\end{array}\right), \tag{32}\]
the generalized Peres-Horodecki criterion (Simon's criterion) may be studied with the help of the corresponding commuting space covariance matrix (18), with
\[\hat{V}_{11} = \frac{1}{\Delta_{C_{\Upsilon}}}(\hat{\tilde{V}}_{11}+\frac{1}{2 \hbar}(\hat{\tilde{V}}_{12}\hat{J}_{2}^{T}\hat{\Pi}_{\theta\eta}^{T}+\hat{\Pi }_{\theta\eta}\hat{J}_{2}\hat{\tilde{V}}_{12}^{T})+\frac{1}{4\hbar^{2}}\hat{ \Pi}_{\theta\eta}\hat{J}_{2}\hat{\tilde{V}}_{22}^{T}\hat{\Pi}_{\theta\eta}^{T }), \tag{33}\] \[\hat{V}_{12} = \frac{1}{\Delta_{C_{\Upsilon}}}(\hat{\tilde{V}}_{12}-\frac{1}{2 \hbar}(\hat{\tilde{V}}_{11}\hat{J}_{2}^{T}\hat{\Pi}_{\theta\eta}^{T}-\hat{\Pi }_{\theta\eta}\hat{J}_{2}\hat{\tilde{V}}_{22})-\frac{1}{4\hbar^{2}}\hat{\Pi}_ {\theta\eta}\hat{J}_{2}\hat{\tilde{V}}_{12}^{T}\hat{\Pi}_{\theta\eta}^{T}),\] (34) \[\hat{V}_{22} = \frac{1}{\Delta_{C_{\Upsilon}}}(\hat{\tilde{V}}_{22}-\frac{1}{2 \hbar}(\hat{\tilde{V}}_{12}^{T}\hat{J}_{2}^{T}\hat{\Pi}_{\theta\eta}^{T}+\hat{ \Pi}_{\theta\eta}\hat{J}_{2}\hat{\tilde{V}}_{12})+\frac{1}{4\hbar^{2}}\hat{ \Pi}_{\theta\eta}\hat{J}_{2}\hat{\tilde{V}}_{11}\hat{J}_{2}^{T}\hat{\Pi}_{ \theta\eta}^{T}). \tag{35}\]
Explicitly written
\[\hat{V}_{11}=\frac{1}{\Delta_{C_{\Upsilon}}}\left(\begin{array}{cc}\tilde{ \mathcal{V}}_{11}+\frac{\theta}{\hbar}\tilde{\mathcal{V}}_{14}+\frac{\theta^{ 2}}{4\hbar^{2}}\tilde{\mathcal{V}}_{44}&\tilde{\mathcal{V}}_{12}+\frac{1}{2 \hbar}(\theta\tilde{\mathcal{V}}_{24}-\eta\tilde{\mathcal{V}}_{13})-\frac{ \theta\eta}{4\hbar^{2}}\tilde{\mathcal{V}}_{34}\\ \tilde{\mathcal{V}}_{12}+\frac{1}{2\hbar}(\theta\tilde{\mathcal{V}}_{24}-\eta \tilde{\mathcal{V}}_{13})-\frac{\theta\eta}{4\hbar^{2}}\tilde{\mathcal{V}}_{3 4}&\tilde{\mathcal{V}}_{22}-\frac{\eta}{\hbar}\tilde{\mathcal{V}}_{23}+\frac{ \eta^{2}}{4\hbar^{2}}\tilde{\mathcal{V}}_{33}\end{array}\right), \tag{36}\]
\[\hat{V}_{22}=\frac{1}{\Delta_{C_{\Upsilon}}}\left(\begin{array}{cc}\tilde{ \mathcal{V}}_{33}-\frac{\theta}{\hbar}\tilde{\mathcal{V}}_{23}+\frac{\theta^{ 2}}{4\hbar^{2}}\tilde{\mathcal{V}}_{22}&\tilde{\mathcal{V}}_{34}-\frac{1}{2 \hbar}(\theta\tilde{\mathcal{V}}_{24}-\eta\tilde{\mathcal{V}}_{13})-\frac{ \theta\eta}{4\hbar^{2}}\tilde{\mathcal{V}}_{12}\\ \tilde{\mathcal{V}}_{34}-\frac{1}{2\hbar}(\theta\tilde{\mathcal{V}}_{24}-\eta \tilde{\mathcal{V}}_{13})-\frac{\theta\eta}{4\hbar^{2}}\tilde{\mathcal{V}}_{12 }&\tilde{\mathcal{V}}_{44}+\frac{\eta}{\hbar}\tilde{\mathcal{V}}_{14}+\frac{ \eta^{2}}{4\hbar^{2}}\tilde{\mathcal{V}}_{11}\end{array}\right), \tag{37}\]
\[\hat{V}_{12}=\frac{1}{\Delta_{C_{\Upsilon}}}\left(\begin{array}{cc}\tilde{ \mathcal{V}}_{13}-\frac{\theta}{2\hbar}(\tilde{\mathcal{V}}_{12}+\tilde{ \mathcal{V}}_{34})-\frac{\theta^{2}}{4\hbar^{2}}\tilde{\mathcal{V}}_{24}&\tilde {\mathcal{V}}_{14}+\frac{1}{2\hbar}(\theta\tilde{\mathcal{V}}_{44}+\eta \tilde{\mathcal{V}}_{11})+\frac{\theta\eta}{4\hbar^{2}}\tilde{\mathcal{V}}_{14} \\ \tilde{\mathcal{V}}_{23}-\frac{1}{2\hbar}(\theta\tilde{\mathcal{V}}_{22}+\eta \tilde{\mathcal{V}}_{33})+\frac{\theta\eta}{4\hbar^{2}}\tilde{\mathcal{V}}_{23}& \tilde{\mathcal{V}}_{24}-\frac{\eta}{2\hbar}(\tilde{\mathcal{V}}_{34}-\tilde{ \mathcal{V}}_{12})-\frac{\eta^{2}}{4\hbar^{2}}\tilde{\mathcal{V}}_{13}\end{array} \right), \tag{38}\]
where \(\tilde{\mathcal{V}}_{\alpha\beta}\) are as defined in (12). The expressions (36)- (38) can be used directly in (28) to determine the entanglement behavior, in particular the separability of the NC-coordinate degrees of freedom. In this paper, we have demonstrated the situation for a time-dependent anisotropic oscillator with a linear interaction term.
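In matrix form, the element-wise relations (33)-(38) simply express \(\hat{\mathcal{V}}_{c}=\hat{\Upsilon}_{D}^{-1}\hat{\tilde{\mathcal{V}}}_{nc}\hat{\Upsilon}_{D}^{-T}\), which follows from (14). A short Python sketch of this mapping is given below; the values of \(\theta\), \(\eta\) and the NC-space covariance matrix are placeholders chosen only to illustrate the computation.

```python
import numpy as np

hbar, theta, eta = 1.0, 0.3, 0.2
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Pi = np.diag([theta, eta])

# Darboux (Bopp-shift) matrix of Eq. (10); its inverse is given in closed
# form in Eq. (31), but numerical inversion is just as convenient here.
Upsilon = np.block([[np.eye(2), -(Pi @ J2) / (2.0 * hbar)],
                    [(Pi @ J2) / (2.0 * hbar), np.eye(2)]])
Ups_inv = np.linalg.inv(Upsilon)

V_nc = np.diag([0.6, 0.7, 0.8, 0.9])   # placeholder NC-space covariance matrix
V_c = Ups_inv @ V_nc @ Ups_inv.T       # commutative-space covariance matrix, Eq. (14)
print(V_c)
```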
## III Time-dependent anisotropic oscillator with linear interaction with time-dependent parameters
Along with bilinear Hamiltonians, linear field modes play an important role in various domains. For instance, linear field modes appear naturally in the bosonization of one-dimensional cold atomic gases and in the collective atomic recoil laser using a Bose-Einstein condensate [67; 68; 69]. To study whether these linear terms affect the covariance matrix, and hence the separability criterion, we consider the following Hamiltonian for an anisotropic oscillator with TD parameters placed under an external TD interaction, which is linear in the coordinates, in two-dimensional NC-space.
\[\hat{H}_{nc}=\sum_{j=1}^{2}\left[\frac{1}{2m_{j}}\hat{\tilde{p}}_{j}^{2}+\frac{1}{2}m_{j}\tilde{\omega}_{j}^{2}\hat{\tilde{x}}_{j}^{2}+\mathcal{E}_{j}\hat{\tilde{x}}_{j}\right], \tag{39}\]
where the effective mass (\(m(t)=(m_{1}(t),m_{2}(t))\)) and frequency (\(\omega(t)=(\omega_{1}(t),\omega_{2}(t))\)) of the oscillator, as well as the parameters (\(\tilde{\mathcal{E}}(t)=(\mathcal{E}_{1}(t),0,\mathcal{E}_{2}(t),0)\)) of the external linear interaction, are TD. Using the Bopp shift (9), we write the equivalent Hamiltonian \(\hat{H}_{c}\) in commutative space as
\[\hat{H}_{c}=\frac{1}{2}X^{T}\hat{\mathcal{H}}X+\mathcal{E}^{T}X, \tag{40}\]
where
\[\hat{\mathcal{H}}=\left(\begin{array}{cc}\hat{C}&\hat{A}^{T}\\ \hat{A}&\hat{B}\end{array}\right),\ \ \hat{A}=\left(\begin{array}{cc}0&2\nu_{1}\\ -2\nu_{2}&0\end{array}\right),\ \hat{B}=\left(\begin{array}{cc}\alpha_{2}&0\\ 0&\frac{1}{\mu_{2}}\end{array}\right),\ \hat{C}=\left(\begin{array}{cc} \alpha_{1}&0\\ 0&\frac{1}{\mu_{1}}\end{array}\right), \tag{41}\]
and
\[\mathcal{E}=(\mathcal{E}_{1},\frac{\theta}{2\hbar}\mathcal{E}_{2},\mathcal{E}_ {2},-\frac{\theta}{2\hbar}\mathcal{E}_{1})^{T}. \tag{42}\]
The TD - parameters \(\alpha_{j}(t),\nu_{j}(t),\mu_{j}(t)\) are the concise expressions of the followings.
\[\frac{1}{\mu_{j}} = \frac{1}{m_{j}}+\frac{\theta^{2}}{4\hbar^{2}}m_{l}\tilde{\omega}_ {l}^{2}|\epsilon^{jl}|, \tag{43}\] \[\alpha_{j} = m_{j}\tilde{\omega}_{j}^{2}+\frac{\eta^{2}}{4\hbar^{2}m_{l}}| \epsilon^{jl}|,\] (44) \[\nu_{j} = \frac{1}{4\hbar m_{j}}(\eta+m_{1}m_{2}\theta\tilde{\omega}_{l}^{ 2}|\epsilon^{jl}|);\ j=1,2. \tag{45}\]
We have used the completely antisymmetric Levi-Civita symbol \(\epsilon^{ij}\) with the convention \(\epsilon^{12}=1\).
The intrinsic operator algebra (2) provides an intrinsic symplectic structure
\[[\hat{H}_{c},X]=\hbar\Sigma_{y}\hat{\cal H}X+\hbar\Sigma_{y}{\cal E}=-i\hbar\hat{ \Omega}X+\hbar\Sigma_{y}{\cal E}, \tag{46}\]
which has to be preserved for any physically equivalent transformation. We wish to find normal co-ordinates that diagonalize our system. In other words, we wish to find a similarity transformation that diagonalizes the nonsymmetric normal matrix \(\hat{\Omega}\). First we note that \(\hat{\Omega}\) has four distinct imaginary eigenvalues
\[\lambda=\left\{-i\lambda_{1},i\lambda_{1},-i\lambda_{2},i\lambda_{2}\right\}. \tag{47}\]
where
\[\lambda_{1}=\frac{1}{\sqrt{2}}\sqrt{b+\sqrt{\Delta}},\ \lambda_{2}=\frac{1}{ \sqrt{2}}\sqrt{b-\sqrt{\Delta}}, \tag{48}\]
with
\[\Delta = b^{2}-4c,\ \ \ \ \ b=\omega_{1}^{2}+\omega_{2}^{2}+6\nu_{1}\nu_{2}, \tag{49}\] \[c = \omega_{1}^{2}\omega_{2}^{2}+16\nu_{1}^{2}\nu_{2}^{2}-4\nu_{1}^{ 2}\omega_{1}^{2}\mu_{1}/\mu_{2}-4\nu_{2}^{2}\omega_{2}^{2}\mu_{2}/\mu_{1}. \tag{50}\]
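These eigenvalues are easy to obtain numerically. The sketch below (with illustrative parameter values that are not taken from the paper) evaluates \(\mu_{j},\alpha_{j},\nu_{j}\) from (43)-(45), assembles \(\hat{\mathcal{H}}\) as in (41), and diagonalizes \(\hat{\Omega}=i\hat{\Sigma}_{y}\hat{\mathcal{H}}\); the eigenvalues indeed come out as two purely imaginary \(\pm\) pairs, consistent with (47).

```python
import numpy as np

# Illustrative (assumed) parameter values, hbar = 1.
hbar, theta, eta = 1.0, 0.5, 0.3
m = np.array([1.0, 4.0])
w = np.array([1.0, 1.0])          # the tilde-omega_j of Eq. (39)

# Eqs. (43)-(45); the index l refers to "the other" mode.
inv_mu = 1.0 / m + theta**2 / (4 * hbar**2) * (m * w**2)[::-1]
alpha = m * w**2 + eta**2 / (4 * hbar**2 * m[::-1])
nu = (eta + m[0] * m[1] * theta * (w**2)[::-1]) / (4 * hbar * m)

# Hamiltonian matrix of Eq. (41) in the ordering (x1, p1, x2, p2)
A = np.array([[0.0, 2 * nu[0]], [-2 * nu[1], 0.0]])
B = np.diag([alpha[1], inv_mu[1]])
C = np.diag([alpha[0], inv_mu[0]])
H = np.block([[C, A.T], [A, B]])

# Omega = i * Sigma_y * H; since i*sigma_y = J2, this is simply J @ H.
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
J = np.block([[J2, np.zeros((2, 2))], [np.zeros((2, 2)), J2]])
eig = np.linalg.eigvals(J @ H)
print(np.sort_complex(eig))       # two +/- i*lambda pairs, real parts ~ 0
```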
If \(u_{j},\ j=1,2\) are the left eigenvectors corresponding to the eigenvalues \(-i\lambda_{j}\), then the left eigenvectors corresponding to \(i\lambda_{j}\) are given by \(u_{j}^{*}\). Right eigenvectors are related to left eigenvectors by \(v_{i}=-\Sigma_{y}u_{i}^{\dagger},\ (i=1,2).\) Accordingly, the similarity transformation \(\hat{Q}\) (as well as \(\hat{Q}^{-1}\)) can be constructed by arranging the eigenvectors column-wise. In particular,
\[\hat{Q}=(v_{1},v_{1}^{*},v_{2},v_{2}^{*}),\ \hat{Q}^{-1}=(u_{1}^{T},(u_{1}^{*})^ {T},u_{2}^{T},(u_{2}^{*})^{T}). \tag{51}\]
One can verify that
\[\hat{Q}^{\dagger}=-\hat{\Sigma}_{z}\hat{Q}^{-1}\hat{\Sigma}_{y},\ \mbox{with}\ \hat{\Sigma}_{z}=\mbox{diag}(\sigma_{z},\sigma_{z}). \tag{52}\]
For our computation purpose, we write the explicit form of the left eigenvectors as follows.
\[u_{i}=\frac{1}{k_{i}}\left(-i\gamma_{i1},\gamma_{i2},\gamma_{i3},i\gamma_{i4} \right),\ i=1,2. \tag{53}\]
with
\[\gamma_{i1} = \lambda_{i}\mu_{1}\mu_{2}(\lambda_{i}^{2}-\omega_{2}^{2}-4\nu_{ 1}\nu_{2}), \tag{54}\] \[\gamma_{i2} = \mu_{2}(\lambda_{i}^{2}-\omega_{2}^{2})+4\mu_{1}\nu_{1}^{2},\] (55) \[\gamma_{i3} = 2\mu_{1}\mu_{2}\nu_{1}(\lambda_{i}^{2}-4\nu_{1}\nu_{2})+2\nu_{ 2}\mu_{2}^{2}\omega_{2}^{2},\] (56) \[\gamma_{i4} = 2\lambda_{i}(\mu_{1}\nu_{1}+\mu_{2}\nu_{2}). \tag{57}\]
\(k_{j}\) are the normalization constants. We get the following diagonal representation of \(\hat{\Omega}\).
\[\hat{\Omega}_{D}=\hat{Q}^{-1}\hat{\Omega}\hat{Q}=diag(-i\lambda_{1},i\lambda_{1},-i\lambda_{2},i\lambda_{2}), \tag{58}\]
One can define a normal co-ordinate
\[\hat{A}=(\hat{A}_{1},\hat{A}_{2},\hat{A}_{3},\hat{A}_{4})^{T}=(\hat{a}_{1},\hat {a}_{1}^{\dagger},\hat{a}_{2},\hat{a}_{2}^{\dagger})^{T}, \tag{59}\]
with the help of
\[X=QA,\;X^{\dagger}=\hat{A}^{\dagger}(-\hat{\Sigma}_{z}\hat{Q}^{-1}\hat{\Sigma }_{y}), \tag{60}\]
which satisfy the following algebra of annihilation (\(\hat{a}_{i}\)) and creation (\(\hat{a}_{i}^{\dagger}\)) operators.
\[[\hat{A}_{\alpha},\hat{A}_{\beta}]=i(\hat{\Sigma}_{y})_{\alpha\beta}. \tag{61}\]
Accordingly, what follows is that the TD- Schrodinger equation (TDSE) \(\hat{H}_{c}\psi=i\hbar\frac{\partial\psi}{\partial t}\) is transformed in normal co-ordinates as
\[\frac{1}{2}A^{\dagger}\Sigma A\psi+{\cal E}^{T}QA\psi=i\hbar\frac{\partial \psi}{\partial t}, \tag{62}\]
where
\[\hat{\Sigma}=i\hat{\Sigma}_{z}\hat{\Omega}_{D}=diag(\lambda_{1},\lambda_{1}, \lambda_{2},\lambda_{2}). \tag{63}\]
That means we have an equivalent TDSE
\[\hat{H}\psi=i\hbar\frac{\partial\psi}{\partial t} \tag{64}\]
with the following equivalent Hamiltonian for our system.
\[\hat{H}(t)=\sum_{j=1}^{2}\left(\hat{a}_{j}^{\dagger}\hat{a}_{j}+\frac{1}{2} \right)\lambda_{j}+{\cal E}^{T}\hat{Q}\hat{A}. \tag{65}\]
The Hamiltonian (65) can be split into
\[\hat{H}(t)=\hat{H}_{0}(t)+\hat{V}(t), \tag{66}\]
where the eigenstates of
\[\hat{H}_{0}(t)=\sum_{i=1}^{2}\lambda_{i}(t)\hat{N}_{i},\;\mbox{with}\;\hat{N} _{i}=\hat{a}_{i}^{\dagger}\hat{a}_{i}, \tag{67}\]
are known. The additional TD- part is linear in field operators. In particular,
\[\hat{V}(t)=g(t)+\sum_{i=1}^{2}\left(f_{i}(t)\hat{a}_{i}^{\dagger}+f_{i}^{*}(t) \hat{a}_{i}\right), \tag{68}\]
with
\[f_{j}(t) = \frac{1}{k_{j}}[\mathcal{E}_{2}(\gamma_{j4}+\frac{\theta}{2\hbar} \gamma_{j1})-i\mathcal{E}_{1}(\gamma_{j2}+\frac{\theta}{2\hbar}\gamma_{j3})], \;j=1,2. \tag{69}\] \[g(t) = \frac{1}{2}(\lambda_{1}+\lambda_{2}). \tag{70}\]
The real parameters \(\gamma_{jk}\), expressed in terms of the noncommutative parameters, are given in (54)-(57).
Our first task is to find the solutions of (66). Since \(\lambda_{j}(t)\) and \(f_{j}(t)\) are TD, we shall adopt the Lewis-Riesenfeld phase-space invariant method to solve the system.
## IV Lewis-Riesenfeld invariant operator and the solution of Schrodinger equation
We shall apply the Lewis-Riesenfeld method as prescribed in [55]. Let us assume that the eigenvalue equation of the TD-operator
\[\hat{\mathcal{O}}(t)=\prod_{j=1}^{2}\hat{\mathcal{O}}_{j}(t),\;\text{with}\; \hat{\mathcal{O}}_{j}(t)=\exp\left(\mu_{j}(t)\hat{N}_{j}\right), \tag{71}\]
is known for arbitrary complex TD-parameters \(\mu_{j}(t)\) (\(j=1,2\)). However, \(\hat{\mathcal{O}}(t)\) is not an invariant associated with the total Hamiltonian \(\hat{H}(t)\). In particular,
\[\hat{\Theta}(t)=\partial_{t}\hat{\mathcal{O}}+(i\hbar)^{-1}[\hat{\mathcal{O}},\hat{H}]\neq 0. \tag{72}\]
We readily see that if \(|\psi(t)\rangle\) is a solution of the TDSE (64) associated with \(\hat{H}(t)\), then \(\hat{\mathcal{O}}(t)|\psi(t)\rangle\) is a solution of TDSE for the TD-Hamiltonian
\[\hat{\hat{H}}(t)=\hat{H}(t)+i\hbar\Theta(t)\hat{\mathcal{O}}^{-1}(t). \tag{73}\]
We introduce a TD-unitary transformation \(\hat{\Lambda}(t)\), which relates the solutions of both TDSE, associated with \(\hat{H}(t)\) and \(\hat{\hat{H}}(t)\), leading to
\[i\hbar\partial_{t}[\hat{\Lambda}(t)\hat{\mathcal{O}}(t)|\psi(t)\rangle]=\hat{ \hat{\mathcal{H}}}(t)[\hat{\Lambda}(t)\hat{\mathcal{O}}(t)|\psi(t)\rangle], \tag{74}\]
where
\[\hat{\hat{\mathcal{H}}}(t)=\hat{\Lambda}(t)\hat{H}(t)\hat{\Lambda}^{-1}(t)+i\hbar \hat{\Lambda}(t)\hat{\Theta}(t)\hat{\mathcal{O}}^{-1}(t)\hat{\Lambda}^{-1}(t)+i \hbar\partial_{t}[\hat{\Lambda}(t)]\hat{\Lambda}^{-1}(t). \tag{75}\]
We demand \(\hat{\hat{\mathcal{H}}}(t)=\hat{H}(t)\), which implies
\[\partial_{t}\hat{\Lambda}(t)-\frac{1}{i\hbar}[\hat{H}(t),\hat{\Lambda}(t)]=- \hat{\Lambda}(t)\hat{\Theta}(t)\hat{\mathcal{O}}^{-1}(t). \tag{76}\]
One can verify that \(\hat{\Lambda}(t)\hat{\mathcal{O}}(t)|\psi(t)\rangle\) is a solution of TDSE associated with \(\hat{H}(t)\). Using (72) and (76), we find that \(\hat{\mathcal{I}}(t)=\hat{\Lambda}(t)\hat{\mathcal{O}}(t)\) is a TD-invariant for \(\hat{H}(t)\). In other words,
\[\partial_{t}[\hat{\Lambda}(t)\hat{\mathcal{O}}(t)]+(i\hbar)^{-1}[\hat{ \Lambda}(t)\hat{\mathcal{O}}(t),\hat{H}(t)]=0. \tag{77}\]
According to the Lewis-Riesenfeld (LR) theorem, if \(|n,t\rangle\) is an eigenstate of \(\hat{\mathcal{I}}(t)\), then \(|n,t\rangle e^{i\phi(t)}\) is a solution of the TDSE for \(H(t)\), for some TD-phase factor \(\phi(t)\), which can be computed from a consistency condition.
Accordingly, our first aim is to compute \(\Theta(t)\). Then we have to take a consistent ansatz for \(\Lambda\) so that an LR-invariant operator \(\hat{\mathcal{I}}(t)\) can be constructed. The construction of the eigenstates of \(\hat{\mathcal{I}}(t)\), along with the phase-factor \(\phi(t)\), will lead to our desired solution.
Since \([\hat{H}_{0},\hat{\mathcal{O}}_{i}]=0\), equation (72) is reduced to
\[\hat{\Theta}(t)=\partial_{t}\hat{\mathcal{O}}+(i\hbar)^{-1}[\hat{\mathcal{O} },\hat{V}]. \tag{78}\]
Using Kubo's identity
\[[\hat{A},e^{\mu\hat{B}}]=-\int_{0}^{-\mu}e^{(\mu+u)\hat{B}}[\hat{A},\hat{B}]e ^{-u\hat{B}}du, \tag{79}\]
we have the following identities.
\[\begin{array}{ll}\left[\hat{a}_{i},\hat{\mathcal{O}}_{j}\right]=(e^{\mu_{j} }-1)\hat{\mathcal{O}}_{j}\hat{a}_{j}\delta_{ij},&\hat{\mathcal{O}}_{j}\hat{a}_ {j}\hat{\mathcal{O}}_{j}^{-1}=e^{-\mu_{j}}\hat{a}_{j},\\ \left[\hat{a}_{i}^{\dagger},\hat{\mathcal{O}}_{j}\right]=(e^{-\mu_{j}}-1)\hat {\mathcal{O}}_{j}\hat{a}_{j}^{\dagger}\delta_{ij},&\hat{\mathcal{O}}_{j}\hat{ a}_{j}^{\dagger}\hat{\mathcal{O}}_{j}^{-1}=e^{\mu_{j}}\hat{a}_{j}^{\dagger}, \end{array} \tag{80}\]
which leads to
\[\hat{\Theta}(t)\hat{\mathcal{O}}^{-1}=\sum_{j=1}^{2}\hat{\mu}_{j}(t)\hat{a}_{ j}^{\dagger}\hat{a}_{j}+(i\hbar)^{-1}\sum_{j=1}^{2}[f_{j}(e^{\mu_{j}}-1)\hat{a}_{j} ^{\dagger}+f_{j}^{*}(e^{-\mu_{j}}-1)\hat{a}_{j}]. \tag{81}\]
We now consider the following ansatz for \(\hat{\Lambda}(t)\) in terms of displacement operators.
\[\hat{\Lambda}(t)=\prod_{j=1}^{2}\hat{\Lambda}_{j}(t),\;\text{with}\;\hat{ \Lambda}_{j}(t)=\hat{\mathcal{D}}_{j}(\beta_{j})=e^{\beta_{j}\hat{a}_{j}^{ \dagger}-\beta_{j}^{*}\hat{a}_{j}}. \tag{82}\]
The commutation relations and transformation rules for the annihilation, creation and number operators with \(\hat{\Lambda}(t)\) are the followings.
\[\left[\hat{a}_{k},\hat{\Lambda}(t)\right] = \beta_{k}\hat{\Lambda}(t),\left[\hat{a}_{k}^{\dagger},\hat{\Lambda }(t)\right]=\beta_{k}^{*}\hat{\Lambda}(t), \tag{83}\] \[\left[\hat{a}_{k}^{\dagger}\hat{a}_{k},\hat{\Lambda}(t)\right] = (\beta_{k}\hat{a}_{k}^{\dagger}+\beta_{k}^{*}\hat{a}_{k}-|\beta_{ k}|^{2})\hat{\Lambda}(t).\] (84) \[\hat{\Lambda}^{\dagger}\hat{a}_{j}\hat{\Lambda} = \hat{a}_{j}+\beta_{j},\;\hat{\Lambda}^{\dagger}\hat{a}_{j}^{ \dagger}\hat{\Lambda}=\hat{a}_{j}^{\dagger}+\beta_{j}^{*},j=1,2. \tag{85}\]
Using (83) and (84), we get
\[[\hat{H}(t),\hat{\Lambda}(t)]=\sum_{k=1}^{2}[\lambda_{k}(\beta_{k}\hat{a}_{k}^{\dagger}+\beta_{k}^{*}\hat{a}_{k})+(f_{k}\beta_{k}^{*}+f_{k}^{*}\beta_{k})-\lambda_{k}|\beta_{k}|^{2}]\hat{\Lambda}. \tag{86}\]
Moreover
\[\partial_{t}\hat{\Lambda}=\sum_{k=1}^{2}[\dot{\beta}_{k}\hat{a}_{k}^{\dagger}-\dot{\beta}_{k}^{*}\hat{a}_{k}+\beta_{k}\dot{\beta}_{k}^{*}-\frac{1}{2}\partial_{t}|\beta_{k}|^{2}]\hat{\Lambda}. \tag{87}\]
And
\[\hat{\Lambda}\hat{\Theta}\hat{\mathcal{O}}^{-1} = \sum_{k=1}^{2}[\dot{\mu}_{k}(\hat{a}_{k}^{\dagger}\hat{a}_{k}- \beta_{k}\hat{a}_{k}^{\dagger}-\beta_{k}^{*}\hat{a}_{k}+|\beta_{k}|^{2})+(i \hbar)^{-1}\{f_{k}(e^{\mu_{k}}-1)(\hat{a}_{k}^{\dagger}-\beta_{k}^{*}) \tag{88}\] \[+f_{k}^{*}(e^{-\mu_{k}}-1)(\hat{a}_{k}-\beta_{k})\}]\hat{\Lambda}.\]
Equation (76) provides consistency conditions through the following first order differential equations of the displacement parameters \(\beta_{k}(t)\) and the unknowns \(\mu_{k}(t)\).
\[i\hbar\dot{\beta}_{j} = \lambda_{j}\beta_{j}+i\hbar\dot{\mu}_{j}\beta_{j}-f_{j}(e^{\mu_{ j}}-1),\;j=1,2. \tag{89}\] \[-i\hbar\dot{\beta}_{j}^{*} = \lambda_{j}\beta_{j}^{*}+i\hbar\dot{\mu}_{j}\beta_{j}^{*}-f_{j}^{ *}(e^{-\mu_{j}}-1),\;j=1,2.\] (90) \[\dot{\mu}_{1} = \dot{\mu}_{2}=0, \tag{91}\]
along with the constraint
\[i\hbar\sum_{k=1}^{2}(\beta_{k}\dot{\beta}_{k}^{*}-\frac{1}{2} \partial_{t}|\beta_{k}|^{2}) = \sum_{k=1}^{2}(f_{k}\beta_{k}^{*}+f_{k}^{*}\beta_{k}-\lambda_{k}| \beta_{k}|^{2}+(-1)^{k}i\hbar\dot{\mu}_{k}|\beta_{k}|^{2} \tag{92}\] \[+f_{k}(e^{\mu_{k}}-1)\beta_{k}^{*}+f_{k}^{*}(e^{-\mu_{k}}-1)\beta _{k}).\]
From (91), we see that \(\mu_{1}\) and \(\mu_{2}\) are constants. On the other hand, comparison of (89) with the complex conjugate of (90) leads to
\[\mu_{1}=i\mu_{10},\;\mu_{2}=i\mu_{20},\;\mbox{with}\;\mu_{10},\mu_{20}\in \mathbb{R}. \tag{93}\]
The set of independent equations are then
\[i\hbar\dot{\beta}_{j}=\lambda_{j}\beta_{j}-f_{j}(e^{i\mu_{j0}}-1),\;j=1,2. \tag{94}\]
Using (93) in the constraint equation (92) we have
\[\sum_{j=1}^{2}[\lambda_{j}|\beta_{j}|^{2}+2\Re(f_{j}\beta_{j}^{*} )+\hbar\Im(\dot{\beta}_{j}\beta_{j}^{*})]=0. \tag{95}\] \[\mu_{j0}=(2n_{j}+1)\pi;\;n_{j}=0,\pm 1,\pm 2,\pm 3....... \tag{96}\]
The notations \(\Re(z)\) and \(\Im(z)\) stand for the real and imaginary parts of \(z\), respectively. Given the time-dependent parameters \((\lambda_{j}(t),f_{j}(t),\;j=1,2)\), one can solve (94). In particular,
\[\beta_{j}(t)=\beta_{j0}e^{-i\Omega_{j}/\hbar}-\frac{2i}{\hbar}e^{-i\Omega_{j}/ \hbar}\int^{t}f_{j}(\tau)e^{i\Omega_{j}(\tau)/\hbar}d\tau,\;j=1,2. \tag{97}\]
where \(\beta_{j0}\) are integration constants, and
\[\Omega_{j}(t)=\int^{t}\lambda_{j}(\tau)d\tau. \tag{98}\]
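As a sanity check (not part of the original paper), the closed form (97) can be compared against a direct numerical integration of (94) for a single mode with \(\mu_{j0}=\pi\) (so that \(e^{i\mu_{j0}}-1=-2\)) and \(\hbar=1\). The functions \(\lambda(t)\) and \(f(t)\) below are arbitrary illustrative choices.

```python
import numpy as np

lam = lambda t: 1.0 + 0.1 * t          # illustrative lambda_j(t)
f = lambda t: 0.05 * np.exp(-t)        # illustrative f_j(t)
beta0, T, N = 0.1 + 0.0j, 5.0, 20000
ts = np.linspace(0.0, T, N + 1)
h = ts[1] - ts[0]

# Direct RK4 integration of Eq. (94): i * dbeta/dt = lam*beta + 2*f
rhs = lambda t, b: -1j * (lam(t) * b + 2.0 * f(t))
b = beta0
for t in ts[:-1]:
    k1 = rhs(t, b)
    k2 = rhs(t + h / 2, b + h / 2 * k1)
    k3 = rhs(t + h / 2, b + h / 2 * k2)
    k4 = rhs(t + h, b + h * k3)
    b += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Closed form (97): beta(t) = e^{-i Omega} [beta0 - 2i * int_0^t f e^{i Omega} dtau]
Omega = np.concatenate(([0.0], np.cumsum((lam(ts[1:]) + lam(ts[:-1])) / 2 * h)))
integrand = f(ts) * np.exp(1j * Omega)
integral = np.sum((integrand[1:] + integrand[:-1]) / 2) * h    # trapezoid rule
beta_closed = np.exp(-1j * Omega[-1]) * (beta0 - 2j * integral)

print(abs(b - beta_closed))   # small; only discretization error remains
```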
For each choice of a pair of integers \((l_{1},l_{2})\) in (96), we have a Lewis-Riesenfeld invariant operator, which reads
\[\hat{I}_{l_{1},l_{2}}(t)=\hat{\Lambda}(t)\hat{\mathcal{O}}=\hat{\mathcal{D}}( \beta_{1})\hat{\mathcal{D}}(\beta_{2})e^{(2l_{1}+1)i\pi\hat{N}_{1}+(2l_{2}+1)i \pi\hat{N}_{2}}. \tag{99}\]
One can readily identify that the eigenstates \(\{|n_{1},n_{2};\beta_{1},\beta_{2}\rangle\}\) of the invariant operator \(\hat{I}_{l_{1},l_{2}}\) are the displaced number states
\[|n_{1},n_{2};\beta_{1},\beta_{2}\rangle=e^{(2l_{1}+1)i\pi n_{1}+(2l_{2}+1)i \pi n_{2}}\hat{\mathcal{D}}(\beta_{1})\hat{\mathcal{D}}(\beta_{2})|n_{1},n_{ 2}\rangle. \tag{100}\]
### Phase factor
Time-dependent phase-factors \(\phi_{n_{1},n_{2}}(t)\), associated with the solutions \((|\psi(t)\rangle=\sum_{n_{1},n_{2}}c_{n_{1},n_{2}}|n_{1},n_{2};\beta_{1}, \beta_{2}\rangle e^{i\phi_{n_{1},n_{2}}(t)})\) of TDSE are given by
\[\phi_{n_{1},n_{2}}(t)=\frac{1}{\hbar}\int_{0}^{t}\langle n_{1},n_{2}|\hat{ \Lambda}^{\dagger}(i\hbar\frac{\partial}{\partial t}-\hat{H}(\tau))\hat{ \Lambda}|n_{1},n_{2}\rangle d\tau. \tag{101}\]
We can split phase-factors into a geometric part (\(\phi_{n_{1},n_{2}}^{(g)}\)) and a dynamic part (\(\phi_{n_{1},n_{2}}^{(d)}\)) as
\[\phi_{n_{1},n_{2}}^{(g)}(t)=\frac{1}{\hbar}\int_{0}^{t}\langle n _{1},n_{2}|\hat{\Lambda}^{\dagger}i\hbar\frac{\partial}{\partial t}\hat{ \Lambda}|n_{1},n_{2}\rangle d\tau, \tag{102}\] \[\phi_{n_{1},n_{2}}^{(d)}(t)=-\frac{1}{\hbar}\int_{0}^{t}\langle n _{1},n_{2}|\hat{\Lambda}^{\dagger}\hat{H}\hat{\Lambda}|n_{1},n_{2}\rangle d\tau. \tag{103}\]
Using (87) along with the unitarity of \(\hat{\Lambda}\) we can simplify (102). Moreover, the identities (85) reduce \(\phi^{(g)}_{n_{1},n_{2}}(t)\) to
\[\phi^{(g)}_{n_{1},n_{2}}(t)=\frac{i}{2}\sum_{k=1}^{2}\int_{0}^{t}\left(\frac{ \partial\beta_{k}}{\partial\tau}\beta_{k}^{*}-\beta_{k}\frac{\partial\beta_{k} ^{*}}{\partial\tau}\right)d\tau=-\sum_{k=1}^{2}\int_{0}^{t}\Im\left(\frac{ \partial\beta_{k}}{\partial\tau}\beta_{k}^{*}\right)d\tau, \tag{104}\]
where we have used the orthonormality conditions (\(\langle n_{i},n_{k}|n_{j},n_{l}\rangle=\delta_{ij}\delta_{kl}\)) of number states. For dynamical phase, we first note the following transformation rule of \(\hat{H}\) under \(\hat{\Lambda}\).
\[\hat{\Lambda}^{\dagger}\hat{H}\hat{\Lambda}=\hat{H}+\sum_{j=1}^{2}\lambda_{j} (\beta_{j}\hat{a}_{j}^{\dagger}+\beta_{j}^{*}\hat{a}_{j})+(\lambda_{j}|\beta_ {j}|^{2}+f_{j}\beta_{j}^{*}+f_{j}^{*}\beta_{j}). \tag{105}\]
(105) simplifies (103), which enables us to write
\[\phi^{(d)}_{n_{1},n_{2}}(t)=-\frac{1}{\hbar}\int_{0}^{t}\left[g(\tau)+\sum_{ k=1}^{2}(\lambda_{k}n_{k}+\lambda_{k}|\beta_{k}|^{2}+f_{k}\beta_{k}^{*}+f_{k}^{* }\beta_{k})\right]d\tau. \tag{106}\]
From (104) and (106), we see that time dependent phase factors
\[\phi_{n_{1},n_{2}}(t)=\phi^{(g)}_{n_{1},n_{2}}(t)+\phi^{(d)}_{n_{1},n_{2}}(t) \tag{107}\]
are real functions of time. Thus we have a set of solutions of the TDSE under consideration. With these solutions, one can construct the covariance matrix and study the separability criterion, which is discussed in the next section.
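For completeness, the phase integrals (104) and (106) can also be evaluated by simple quadrature once \(\beta(t)\) is known. The sketch below (again with illustrative \(\lambda(t)\), \(f(t)\), a single mode, \(n=0\) and \(\hbar=1\); the single-mode analogue \(g=\lambda/2\) of (70) is an assumption of the sketch) is not part of the original paper.

```python
import numpy as np

lam = lambda t: 1.0 + 0.1 * t           # illustrative lambda(t)
f = lambda t: 0.05 * np.exp(-t)         # illustrative f(t)
g = lambda t: 0.5 * lam(t)              # single-mode analogue of Eq. (70), assumed
beta0, T, N = 0.1 + 0.0j, 5.0, 20000
ts = np.linspace(0.0, T, N + 1)
h = ts[1] - ts[0]

# beta(t) on the grid from the closed form (97)
Omega = np.concatenate(([0.0], np.cumsum((lam(ts[1:]) + lam(ts[:-1])) / 2 * h)))
inner = f(ts) * np.exp(1j * Omega)
cum = np.concatenate(([0.0 + 0j], np.cumsum((inner[1:] + inner[:-1]) / 2 * h)))
beta = np.exp(-1j * Omega) * (beta0 - 2j * cum)
beta_dot = -1j * (lam(ts) * beta + 2.0 * f(ts))     # from Eq. (94)

trap = lambda y: np.sum((y[1:] + y[:-1]) / 2) * h   # trapezoid quadrature

phi_g = -trap(np.imag(beta_dot * np.conj(beta)))                        # Eq. (104)
phi_d = -trap(g(ts) + lam(ts) * np.abs(beta)**2
              + 2.0 * np.real(f(ts) * np.conj(beta)))                   # Eq. (106), n = 0
print(phi_g, phi_d)
```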
## V Separability criterion for the system
Since the connection between the NC-space co-ordinates and the usual commutative space co-ordinates is given by
\[\hat{\hat{X}}_{k}=\sum_{j=1}^{4}\upsilon_{kj}\hat{X}_{j}, \tag{108}\]
one can write
\[\langle\hat{\hat{X}}_{k}\rangle=\sum_{j=1}^{4}\upsilon_{kj}\left(\sum_{l=1}^{ 4}Q_{jl}\langle\hat{A}_{l}\rangle\right), \tag{109}\]
where \(\upsilon_{kj}\) is the \(kj^{th}\)-element of \(\Upsilon_{D}\) as specified in (9), and \(Q_{\alpha\beta}\) is the \(\alpha\beta^{th}\) element of the similarity transformation matrix \(\hat{Q}\) as specified in (51). Thus the expectation value of \(\hat{A}_{l}\) over the state \(|n_{1},n_{2};\beta_{1},\beta_{2}\rangle e^{i\phi_{n_{1},n_{2}}}=\hat{\Lambda}| n_{1},n_{2}\rangle e^{i\phi_{n_{1},n_{2}}}\) can be written as
\[\langle\hat{A}_{l}\rangle = \langle n_{1},n_{2}|\hat{\Lambda}^{\dagger}\hat{A}_{l}\hat{ \Lambda}|n_{1},n_{2}\rangle;\ \ \because\phi_{n_{1},n_{2}}(t)\in\mathbb{R}. \tag{110}\] \[= \langle n_{1},n_{2}|(\hat{A}_{l}+\tilde{\beta}_{l})|n_{1},n_{2}\rangle,\]
where,
\[\tilde{\beta}=(\tilde{\beta}_{1},\tilde{\beta}_{2},\tilde{\beta}_{3},\tilde{ \beta}_{4})=(\beta_{1},\beta_{1}^{*},\beta_{2},\beta_{2}^{*}). \tag{111}\]
Since \(\hat{A}_{l}\) are just annihilation or creation operators, and the number states are orthonormalized, we have \(\langle n_{1},n_{2}|\hat{A}_{l}|n_{1},n_{2}\rangle=0\). Therefore
\[\langle\hat{\tilde{X}}_{\alpha}\rangle=\sum_{j=1}^{4}\upsilon_{\alpha j} \left(\sum_{l=1}^{4}Q_{jl}\tilde{\beta}_{l}\right). \tag{112}\]
Similarly,
\[\langle\{\hat{\tilde{X}}_{\alpha},\hat{\tilde{X}}_{\beta}\}\rangle=2\sum_{j=1} ^{4}\sum_{l=1}^{4}\upsilon_{\alpha j}\upsilon_{\beta l}\Re(Q_{jl}^{(13)})+2 \langle\hat{\tilde{X}}_{\alpha}\rangle\langle\hat{\tilde{X}}_{\beta}\rangle, \tag{113}\]
where
\[Q_{jl}^{(13)}=\bar{Q}_{lj}^{(13)}=n_{1}\bar{Q}_{j1}Q_{l1}+(1+n_{1})Q_{j1}\bar {Q}_{l1}+n_{2}\bar{Q}_{j3}Q_{l3}+(1+n_{2})Q_{j3}\bar{Q}_{l3}. \tag{114}\]
Here the notation \(\bar{z}\) denotes the complex conjugate of the complex number \(z\).
Using (112) and (113) in (12), we see that the covariance matrix elements are reduced to
\[\tilde{\mathcal{V}}_{\alpha\beta}=\sum_{j=1}^{4}\sum_{l=1}^{4}\upsilon_{ \alpha j}\upsilon_{\beta l}\Re(Q_{jl}^{(13)}), \tag{115}\]
which are independent of the displacement parameters \(\beta_{j}\). However, \(\tilde{\mathcal{V}}_{\alpha\beta}\) depends on the matrix elements of \(\hat{Q}\). In other words, the covariance matrix depends on the noncommutative parameters, as well as on the time-dependent parameters \(m_{j},\tilde{\omega}_{j}\) of the system. On the other hand, we see that the contributions of the external influence \(\tilde{\mathcal{E}}\) are manifested only through \(\tilde{\beta}\), which has no effect on the covariance matrix. Therefore, the entanglement property (if any) between the NC space (NCS) coordinate degrees of freedom (DOF) arises only through the NC-space parameters entering \(\upsilon_{ij}\). Moreover, an arbitrary linear displacement does not affect the separability criterion for the system.
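Relations (114)-(115) are a simple index contraction: writing \(Q^{(13)}\) as a matrix \(M\), one has \(\tilde{\mathcal{V}}=\hat{\Upsilon}_{D}\,\Re(M)\,\hat{\Upsilon}_{D}^{T}\). The sketch below illustrates this contraction in Python; the matrix \(Q\) is a random placeholder standing for the transformation built in (51), so the numerical output carries no physical meaning.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))   # placeholder for Eq. (51)
n1, n2 = 0, 0                                                # occupation numbers

# Eq. (114): built from columns 1 and 3 of Q (0 and 2 in zero-based indexing)
q1, q3 = Q[:, 0], Q[:, 2]
M = (n1 * np.outer(q1.conj(), q1) + (1 + n1) * np.outer(q1, q1.conj())
     + n2 * np.outer(q3.conj(), q3) + (1 + n2) * np.outer(q3, q3.conj()))

# Eq. (115): contraction with the Bopp-shift matrix Upsilon_D of Eq. (10)
hbar, theta, eta = 1.0, 0.3, 0.2
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
Pi = np.diag([theta, eta])
Upsilon = np.block([[np.eye(2), -(Pi @ J2) / (2 * hbar)],
                    [(Pi @ J2) / (2 * hbar), np.eye(2)]])
V_nc = Upsilon @ M.real @ Upsilon.T
print(V_nc)
```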
The separability criterion will provide a restriction on the parameter values for which the NCS DOF are separable. For completeness, what follows is an inequality that has to be satisfied by the parameters for the separability of the NCS DOF.
Using the explicit forms of the elements of \(\hat{Q}\), we get the following forms of \(Q_{jl}^{(13)}\).
\[Q_{11}^{(13)} = (2n_{1}+1)\gamma_{12}^{2}+(2n_{2}+1)\gamma_{22}^{2}=q_{11}, \tag{116}\] \[Q_{12}^{(13)} = i(\gamma_{11}\gamma_{12}+\gamma_{21}\gamma_{22})=\bar{Q}_{21}^{( 13)}=iq_{12},\] (117) \[Q_{13}^{(13)} = i(\gamma_{12}\gamma_{14}+\gamma_{22}\gamma_{24})=\bar{Q}_{31}^{( 13)}=iq_{13},\] (118) \[Q_{14}^{(13)} = -(2n_{1}+1)\gamma_{12}\gamma_{13}-(2n_{2}+1)\gamma_{22}\gamma_{23 }=\bar{Q}_{41}^{(13)}=q_{14},\] (119) \[Q_{22}^{(13)} = (2n_{1}+1)\gamma_{11}^{2}+(2n_{2}+1)\gamma_{21}^{2}=q_{22},\] (120) \[Q_{23}^{(13)} = (2n_{1}+1)\gamma_{11}\gamma_{14}+(2n_{2}+1)\gamma_{21}\gamma_{24 }=\bar{Q}_{32}^{(13)}=q_{23},\] (121) \[Q_{24}^{(13)} = i(\gamma_{11}\gamma_{13}+\gamma_{21}\gamma_{23})=\bar{Q}_{42}^{( 13)}=iq_{24},\] (122) \[Q_{33}^{(13)} = (2n_{1}+1)\gamma_{14}^{2}+(2n_{2}+1)\gamma_{24}^{2}=q_{33},\] (123) \[Q_{34}^{(13)} = i(\gamma_{13}\gamma_{14}+\gamma_{23}\gamma_{24})=\bar{Q}_{43}^{( 13)}=iq_{34},\] (124) \[Q_{44}^{(13)} = (2n_{1}+1)\gamma_{13}^{2}+(2n_{2}+1)\gamma_{23}^{2}=q_{44}. \tag{125}\]
Here we have defined \(q_{\alpha\beta}\in\mathbb{R}\) for convenience. The covariance matrix elements in NC-space now read
\[\tilde{\mathcal{V}}_{11} = q_{11}-\frac{\theta}{\hbar}q_{14}+\frac{\theta^{2}}{4\hbar^{2}}q _{44},\ \tilde{\mathcal{V}}_{12}=0,\ \tilde{\mathcal{V}}_{13}=0,\ \tilde{\mathcal{V}}_{14}=\frac{\hbar_{e}}{\hbar}q_{14}-\frac{\eta}{2\hbar}q_{1 1}-\frac{\theta}{2\hbar}q_{44},\] \[\tilde{\mathcal{V}}_{21} = 0,\ \tilde{\mathcal{V}}_{22}=q_{22}+\frac{\eta}{\hbar}q_{23}+\frac{ \eta^{2}}{4\hbar^{2}}q_{33},\ \tilde{\mathcal{V}}_{23}=\frac{\hbar_{e}}{\hbar}q_{23}+\frac{\theta}{2\hbar}q_{2 2}+\frac{\eta}{2\hbar}q_{33},\ \tilde{\mathcal{V}}_{24}=0,\] \[\tilde{\mathcal{V}}_{31} = 0,\ \tilde{\mathcal{V}}_{32}=\tilde{\mathcal{V}}_{23},\ \tilde{\mathcal{V}}_{33}=q_{33}+\frac{\theta}{\hbar}q_{23}+\frac{\theta^{2}}{4 \hbar^{2}}q_{22},\ \tilde{\mathcal{V}}_{34}=0,\] \[\tilde{\mathcal{V}}_{41} = \tilde{\mathcal{V}}_{14},\ \tilde{\mathcal{V}}_{42}=0,\ \tilde{\mathcal{V}}_{43}=0,\ \tilde{\mathcal{V}}_{44}=q_{44}-\frac{\eta}{\hbar}q_{14}+\frac{\eta^{2}}{4 \hbar^{2}}q_{11}. \tag{126}\]
Using the expressions (126) in (36), (37) and (38), surprisingly we obtain the following simple form for \(\hat{V}_{ij}\).
\[\hat{V}_{11}=\left(\begin{array}{cc}q_{11}&0\\ 0&q_{22}\end{array}\right),\ \hat{V}_{22}=\left(\begin{array}{cc}q_{33}&0\\ 0&q_{44}\end{array}\right),\ \hat{V}_{12}=\left(\begin{array}{cc}0&q_{14}\\ q_{23}&0\end{array}\right). \tag{127}\]
One can now use (127) in (28) to study the separability of the NC-space coordinate degrees of freedom. Since the generalized Peres-Horodecki criterion (Simon's condition) is sufficient only for Gaussian states, from now on we shall use \(n_{1}=n_{2}=0\), for which the displaced number states reduce to (Gaussian) coherent states. For convenience, we shall stick to the natural unit system, for which \(\hbar=1\).
Interestingly, one can see that \(Ps\) has the commutative space (\(\theta\to 0,\eta\to 0\)) limit
\[\lim_{\theta\to 0,\eta\to 0,\hbar\to 1}Ps=\frac{1}{16}-\frac{1}{4}m_{1}^{2}m_{2}^{4} \omega_{1}^{2}(\omega_{2}^{2}-\omega_{1}^{2})^{4}, \tag{128}\]
which is always positive (thus separable) for an isotropic oscillator. However, for an anisotropic oscillator, it might be negative for specific values of the parameters. For instance, for \(m_{2}=\omega_{2}=1\) and \(m_{1}\omega_{1}=1\), separable states are obtained for
\[1-\frac{1}{\sqrt{2}}\leq\omega_{1}^{2}\leq 1+\frac{1}{\sqrt{2}}. \tag{129}\]
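The window (129) follows directly from (128); a quick symbolic check (not part of the original paper) is shown below.

```python
import sympy as sp

w = sp.symbols('w', positive=True)   # stands for omega_1^2
# Commutative-space limit (128) with m2 = omega2 = 1 and m1*omega1 = 1,
# so that m1^2 * omega1^2 = 1:
Ps = sp.Rational(1, 16) - sp.Rational(1, 4) * (1 - w)**4
print(sp.solve_univariate_inequality(Ps >= 0, w, relational=False))
# Interval(1 - sqrt(2)/2, 1 + sqrt(2)/2), i.e. Eq. (129)
```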
Thus, we see that the entanglement appears due to the anisotropy in the oscillator. The required anisotropy might also arise from the noncommutativity of space. In the next section, we outline a toy model to obtain a more physical picture.
## VI Toy model : only time-dependent NC-parameter \(\theta(t)\)
Let us consider the case in which the time dependence appears only through the NC-space parameter \(\theta(t)\) (co-ordinate noncommutativity). Moreover, let us assume that no momentum-momentum noncommutativity is present, i.e., \(\eta\to 0\). For quantitative discussion, we set the following parameter values.
\[\hbar\to 1,\;m_{1}=1,m_{2}=4,\tilde{\omega}_{1}=\tilde{\omega}_{2}=1. \tag{130}\]
For the present study, let us assume that the TD parameter \(\theta(t)\) is monotonically decreasing with time, such that the spatial noncommutativity dies off after a sufficiently long time. In other words, we assume that the dynamics of NC-space is such that it becomes a usual commutative space as \(t\rightarrow\infty\). For convenience, we choose
\[\theta(t)=1/\sqrt{1+t}. \tag{131}\]
Using the parameters (130) and (131), we get the following explicit form of \(Ps\) (as mentioned in (28)) for the state \(|0,0;\beta_{1},\beta_{2}\rangle\).
\[Ps=\frac{1}{16(2+t)^{12}}[t^{12}+3\times 2^{3}t^{11}-2^{3}(14431 +2^{11}t_{7})t^{10}-2^{5}(78689+10264t_{7})t^{9}\] \[+2^{4}(204010327+51198584t_{7})t^{8}+2^{5}(1267439685+272489392t_{7} )t^{7}\] \[+2^{5}(6717167061+1248828188t_{7})t^{6}+2^{8}(2486103583+402820658t _{7})t^{5}\] \[+2^{7}(9004286997+1281090628t_{7})t^{4}+2^{9}(2555083273+321598952 t_{7})t^{3}\] \[+2^{9}(1776825135+199309604t_{7})t^{2}+2^{13}(43306567+4360181t_{7} )t\] \[+2^{11}(29038819+2642044t_{7})],\;\mbox{with}\;t_{7}=\frac{\sqrt {7(15+8t)}}{1+t}. \tag{132}\]
If \(Ps\geq 0\), then the NC-space co-ordinate degrees of freedom are separable; otherwise, they are entangled. One can verify that
\[\lim_{t\rightarrow\infty}Ps=1/16, \tag{133}\]
which confirms that NC-space is gradually transformed into usual commutative space through the dynamics of space.
Since the maximum value of \(Ps\) is \(\sim 10^{6}\), whereas its minimum value is \(\sim-10^{-3}\), it is difficult to visualize the behaviour of \(Ps\) in a single plot. We have therefore divided the time axis into three regions in which \(Ps\) is positive or negative and show the transitions between positive and negative values. FIG. 1 and FIG. 2 represent the separable domain, whereas FIG. 3 represents the entangled domain. Interestingly, there is a two-way transition: at around \(t\approx 213.001\) the separable states become entangled, and they become separable again at approximately \(t\approx 275.331\).
## VII Conclusions
In the present paper, we have studied the separability of the NC-space coordinate degrees of freedom by mapping the NC-space system, through the Bopp shift, onto an equivalent Hamiltonian with bilinear and linear field-mode interactions in commutative space. The exact form of the solutions of the TD system enabled us to construct the exact form of the required covariance matrix. It turns out that the covariance matrix looks the same as that of the ordinary oscillator in commutative space, with the only difference being in the matrix elements, which are dependent on the NCS parameters.
It turns out that the existence of anisotropy in the oscillator determines the separability of the states. In particular, for an isotropic oscillator, the bipartite states are always separable, whereas particular anisotropic parameter values may destroy the separability. This observation opens up the possibility of engineering the entanglement by tuning the parameters of the system. In particular, since oscillators in NCS are structurally similar to the dynamics of a charged particle inside a magnetic field (e.g., the Maxwell-Chern-Simons model in the long-wavelength limit in two spatial dimensions), the time evolution of the NCS parameters is equivalent to a time-dependent external magnetic field, which can be controlled in a laboratory environment with present-day experimental capacity. For a simple illustration of the experimental possibility, we have outlined a simple toy model by considering only spatial noncommutativity and anisotropy in the effective mass. The transition points of the switching between separability and entanglement are illustrated graphically (FIG. 3).
## VIII Author declarations
The authors have no conflicts to disclose.
## IX Availability of data
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
## Appendix A Eigenvalues of \(\hat{\Omega}\)
The characteristic polynomial \(p(\lambda)=\det(\lambda\hat{\mathbb{I}}-\hat{\Omega})\) of \(\hat{\Omega}\) is given by
\[p(\lambda)=\lambda^{4}+b\lambda^{2}+c. \tag{134}\]
where
\[c=\omega_{x}^{2}\omega_{y}^{2},\ \ b=\alpha_{0}\omega_{x}^{2}+\frac{1}{\alpha_{0}} \omega_{y}^{2}+4\nu_{1}\nu_{2}(\sqrt{\alpha_{0}}+\frac{1}{\sqrt{\alpha_{0}}})^{ 2}, \tag{135}\]
with
\[\omega_{x}^{2}=4\nu_{1}\nu_{2}(\frac{\mu_{1}\omega_{1}^{2}}{4\mu_{2}\nu_{2}^{2} }-1),\ \omega_{y}^{2}=4\nu_{1}\nu_{2}(\frac{\mu_{2}\omega_{2}^{2}}{4\mu_{1}\nu_{1}^{2} }-1),\ \alpha_{0}=\frac{\mu_{2}\nu_{2}}{\mu_{1}\nu_{1}}. \tag{136}\]
Using the explicit forms of \(\mu_{i},\nu_{i},\omega_{i},\ i=1,2\), one can show that \(c\geq 0,\ b\geq 0\) as follows.
First we observe that \(b\) and \(c\) can be written as
\[b=\omega_{1}^{2}+\omega_{2}^{2}+6\nu_{1}\nu_{2},\ c=\omega_{1}^{2}\omega_{2}^ {2}+16\nu_{1}^{2}\nu_{2}^{2}-4\nu_{1}^{2}\omega_{1}^{2}\mu_{1}/\mu_{2}-4\nu_{2 }^{2}\omega_{2}^{2}\mu_{2}/\mu_{1}. \tag{137}\]
\(b>0\) for nonzero positive values of \(\omega_{1},\omega_{2},\nu_{1},\nu_{2}\). On the other hand, \(c\) can be factorized as
\[c=16\nu_{1}^{2}\nu_{2}^{2}\left(\frac{\mu_{1}\omega_{1}^{2}}{4\mu_{2}\nu_{2}^{ 2}}-1\right)\left(\frac{\mu_{2}\omega_{2}^{2}}{4\mu_{1}\nu_{1}^{2}}-1\right). \tag{138}\]
If we use the explicit forms of \(\omega_{1},\omega_{2},\nu_{1},\nu_{2}\), we have
\[\frac{\mu_{1}\omega_{1}^{2}}{4\mu_{2}\nu_{2}^{2}} = \frac{4m_{2}^{2}\hbar^{2}}{(\eta+m_{1}m_{2}\theta\tilde{\omega}_ {1}^{2})^{2}}\left(m_{1}\tilde{\omega}_{1}^{2}+\frac{\eta^{2}}{4\hbar^{2}m_{2} }\right)\left(\frac{1}{m_{2}}+\frac{1}{4\hbar^{2}}m_{1}\tilde{\omega}_{1}^{2} \theta^{2}\right) \tag{139}\] \[= \left(1+\frac{4\hbar^{2}}{\eta^{2}}m_{1}m_{2}\tilde{\omega}_{1}^{ 2}\right)\left(1+\frac{\theta^{2}}{4\hbar^{2}}m_{1}m_{2}\tilde{\omega}_{1}^{2} \right)\left(1+\frac{\theta}{\eta}m_{1}m_{2}\tilde{\omega}_{1}^{2}\right)^{-2}\] \[= \frac{1}{\left(1+\frac{\theta}{\eta}m_{1}m_{2}\tilde{\omega}_{1}^ {2}\right)^{2}}\left(\left(1+\frac{\theta}{\eta}m_{1}m_{2}\tilde{\omega}_{1}^ {2}\right)^{2}+\frac{2\theta}{\eta}m_{1}m_{2}\tilde{\omega}_{1}^{2}(\frac{2 \hbar^{2}}{\eta\theta}+\frac{\eta\theta}{8\hbar^{2}}-1)\right)\] \[= 1+\frac{2\theta m_{1}m_{2}\tilde{\omega}_{1}^{2}}{\eta\left(1+ \frac{\theta}{\eta}m_{1}m_{2}\tilde{\omega}_{1}^{2}\right)^{2}}\left(\frac{ \sqrt{2}\hbar}{\sqrt{\eta\theta}}-\frac{\sqrt{\eta\theta}}{2\sqrt{2}\hbar} \right)^{2}\geq 1.\]
Similarly, we see that
\[\frac{\mu_{2}\omega_{2}^{2}}{4\mu_{1}\nu_{1}^{2}}=1+\frac{2\theta m_{1}m_{2} \tilde{\omega}_{2}^{2}}{\eta\left(1+\frac{\theta}{\eta}m_{1}m_{2}\tilde{\omega }_{2}^{2}\right)^{2}}\left(\frac{\sqrt{2}\hbar}{\sqrt{\eta\theta}}-\frac{ \sqrt{\eta\theta}}{2\sqrt{2}\hbar}\right)^{2}\geq 1. \tag{140}\]
Therefore, one can conclude \(c\geq 0\).
Moreover, the discriminant of the characteristic polynomial
\[\Delta=\left(\alpha_{0}\omega_{x}^{2}-\frac{1}{\alpha_{0}}\omega_{y}^{2} \right)^{2}+8\nu_{1}\nu_{2}(1+\alpha_{0})^{2}(\omega_{x}^{2}+\omega_{y}^{2})+ \frac{4\nu_{1}^{2}\nu_{2}^{2}}{\alpha_{0}^{2}}(1+\alpha_{0})^{4}\geq 0. \tag{141}\]
One can easily verify that
\[\lambda^{2}=\frac{1}{2}(-b\pm\sqrt{\Delta})\leq 0. \tag{142}\]
From (142), we can conclude that \(\hat{\Omega}\) has four distinct imaginary eigenvalues
\[\lambda=\left\{-i\lambda_{1},i\lambda_{1},-i\lambda_{2},i\lambda_{2}\right\}. \tag{143}\]
where
\[\lambda_{1}=\frac{1}{\sqrt{2}}\sqrt{b+\sqrt{\Delta}},\;\lambda_{2}=\frac{1}{ \sqrt{2}}\sqrt{b-\sqrt{\Delta}}. \tag{144}\]
| We are studying the separability of the noncommutative (NC) space coordinate degrees of freedom with the generalized Peres-Horodecki separability criterion (Simon's condition) for a bipartite Gaussian state. The non-symplectic nature of the transformation between the usual commutative space and NC space restricts the use of Simon's condition in NC space. To enable the use of the separability criterion in NC space, we transform the NC system to an equivalent Hamiltonian in commutative space via a Bopp shift. For a general study, we consider a bilinear Hamiltonian with time-dependent (TD) parameters, along with a TD external interaction, which is linear in field modes. The system is transformed into canonical form while preserving the intrinsic symplectic structure ($Sp(4,\mathbb{R})$). The solution of the time-dependent Schrödinger equation is obtained using the Lewis-Riesenfeld invariant method (LRIM). Expectation values of the observables (thus the covariance matrix) are constructed from the states |
2301.07846 | ClusterLog: Clustering Logs for Effective Log-based Anomaly Detection | With the increasing prevalence of scalable file systems in the context of High Performance Computing (HPC), the importance of accurate anomaly detection on runtime logs is increasing. But as it currently stands, many state-of-the-art methods for log-based anomaly detection, such as DeepLog, have encountered numerous challenges when applied to logs from many parallel file systems (PFSes), often due to their irregularity and ambiguity in time-based log sequences. To circumvent these problems, this study proposes ClusterLog, a log pre-processing method that clusters the temporal sequence of log keys based on their semantic similarity. By grouping semantically and sentimentally similar logs, this approach aims to represent log sequences with the smallest amount of unique log keys, intending to improve the ability of a downstream sequence-based model to effectively learn the log patterns. The preliminary results of ClusterLog indicate not only its effectiveness in reducing the granularity of log sequences without the loss of important sequence information but also its generalizability to different file systems' logs. | Chris Egersdoerfer, Dong Dai, Di Zhang | 2023-01-19T01:54:48 | http://arxiv.org/abs/2301.07846v1 | # ClusterLog: Clustering Logs for Effective Log-based Anomaly Detection
###### Abstract
With the increasing prevalence of scalable file systems in the context of High Performance Computing (HPC), the importance of accurate anomaly detection on runtime logs is increasing. But as it currently stands, many state-of-the-art methods for log-based anomaly detection, such as DeepLog, have encountered numerous challenges when applied to logs from many parallel file systems (PFSes), often due to their irregularity and ambiguity in time-based log sequences. To circumvent these problems, this study proposes ClusterLog, a log pre-processing method that clusters the temporal sequence of log keys based on their semantic similarity. By grouping semantically and sentimentally similar logs, this approach aims to represent log sequences with the smallest amount of unique log keys, intending to improve the ability of a downstream sequence-based model to effectively learn the log patterns. The preliminary results of ClusterLog indicate not only its effectiveness in reducing the granularity of log sequences without the loss of important sequence information but also its generalizability to different file systems' logs.
## I Introduction
In light of growing datasets, increasing problem complexity, and more compute-intensive algorithms, there is no question that large-scale computing systems, such as Cloud or High-Performance Computing (HPC), are a growing area of interest. In these systems, distributed storage frameworks are the critical foundation for providing global data access, and their health is therefore critical to the function of the entire system. However, due to increasing scale and complexity, distributed storage systems are subject to various bugs, failures, and anomalies in production, which lead to data loss, service outages and degradation of quality of service [15, 9, 6]. It is thereby critical to detect malfunctions accurately and in a timely manner, so that system operators can promptly pinpoint issues and resolve them immediately to mitigate losses.
It has been proven that runtime logs, which record detailed runtime information about storage systems during operation, are a valuable information source to detect potential anomalies in distributed storage systems. These runtime logs are generated by log statements written in the source code (using simple printf function or logging libraries such as Log4j [4]). These logs record important internal states of distributed storage systems, such as key variable values, return values, and performance statistics, all of which can be useful to reveal system anomalies. As a result, an extensive amount of research on log-based anomaly detection has been done recently [8, 26, 16, 25, 34, 10, 7, 19, 12, 22, 32, 21, 20, 9].
The main theme of modern Log-based anomaly detection solutions is to apply machine learning, especially deep learning methods, onto log sequences to detect anomalies. The common process of doing so is for logs to be parsed and processed first, then their normal sequence to be learned by sequence-based pattern detection models such as LSTM [18]. Having learned what normal and abnormal log sequences look like, these algorithms are then shown runtime logs and are tasked to classify them accurately. Take one of the most representative works, DeepLog [12], as an example: during training, it first parses the runtime logs into templates and represents each of these templates using a single integer. Through this process, a sequence of logs becomes a sequence of integral identifiers, which
will be learned using an LSTM model. During runtime, the anomalies are defined by whether or not the actual next log identifier is within the set of identifiers which the LSTM model predicts. In another recent study, NeuralLog [18], the runtime logs are represented as vectors instead of integers to contain more semantic information. Still, these sequences of embedded vectors are fed into a DL model to learn the normal/abnormal sequences.
Although these sequence-based log anomaly detection solutions work well for many storage systems such as HDFS [2], they have one key issue: they rely heavily on the quality of the log sequences in the training data. The sequence of logs must be both accurate and representative of the log system's logic. Such sequences of logs are often expensive to obtain in the real-world. For instance, the HDFS logs used in existing studies were pre-processed by aggregating logs with the same data block ID into a sequence, regardless of how far these log entries are from each other in the runtime. Unfortunately, such pre-processing is not always possible. In fact, many parallel storage systems, such as Lustre [5] and BeeGFS [3], do not have any common identifier (ID) in log entries to denote their relevance. Missing such global IDs makes it difficult to identify the matching events or to build log sequences accurately, resulting in only raw time-based log sequences available [35].
In addition, the raw log sequences generated from a distributed environment are quite ambiguous. One source of ambiguity is the _clock skew_ in distributed systems, as logs generated concurrently across multiple nodes are merged into a single file where their order is not always equivalent to the order of time in which they occurred. Secondly, interleaved concurrent threads present a further, and more complex problem. As different nodes run separate execution threads concurrently, the unrelated processes are often logged in random, interleaving order. Directly learning from these often noisy sequences of run-time logs, can be problematic, and require a much larger labeled dataset and longer training time.
To address these issues, in this study we propose ClusterLog, a log pre-processing method which clusters individual runtime logs based on their similarity to effectively reduce the ambiguity of log sequences and improve the accuracy of downstream sequence-based learning. The intuition behind ClusterLog is driven by the idea that grouping similar runtime logs together will result in less random variation within the log sequence due to a lesser amount of unique key identifiers. In addition, grouping logs based on their similarity can still retain the vital sequence information between different types of high-level file system operations. For example, Lustre log sequences contain many sequences of logs where actions are all very similar, but because of the lack of block ID, they are highly irregular in time sequence. Grouping some of these similar actions is intended to eliminate a large portion of this irregularity, providing a cleaner sequence to be learned. Further, the robust and generalizable nature of this approach allows it to be applied to numerous types of file system logs and on limited amounts of available training data, both of which are not adequately captured by previous approaches.
The rest of this paper is organized as follows. In Section II, we present the design and implementation of ClusterLog. In Section III, we discuss the evaluation setup and results. Finally, in sections IV and V, we discuss related work and lay out our future work, respectively.
## II Design and Implementation
The implementation of ClusterLog can be effectively broken into four parts. The first is rudimentary preprocessing, where the log content is extracted from the log files, resulting in only the natural language sequence of each log which can be matched throughout the log file to create a set of unique log keys. From here, the preprocessed log keys are fed into a pre-trained semantic similarity model to produce unique embeddings for each unique log key. Simultaneously (or in sequence) to the semantic similarity embedding step, the preprocessed log keys are fed into a pre-trained sentiment prediction model to result in a 0 or 1 prediction of each log's sentiment. The output of the semantic similarity embedding model and the sentiment prediction model are concatenated at this point and serve as the entire latent representation of each log key. Following the concatenation, the embeddings are fit to a clustering algorithm where the resulting cluster labels are used to replace the original sequence of log keys. Finally, the sequence of these cluster labels are fed
into the downstream sequential analysis algorithm. In our current implementation, we use DeepLog's sequence-learning part as the downstream algorithm. It is a two-layer LSTM network which is used to predict a probability distribution of the next log key given a specified window of previous logs. If the next log key is within the top candidates of this distribution, the sequence is considered normal, otherwise it is labeled as anomalous. The code base which implements the design described in this section is available using the following link: [https://github.com/DIR-LAB/ClusterLog](https://github.com/DIR-LAB/ClusterLog).
### _Preprocessing_
The first step of the ClusterLog implementation is rudimentary preprocessing of the raw log files. The goal of this step is to remove any excess characters and information from each individual log key, to result in a stripped key containing only the natural language sequence of the log which can be extracted as a unique log key and matched with many other logs in the original file. While it is ideal to reduce the log to its most generic natural language form to result in the smallest possible amount of unique log keys, an approximation of this will likely suffice if the former is too difficult to achieve. This is sufficient as the semantic embedding model will almost identically embed keys which are of the same logging template but are not a perfect match, meaning they will be clustered together even at the lowest of distance thresholds. As preprocessing can occasionally be very tedious, approximation at this step is highly valuable in regards to time and effort required to set up this approach compared to others. Depending on the file system which is being analyzed, the exact preprocessing steps may vary, but with the same goal in mind, preprocessing generally consists of extracting the log content, removing unique identifiers such as time stamps or Block IDs, removing unnecessary characters or character patterns, and unabbreviating words.
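As a rough illustration, a minimal preprocessing sketch could look as follows; the regular expressions, the abbreviation map, and the log file name below are hypothetical placeholders, not the exact rules used for Lustre or HDFS.

```python
import re

# Hypothetical abbreviation map; the real mapping depends on the target file system.
ABBREVIATIONS = {"blk": "block", "recv": "receive", "src": "source", "dst": "destination"}

def preprocess(raw_log: str) -> str:
    """Strip a raw log line down to an approximate natural-language log key."""
    text = raw_log.lower()
    text = re.sub(r"blk_-?\d+", " ", text)                     # block IDs (HDFS-style)
    text = re.sub(r"\d{1,3}(\.\d{1,3}){3}(:\d+)?", " ", text)  # IP addresses and ports
    text = re.sub(r"[^a-z\s]", " ", text)                      # remaining digits and punctuation
    words = [ABBREVIATIONS.get(w, w) for w in text.split()]    # unabbreviate common shorthand
    return " ".join(words)

# Unique log keys extracted from a raw log file (file name is assumed).
with open("lustre.log") as f:
    unique_keys = sorted({preprocess(line) for line in f})
```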
### _Semantic Embedding_
Once the logs have been preprocessed, the natural language sequence of each log is fed into a pre-trained semantic embedding model. Though a number of applicable models do exist, the most accurate embeddings were achieved using the all-mpnet-base-v2 [1] model which itself was pre-trained on the original mpnet-base [28] model, published in 2020, and further fine-tuned on over 2.1 Billion sentence pairs to optimize the model for semantic similarity. Though a further fine-tuning step specific to the semantic similarity among log keys would likely improve the semantic embedding quality even further, there is no known labeled dataset for this task, and creating labeled sentence pair data for semantic similarity is an arduous task. The output of this embedding model is a latent representation of the natural language contained within the log key of shape (768, 1) where the distance in the latent space between semantically similar sentences is close together, and those which are semantically diverse are far apart.
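A minimal sketch of this step, using the sentence-transformers implementation of all-mpnet-base-v2 and continuing from the `unique_keys` list above:

```python
from sentence_transformers import SentenceTransformer

# Pre-trained semantic similarity model; each key maps to a 768-dimensional vector.
model = SentenceTransformer("all-mpnet-base-v2")
semantic_embeddings = model.encode(unique_keys)   # shape: (n_keys, 768)
```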
### _Sentiment Prediction_
While the semantic embedding model is valuable in that it creates a latent space where semantically (language-wise) similar sentences are close together and diverse ones are far apart, which can be used to cluster log keys reporting on the same or similar processes, these embeddings sometimes miss a valuable feature of natural language which is often included in logs. That feature is sentiment. A simple example of how classifying sentiment can add value to a semantic embedding is evident in the following two HDFS logs: _Exception in receiveBlock for block_ and _Receiving empty packet for block_. These two logs are similar in terms of semantics using our pre-trained model because of their shared key words. However, the first one indicates an anomaly while the second does not. This presents a challenge when clustering solely based on semantic embedding.
To work around this kind of issue, we follow the idea of our recent SentiLog work which leverages the sentiment of file system logs [35]. Specifically, we reuse the pre-trained sentiment classification model from SentiLog on a set of neutral/positive and negative log keys, and concatenate the rounded (0 or 1) output of this model to the overall embedding of the log. Adding a sentiment dimension helps properly separate logs which may be semantically similar but opposite with regard to sentiment. Ultimately, the semantic embedding, including the concatenated sentiment prediction, serves as a highly accurate latent representation of the log key which can confidently be used to cluster logs which are truly similar.
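A minimal sketch of the concatenation, assuming the pre-trained SentiLog-style classifier is wrapped in a hypothetical `sentiment_model` object whose `predict` method returns one score per key:

```python
import numpy as np

# Rounded sentiment predictions (0/1); the labeling convention is assumed here.
sentiment_labels = np.round(sentiment_model.predict(unique_keys)).reshape(-1, 1)

# Final latent representation: 768 semantic dimensions plus one sentiment dimension.
key_embeddings = np.concatenate([semantic_embeddings, sentiment_labels], axis=1)
```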
### _Clustering_
The ultimate goal of ClusterLog is to cluster the runtime logs into similar groups, so that we can use the same group ID to represent similar logs to reduce the complexity of the log sequences.
Following the semantic embedding and sentiment prediction steps, and their concatenation, the entire embedding is fed into a clustering algorithm in order to group the keys. However, there are a variety of problems associated with common clustering algorithms such as K-Means clustering when applied to this task. Initially, we ruled out K-Means primarily because of the need to specify the number of centroids, which presents a major challenge as it makes the approach highly dependent upon the training data being fully representative of the logs during deployment, as new, unseen log keys would be forced to be grouped with an existing cluster, regardless of actual semantic and sentiment similarity. To add to this, finding an optimal number of centroids in K-Means based purely on the embedding data presents a challenge in and of itself as classic methods like the Elbow method are inconclusive on our datasets.
In response to this, other clustering methodologies which create an arbitrary amount of clusters dependent on a specific distance threshold between clusters or points seem to be more suited. However, in practice, these too can be difficult to assess, as finding the correct hyper-parameters (i.e. the distance) for the given dataset is not always clear. But through extensive exploration, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [11] gave the most promising results when applied to a variety of file systems' logs. DBSCAN is simply described as a clustering algorithm which defines data points as core points, border points, and noise points based on _Epsilon_, a distance threshold, and _minimum samples_, both of which must be manually set.
The reason for choosing DBSCAN was primarily driven by the fact that it gave a more consistent, and often more valuable, insight on how to set the threshold hyper-parameter for good results. In contrast to KMeans and other classic algorithms, the epsilon parameter used by DBSCAN can be used to locate density gaps in the data as well as the amount of change required to overcome these gaps. Additionally, _Epsilon_ is highly intuitive to tune as it can be easily understood as modifying the radius around each point which is used to search for neighboring points. This means that when applied to any system, some analysis of how far apart embeddings of similar entries are can be used to gain meaningful insight as to what a good _Epsilon_ value may be. As shown in Figure 1 and 2, when plotting epsilon from 0.4 to 1 against the amount of clusters created by DBSCAN for HDFS and
Fig. 1: Number of generated clusters on _Lustre_ logs using different DBSCAN threshold.
Fig. 2: Number of generated clusters on _HDFS_ logs using different DBSCAN threshold.
Lustre, respectively, both graphs indicate noticeable ledges where multiple steps of epsilon do not result in any change in the number of clusters found by DBSCAN. The thresholds at or around the longest of these ledges provide a good baseline for setting the epsilon hyperparameter. Using a very minimal training and testing set allows us to verify such hyperparameter values.
With regard to the second hyperparameter which was mentioned, _minimum samples_, it was set to a constant value of 1. By setting minimum samples to 1, DBSCAN does not predict any of the data points as noise; rather, any outliers simply become a unique cluster of just themselves. This was done for the simple reason that labeling all outlier points as -1 to indicate noise effectively creates a meaningless cluster of completely unrelated log keys which are only grouped because they are too far from others. This severely impacts the ability of the sequential analysis algorithm to properly predict anomalies.
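A minimal scikit-learn sketch of the clustering step and the epsilon sweep discussed above (the sweep range follows the figures; the final epsilon value of 0.7 is purely illustrative):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Sweep epsilon to locate "ledges" where the number of clusters is stable.
for eps in np.arange(0.4, 1.01, 0.05):
    n_clusters = len(set(DBSCAN(eps=eps, min_samples=1).fit_predict(key_embeddings)))
    print(f"eps={eps:.2f} -> {n_clusters} clusters")

# Final clustering; min_samples=1 turns outliers into singleton clusters instead of noise (-1).
cluster_ids = DBSCAN(eps=0.7, min_samples=1).fit_predict(key_embeddings)
key_to_cluster = dict(zip(unique_keys, cluster_ids))
```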
### _Sequential Analysis_
The final part of ClusterLog is the downstream sequential analysis algorithm. In the current implementation, we focus on comparing with a state-of-the-art log-based anomaly detection method, DeepLog. Hence, we use DeepLog's sequence-learning part as the downstream algorithm. It is based on a standard 2 layer LSTM architecture which uses ten sequential log keys as input to predict the probability distribution of the next log key. From this probability distribution, log keys within the top 9 are treated as part of a normal sequence, while other log keys in this distribution are treated as anomalies.
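The downstream model itself is DeepLog's LSTM; as a self-contained sketch of the same idea, a two-layer LSTM over cluster IDs with a top-candidate check could be written as follows (hidden size and other hyper-parameters are illustrative):

```python
import torch
import torch.nn as nn

class NextKeyLSTM(nn.Module):
    """Two-layer LSTM predicting a distribution over the next cluster ID."""
    def __init__(self, num_keys: int, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_keys)

    def forward(self, window):                    # window: (batch, 10, 1) of cluster IDs
        out, _ = self.lstm(window.float())
        return self.fc(out[:, -1, :])             # logits over the next cluster ID

def is_anomalous(model, window, next_key, num_candidates=9):
    """A single window (shape (1, 10, 1)) is normal if the observed next key
    is among the top predicted candidates."""
    with torch.no_grad():
        top = torch.topk(model(window), k=num_candidates, dim=-1).indices
    return next_key not in top
```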
## III Evaluation
To evaluate the performance of ClusterLog, we conducted evaluations on two vastly different distributed storage systems: Apache HDFS and Lustre. The combination of these different evaluations demonstrates the generality of ClusterLog on both noisy Lustre logs and more structured HDFS logs. We will show that our approach will outperform existing solutions in both areas through granularity reduction.
### _Lustre_
#### Iii-A1 Dataset
Due to the lack of publicly available anomaly detection datasets for Parallel File Systems, the dataset utilized in this paper was generated and labeled via a process of fault injection. Specifically, the fault injection was simulated using PFault [6], an open-source fault injection repository for Parallel File Systems. The labeling of this data was done by treating data before fault injection as normal and having domain experts manually label the data following a fault injection to ensure its relevance with the injected anomaly. This process resulted in a dataset containing 150,473 normal logs and 7,401 abnormal, anomalous logs. Additionally, the total number of unique log key templates in this dataset equated to 73.
#### Iii-A2 Training and Testing
The training setup for evaluating ClusterLog against the Lustre dataset can be simply described as learning a portion of the normal sequence of logs (without anomalies) to show the sequence model how normal logs look, and then using that knowledge to see if anomalies can be detected in a sequence of logs containing both normal and anomalous logs. Both the train and test set are created by forming a sliding window of size 10 among the log key sequence, where the goal is to create a probability distribution for the next log key. If the next log key is not within the top candidates of the probability distribution it is counted as an anomaly while testing. A distinction to this setup is made when comparing it with NeuralLog as NeuralLog trains on a set of both normal and anomalous logs. Additionally, NeuralLog does not predict the next log in a sequence, but rather classifies a given sequence as anomalous or not. Based on these approaches, Accuracy, Precision, Recall, and F-measure are calculated.
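For reference, the sliding-window construction described above can be sketched in a few lines (the function name is ours):

```python
def make_windows(sequence, window_size=10):
    """Build (input window, next key) pairs from a sequence of log keys or cluster IDs."""
    return [(sequence[i:i + window_size], sequence[i + window_size])
            for i in range(len(sequence) - window_size)]
```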
While most of the training and testing analysis was done using 25 percent of the total normal logs, in order to test and compare the limits of generalizability, a further test was carried out in which a smaller amount of the entire dataset was used to train the sequence model. This test was carried out by using just 1 percent of the entire Lustre training set. This split resulted in 1,500 total samples.
### _HDFS_
#### Iv-B1 Dataset
Among the majority of recent anomaly detection works, the same HDFS dataset is most commonly referenced in their respective evaluations. This makes it a good benchmark comparison for new results. In contrast to logs contained in the Lustre log dataset, HDFS logs contain a specific block ID in addition to their log content which allows them to be grouped by session. Anomalies are labeled on a per-session basis rather than a per-log basis for this dataset. The labeling itself was carried out by domain experts who discovered and labeled sessions containing one of 11 types of anomalies. This resulted in a dataset with 11,197,954 messages, grouped into 575,139 sessions, of which 16,808 sessions were anomalous [32].
#### Iv-B2 Training and Testing
During training, the sequence model utilizes only the normal log sessions, and each key in each session is represented by the corresponding cluster calculated by DBSCAN. Much like what was done with Lustre, the sequence model learns normal behavior by using a window of 10 cluster IDs. During testing, the trained sequence model is run against both the normal and the anomalous log sessions. For each session, the sequence model utilizes the window of 10 keys to predict a probability distribution for the next ID in the session. As opposed to Lustre, if the model predicts an anomaly, the session is labeled as an anomaly, instead of just the log key. If no anomalies are predicted within a given session, the session itself is predicted as being normal. This setup varied slightly in the case of NeuralLog, as it did for Lustre, as NeuralLog views each entire block and classifies the block as anomalous or normal.
### _Results Analysis_
#### Iv-C1 ClusterLog performance analysis with different settings and hyper-parameters
The first set of results, shown in Figure 3, represents the detailed performance of ClusterLog: 1) using numerous values for DBSCAN's epsilon parameter; 2) using the sentiment dimension or not; 3) using a variable or a fixed number of candidates.
The leftmost graph in both of these rows shows the results of ClusterLog without the concatenation of the sentiment prediction to the semantic embedding. The middle graph shows ClusterLog's performance with the concatenation of the sentiment prediction to the semantic embedding. The rightmost graph shows ClusterLog's performance with the concatenation of the sentiment prediction as well as using a variable number of candidates instead of a fixed number in the sequence model. Specifically, when the number of clusters descends below 27, the number of candidates used in the sequence model is set to the floor division of the number of clusters by 3 (e.g., for 20 clusters, candidates would be set to 6). This feature was added to ClusterLog based on the intuition that when the number of clusters becomes very low, using a relatively large number of candidates may force the sequence model to consider a key of very low calculated probability to be normal, increasing the number of false negatives.
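In code, the variable-candidate rule described above reduces to a one-line helper (the function name is ours):

```python
def num_candidates(n_clusters: int, default: int = 9) -> int:
    # Below 27 clusters, scale the candidate count with the cluster count
    # (floor division by 3, e.g. 20 clusters -> 6); otherwise use the fixed default.
    return n_clusters // 3 if n_clusters < 27 else default
```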
From these results, we can clearly observe not only how ClusterLog can achieve its best results by reducing a large amount of noise with high clustering thresholds, but also that the concatenation of sentiment as well as the adaptation of candidates to account for a low cluster count add to ClusterLog's ability to produce strong and consistent results.
#### Iv-C2 Overall Comparison with DeepLog and NeuralLog
In order to provide a reference among state-of-the-art anomaly detection solutions, we compared ClusterLog's results with the results from DeepLog using the most similar training and testing setup possible. For Lustre, a sliding window of size 10 was used to predict a probability distribution among the 73 unique log key templates where a succeeding log key within the top 9 of predictions was considered normal and anything else was considered anomalous. For HDFS, the setup was the same, but because HDFS logs can be grouped into sessions using their block IDs, each session was individually used as a sequence, and the prediction of an anomaly within a session classified the entire session as anomalous.
The results shown in Figure 4 and 5 for HDFS and Lustre respectively, show ClusterLog's improvement upon previous SOTA results. In the case of HDFS, while DeepLog maintains a slightly higher Precision, ClusterLog boasts a more significant improvement in Recall, primarily because clustering the most similar logs on the HDFS dataset allows ClusterLog to more easily detect what is truly a normal sequence amidst some noise. NeuralLog
also has high precision and recall, but both of these values are slightly outshined by ClusterLog and DeepLog on this train and test split. Overall, ClusterLog's very high precision and high recall boost its combined F1 score slightly higher than DeepLog's and NeuralLog's. However, due to the
Fig. 4: Performance comparison of ClusterLog, DeepLog, and NeuralLog on _HDFS_ dataset
Fig. 5: Performance comparison of ClusterLog, DeepLog and NeuralLog on _Lustre_ dataset
Fig. 3: Number of clusters (right \(y\)-axis) and F-score of the prediction (left \(y\)-axis) of ClusterLog under different parameter settings for _Lustre_ (top) and _HDFS_ (bottom). Adding the sentiment dimension (the middle figure) shows a clear improvement in F1 scores in a larger range of DBSCAN thresholds. Also, a variable number of candidates further improves the F1 scores.
ability to group HDFS logs into sessions by their block ID, the nature of these sessions is already fairly low in noise. This means the application of ClusterLog will still be effective, as is shown, but the difference in performance will likely not be as large as it may be for more noisy systems. Because Lustre is a good example of one such noisy system, the comparison of results on this dataset provides evidence of this larger performance gap.
As shown by Figure 5, the disparity in precision between DeepLog and ClusterLog is very large while it seems both approaches have a very high recall. This is because DeepLog is not able to learn the more noisy sequence effectively, and as a result classifies the vast majority of normal and abnormal logs as anomalous. NeuralLog does a much better job of holding up against this dataset as its vector representations for logs are more robust than DeepLog's log indexes for noisy data, but as shown later, it takes much more labeled data to extract good performance from NeuralLog due to a very large set of trainable weights. ClusterLog was able to learn the sequence and accurately discern between normal and abnormal logs. In total, the results on both of these datasets in comparison to modern approaches verify ClusterLog's ability to generalize across different file systems and achieve strong results.
#### Iii-B3 The impact of training dataset size using ClusterLog
The final comparison for ClusterLog represents the impact of using a small training set on the ability of ClusterLog, DeepLog and NeuralLog to accurately predict anomalies on the more noisy Lustre dataset. This comparison was done on the Lustre dataset because the difference in the number of clusters used by ClusterLog and DeepLog on this dataset is large, so the impact of changing the training set size should be emphasized by this fact. As shown in Figure 6, the prediction results for DeepLog and NeuralLog are massively impacted as the proportion of the training set used shrinks. The alternative approaches' low performance on such a small dataset can be explained by the fact that the normal interactions between a higher granularity of keys are unlikely to be explained by such a small dataset. In contrast, the figure shows that ClusterLog can maintain the same level of results even having trained on just 1 percent of the training set (1504 logs). By reducing the number of keys through clustering, ClusterLog is much more likely to not only see all of the keys enough times but also learn the normal interactions between them, ultimately leading to much stronger results on small datasets.
## IV Related Works
There is a large catalogue of both preprocessing and anomaly detection approaches for file system logs in existence today [31, 30, 24, 13, 29, 27, 23, 37, 8, 26, 16, 25, 34, 10, 7, 19, 12, 22, 32, 33, 21, 20, 9]. Among preprocessing techniques, the primary approaches include frequent pattern mining, clustering, and heuristic methods. Frequent pattern mining approaches extract frequently occurring patterns, with the ultimate goal of grouping the logs which have the highest degree of similarity [30, 31, 24]. The clustering-based preprocessing techniques employ clustering algorithms to group similar logs. The individual approaches in this category generally differ in their approach towards measuring similarity, as some may employ weighted edit differences on log pairs [13], while others may fit a given set of raw logs to a predefined set of events [29]. The final category of preprocessing approaches is heuristic-based approaches. Among these, Drain [17] is the most popular and relies on a fixed depth tree to
Fig. 6: Performance comparison between ClusterLog, DeepLog, and NeuralLog using smaller training data. ClusterLog is much more stable even with less training data, showing the effectiveness of clustering logs.
extract log templates. ClusterLog belongs to the clustering-based methods, with new designs that use semantic and sentiment features of logs.
Anomaly detection approaches can be categorized into non-machine learning and machine learning approaches. Of the more successful non-machine learning approaches, rule-based approaches are prominent, where domain experts formulate a set of rules which can be used to classify logs as anomalies [8, 26, 16, 25, 10]. Among machine learning techniques, pattern-seeking [32, 33] and invariant mining [21] based algorithms have proven effective on a variety of file system logs, but their results do not hold up on the irregularity of logs which do not include session identifiers [35]. Additionally, these approaches do not hold up to more recent deep learning-based solutions, which learn sequences of logs using deep neural networks. The first approach, LogAnomaly [22], borrows from NLP by proposing a log key embedding technique which vectorizes log keys based on synonyms and antonyms and calculates similarity between vector embeddings to predict anomalies. An additional approach in this domain, LogBERT [14], utilizes the Bidirectional Encoder Representations from Transformers (BERT) model, which has provided SOTA results in multiple domains. In this approach, it is not the next log key that is predicted, but rather a given sequence that is masked and then classified as normal or abnormal. More recently, similar attention-based approaches have shown very strong results across a variety of distributed file systems. Among these approaches, LogRobust [36] and NeuralLog [18] show the most promising results. While the encapsulated models are slightly different, both of these approaches utilize attention mechanisms to classify a sequence of logs represented by content embedding vectors. These approaches lend themselves well to unseen logs. However, training accurate classification on such high-dimensional embedding input requires a large amount of labeled training data. Additionally, the lack of grouping characteristics such as sequence identifiers in some logs and the resultant irregularity makes it difficult for these solutions to be effectively applied. ClusterLog is proposed to better work with these advanced models to achieve better performance and higher training efficiency.
## V Conclusion and Future Work
In this work, through ClusterLog, we have shown that granularity reduction in log key sequences is a viable approach to improve log anomaly detection in general. A key area where this method of anomaly detection outperforms previous approaches is in non-session parallel file system logs, which are highly irregular and noisy. Additionally, this work shows that granularity reduction in log sequences may allow users to reduce their effort by allowing for more lenient pre-processing, as well as smaller labeled datasets. This all is because of the effective reduction of noise and retention of important sequence information achieved by a good clustered representation of the file system log keys.
In the future, there are two primary areas in this work which are intended to be improved and many others which can be explored. To begin, due to ClusterLog's high dependence on the accuracy of sentence and sentiment embedding models, improvements in both of these areas could prove to be of value. The first direction of improvement is further exploration of clustering techniques to find an even clearer and easier-to-apply way of selecting the right clustering hyperparameters. The second direction is to further provide evidence for ClusterLog's generalizability by applying it to a larger scope of log systems. In addition to more parallel and distributed file systems, ClusterLog should prove its viability against other systems such as Kubernetes.
| Scalableファイルシステムの普及に伴い、高性能計算(HPC)のコンテキストにおいて、実行時にエラー検出の重要性は増加しています。しかし、現状では、多くの先進的なログベースの異常検出手法、例えばDeepLogは、多くの並列ファイルシステム(PFS)のログから適用される際に、多くの課題を克服する必要があり、その理由は時間軸に基づくログの不規則性と曖昧さを指すからです。これらの問題を回避するため、この研究は、時間的なシーケンスに基づいてログキーのセマンティクスに類似性を基づいてクラスタリングを行うログの前処理方法であるClusterLogを提案しています。セマンティクスと感情的に類似したログをグループ化することで、このアプローチは、最も少ないユニークなログキーでログシーケンスを表現することを目的としています。この方法が、後続するシーケンスベースのモデルがログパターンを効果 |
2303.00788 | Multi-task neural networks by learned contextual inputs | This paper explores learned-context neural networks. It is a multi-task
learning architecture based on a fully shared neural network and an augmented
input vector containing trainable task parameters. The architecture is
interesting due to its powerful task adaption mechanism, which facilitates a
low-dimensional task parameter space. Theoretically, we show that a scalar task
parameter is sufficient for universal approximation of all tasks, which is not
necessarily the case for more common architectures. Evidence towards the
practicality of such a small task parameter space is given empirically. The
task parameter space is found to be well-behaved, and simplifies workflows
related to updating models as new data arrives, and training new tasks when the
shared parameters are frozen. Additionally, the architecture displays
robustness towards cases with few data points. The architecture's performance
is compared to similar neural network architectures on ten datasets. | Anders T. Sandnes, Bjarne Grimstad, Odd Kolbjørnsen | 2023-03-01T19:25:52 | http://arxiv.org/abs/2303.00788v1 | # Multi-task neural networks by learned contextual inputs
###### Abstract
This paper explores learned-context neural networks. It is a multi-task learning architecture based on a fully shared neural network and an augmented input vector containing trainable task parameters. The architecture is interesting due to its powerful task adaption mechanism, which facilitates a low-dimensional task parameter space. Theoretically, we show that a scalar task parameter is sufficient for universal approximation of all tasks, which is not necessarily the case for more common architectures. Evidence towards the practicality of such a small task parameter space is given empirically. The task parameter space is found to be well-behaved, and simplifies workflows related to updating models as new data arrives, and training new tasks when the shared parameters are frozen. Additionally, the architecture displays robustness towards cases with few data points. The architecture's performance is compared to similar neural network architectures on ten datasets.
## 1 Introduction
A remarkable feat of nature is its ability to create endless variations on concepts that seem to be rigorously defined. Across all domains, we find classes of objects, clusters of phenomena, and groups of individuals, all of which seem to follow some overarching set of rules. And still, each separate instance puts a unique spin on the outcome. While this is fascinating to observe, it can be frustrating to model. One can quickly run into both performance issues and operational
challenges. An instance may have too few data points to produce a satisfying model. Or, there are just too many models to train and maintain.
Multi-task learning (MTL), as presented by Caruana (1997), is a learning strategy where multiple related models are trained simultaneously. The models share a subset of their parameters, which allows them to capture general domain knowledge. The purpose is to improve model generalization, by effectively training the shared parameters on more data.
Model frameworks based on the multi-task philosophy appear in many sciences. In the statistical literature they are, among others, referred to as mixed-, hierarchical-, multi-level-, or random-effect models. These methods see frequent use in sociology, economics, biometrics, and medicine (Demidenko, 2004; Raudenbush and Bryk, 2002). These models are often of moderate size and complexity. In the machine learning domain, there is broad diversity within transfer-learning and MTL (Lu et al., 2015; Zhang and Yang, 2021). Methods that combine transfer-learning or MTL with deep learning have seen success in several domains, with extensive research going into areas such as image analysis (Zamir et al., 2018; Kokkinos, 2017; Morid et al., 2021) and natural language processing (Raffel et al., 2020; Devlin et al., 2018; Brown et al., 2020). Engineering domains, such as solar and wind power (Wu et al., 2021; Dorado-Moreno et al., 2020), have also seen some MTL applications. However, in many cases, research is still needed to create satisfying solutions (Curreri et al., 2021).
Our focus is on non-linear regression problems. We are particularly interested in problems with many similar tasks that have complex input-output relationships. The tasks may have limited data. This class of problems captures many modeling challenges within engineering and industrial systems. Examples are cases with multiple instances of the same object, such as turbines in a wind farm, or batch processes, such as biomass accumulation in a fish farm or a field of crops. Here, each task typically has few observations, but they are, by design, quite similar. Solving such problems requires an architecture with a high degree of flexibility, where task adaptation can be done with few data points. Additionally, the architecture must facilitate the many operations needed to keep machine-learning models running in practice. Examples may be model updates in case of time-varying conditions, or the identification of new tasks that arrive at a later time.
We study _learned-context neural networks_. The architecture consists of two components. A neural network where all parameters are shared, and a set of task-specific parameter vectors. The task parameter vector serves as additional inputs to the neural network, which alters the network computation. This results in a powerful task adaptation mechanism, that can reduce the task parameter dimension significantly. The task parameter input provides contextual information about each task. They are referred to as a learned context because they are found during training. Learned contextual inputs facilitate the identification of a well-behaved task parameter space. By this, we mean a space that captures continuous latent properties of the tasks themselves, rather than just being a task encoding. A well-behaved task parameter space is desirable, because it enables us to train the shared network once, and then only
be concerned with the task parameter space for day-to-day operations. This is especially useful if a complete re-training takes significant manual effort, is computationally expensive, or new data points arrive frequently.
Variations of learned-context neural networks have, to our knowledge, only been applied in a meta-learning study by Zintgraf et al. (2019) and a virtual flow metering application by Sandnes et al. (2021). Zintgraf et al. (2019) proposed a model agnostic meta-learning method, which is concerned with context parameters in general. Learned-context neural networks are used as one of the concrete examples in an experimental study. Sandnes et al. (2021) use a variation of the architecture to study the benefit of multi-task learning for a soft sensing application. The focus is on the benefits of a multi-task architecture over the single-task learners traditionally used within the domain. The learned-context neural network itself has never been thoroughly analyzed.
### Contributions
We provide a deep dive into theoretical and practical aspects of the learned-context neural network architecture. We explore its task adaptation mechanism and relate it to existing methodology within statistics and machine learning, and we prove that a scalar task parameter is sufficient for the universal approximation of a set of tasks. The viability of such a low dimensional task parameter is studied empirically. The performance of the architecture is compared to similar architectures on ten datasets. Comparisons are made on predictive performance, sensitivity to dataset size, and the effect of the task parameter dimension.
### Outline
The content of the paper is organized as follows. Section 2 presents notation and gives an introduction to mixed models, multi-task learning, and methods related to the architecture of interest. Section 3 gives a theoretical description of learned-context neural networks. Section 4 dives deeper into the task adaptation mechanism of the learned-context network. Section 5 presents an empirical analysis of the architecture and compares it to related methods. Section 6 summarizes and discusses the findings, and Section 7 concludes the paper.
## 2 Background
Here we present a minimal background required to study learned-context neural networks. For a broader picture, see Demidenko (2004) for more statistical methods or Zhang and Yang (2021) for machine learning applications.
### Notation
We consider a set of \(m\) tasks \(\left\{(\mathcal{D}_{j},f_{j})\right\}_{j=1}^{m}\). Each task consists of a set of observations \(\mathcal{D}_{j}=\left\{(x_{ij},y_{ij})\right\}_{i=1}^{n_{j}}\) and a function \(f_{j}:\mathbf{R}^{d_{x}}\mapsto\mathbf{R}\). These are
related by \(y_{ij}=f_{j}(x_{ij})+\epsilon_{ij}\), where \(\epsilon_{ij}\sim\mathcal{N}(0,\sigma_{\epsilon}^{2})\). The tasks are homogeneous, with the elements of \(x\) and \(y\) representing the same quantities across all tasks. The indicator variable \(c_{j}\in\left\{0,1\right\}^{m}\) is a one-hot task encoding where the \(j\)th element is one and the rest zeros.
We are interested in the case of knowledge transfer through hard parameter sharing, and task adaptation is achieved using a small set of task parameters. We use \(\alpha\) to denote a set of shared parameters, and \(\beta_{j}\in\mathbf{R}^{d_{\beta}}\) to denote a vector of task specific parameters. The general form of the problem of interest is \(y_{ij}=f(x_{ij};\alpha,\beta_{j})+\epsilon_{ij}\).
### Mixed models and multi-task neural networks
The concept of multi-task learning has been studied extensively for linear models under a variety of names and contexts. The simplest form is a _varying-intercept_ linear model,
\[y_{ij}=a^{\top}x_{ij}+b+\beta_{j}+\epsilon_{ij}, \tag{1}\]
where each task is a linear function of the observed variables and tasks differ only in the individual bias terms \(\beta_{j}\)(Gelman et al., 2013). With such a setup it is common to assume task parameters are distributed according to \(\beta_{j}\sim\mathcal{N}(0,\sigma_{\beta}^{2})\). Task-specific slope parameters are also common. These models, known as multilevel-, hierarchical-, or mixed effect models, are extensively described in the statistical literature (Demidenko, 2004; Raudenbush and Bryk, 2002). An extension to the linear model is to factorize the parameters into a task component and a shared component, a notable example being the group-and-overlap model of Kumar and Daume III (2012), \(y_{ij}=a_{j}^{\top}x_{ij}+b_{j}+\epsilon_{ij}\), where the parameters are found as
\[\begin{bmatrix}a_{j}\\ b_{j}\end{bmatrix}=L\beta_{j}. \tag{2}\]
Task-specific parameters are linear combinations of latent basis tasks, given as the columns of \(L\). A tailored algorithm is used to jointly optimize the latent task matrix \(L\) and the individual task parameters. The optimization is controlled by the desired number of latent tasks and their sparsity. This structure allows the degree of sharing between tasks to be found during training. The mechanism introduces a robustness towards outlier tasks, because tasks that do not share any aspects with the others can be isolated to separate columns of \(L\). This allows the main tasks to be differentiated without interference from potential outliers.
Moving beyond linear model structures introduces several design choices. The simplest is fixed nonlinear analytical expressions where parameters can be partially shared, task-specific, or found through expressions such as Equation 2. A classic example is nonlinear growth curve models, commonly formulated as logistic curves (Demidenko, 2004).
Alternatively, the nonlinearities can be selected from a predefined set of candidates as proposed by Argyriou et al. (2008). This produces the expression
\(y_{ij}=\sum_{k=1}^{d_{\beta}}\beta_{j,k}\phi_{k}(x_{ij})+\epsilon_{ij}\), where the knowledge sharing mechanism lies in the shared choice of basis functions \(\phi_{k}\).
If the space of basis functions is not predefined, a more flexible solution is to allow them to be learned by a multi-task neural network (Caruana, 1997). This can be expressed as
\[y_{ij}=\beta_{j}^{\top}h(x_{ij};\alpha)+\epsilon_{ij}, \tag{3}\]
where \(h\) is a neural network with \(d_{\beta}\) outputs, parametrized by \(\alpha\). The task-parameters \(\beta_{j}\) represents a task specific output layer.
A different neural network strategy is the context-sensitive networks of Silver et al. (2008),
\[y_{ij}=h(z_{ij};\alpha)+\epsilon_{ij},\ z_{ij}=\begin{bmatrix}x_{ ij}\\ c_{j}\end{bmatrix}, \tag{4}\]
where all parameters of the neural networks are shared. Tasks are differentiated through the additional input \(c_{j}\), which is a one-hot task encoding. This leads to networks where the input dimension grows with the number of tasks. Context-adaptation, introduced by Zintgraf et al. (2019), replaces the one-hot encoding with a trainable vector of task parameters. This allows the dimension of the input space to remain fixed while the number of tasks grows. The learned-context networks presented here are based on this idea.
Other, more complex neural network architectures allow all network weights to have some level of task adaptation. Examples are tensor factorization (Yang and Hospedales, 2017), and masked networks (Mallya et al., 2018). Alternatively, one can introduce layers that facilitate sharing between otherwise disjoint neural networks (Misra et al., 2016). We are, however, focusing our attention on simple feed-forward architectures with few task parameters.
### Related learning paradigms
Multi-task learning resembles other learning paradigms, in particular _transfer-learning_ and _meta-learning_. The distinctions between these can be difficult, and their nuances have changed over time. We base our discussion on the recent survey of Hospedales et al. (2022).
Transfer-learning attempts to improve the performance of a task using information from related source tasks. This can, for instance, be achieved by copying an existing model trained on the source tasks and applying optional fine-tuning steps to the parameters. This tuning has no concern for the performance of the source tasks. In contrast, multi-task learning attempts to jointly optimize the performance of all tasks.
Meta-learning also attempts to jointly optimize all tasks, but the objective of the optimization is different. Multi-task learning is concerned with a fixed set of provided tasks and produces a single joint model. Meta-learning, on the other hand, attempts to improve learning for the whole distribution of tasks, which can include unseen future tasks. The output is a _learning procedure_ that
can be applied to all tasks in the task distribution. As such, meta-learning can draw parallels to _hyper-parameter optimization_. However, hyper-parameter optimization only considers performance on the current learning problem, while meta-learning considers the performance over a family of learning problems that may occur.
Special cases of these optimization problems may be closely linked in practice. Grant et al. (2018) connects a particular meta-learning algorithm to hierarchical Bayesian models and hyper-parameter optimization. Zintgraf et al. (2019) develops this meta-learning algorithm further with context-adaptation. The multi-task learned-context neural networks studied in this paper is similar to one of the context-adaptation mechanisms.
## 3 Model description
We explore a particular version of the context-adaptation architecture, denoted by \(y_{ij}=f(x_{ij};\beta_{j},\alpha)+\epsilon_{ij}\). The architecture consists of a residual feedforward neural network, \(y_{ij}=h(z_{ij};\alpha)+\epsilon_{ij}\), which is shared between all tasks. Tasks are differentiated by a trainable parameter vector \(\beta_{j}\), which is used as input to the network,
\[z_{ij}=\begin{bmatrix}x_{ij}\\ \beta_{j}\end{bmatrix}. \tag{5}\]
We refer to this architecture as a learned-context neural network.
### Residual feedforward networks
We consider feedforward neural network architectures (Goodfellow et al., 2016), with residual skip connections (He et al., 2016). In the following we omit the \(ij\) subscripts and noise terms to simplify the notation.
We denote a residual feedforward network by \(y=h(z;\alpha)\). It consists of \(K\) layers parametrized by \(\alpha=\left\{(W_{k},b_{k})|k=1,\ldots,K\right\}\). The network has three main elements. The first linear layer,
\[z^{(2)}=W_{1}z+b_{1}, \tag{6}\]
a sequence of residual blocks
\[\left.\begin{aligned} z^{(k+1)}&=W_{k}\,g\left(z^{(k)}\right)+b_{k}\\ z^{(k+2)}&=z^{(k)}+W_{k+1}\,g\left(z^{(k+1)}\right)+b_{k+1}\end{aligned}\right\}\ k=2,4,\ldots,K-2, \tag{7}\]
and the final linear layer
\[y=W_{K}z^{(K)}+b_{K}. \tag{8}\]
The residual block in Equation 7 is illustrated in Figure 1. We let the residual skip connection span two hidden layers and use a pre-activated structure (He
et al., 2016). All hidden layers have the same dimension. The ReLU function, \(g(z)=\max(0,z)\), where the max operation is performed element-wise, is used for activation.
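To make the architecture concrete, the following is a minimal PyTorch sketch of Equations 5-8; layer sizes, the number of residual blocks, and the use of an embedding table to store the task parameters are illustrative choices, not those used in the experiments below.

```python
import torch
import torch.nn as nn

class LearnedContextNet(nn.Module):
    """Residual feedforward network with trainable per-task context inputs."""

    def __init__(self, num_tasks, d_x, d_beta=2, hidden=64, num_blocks=2):
        super().__init__()
        self.beta = nn.Embedding(num_tasks, d_beta)        # task parameters beta_j
        self.first = nn.Linear(d_x + d_beta, hidden)       # Eq. (6) on the augmented input, Eq. (5)
        self.blocks = nn.ModuleList(
            [nn.ModuleList([nn.Linear(hidden, hidden), nn.Linear(hidden, hidden)])
             for _ in range(num_blocks)]
        )
        self.last = nn.Linear(hidden, 1)                   # Eq. (8)

    def forward(self, x, task_idx):
        z = self.first(torch.cat([x, self.beta(task_idx)], dim=-1))
        for lin1, lin2 in self.blocks:                     # pre-activated residual blocks, Eq. (7)
            h = lin1(torch.relu(z))
            z = z + lin2(torch.relu(h))
        return self.last(z)
```

During training, the shared weights and the task-parameter embedding table are optimized jointly; `task_idx` holds the task index of each observation in the batch.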
### Comparable architectures
Throughout the analysis of the learned-context neural network, we will make comparisons with two closely related architectures. The first is the context-sensitive networks as presented by Silver et al. (2008), given in Equation 4. The second is the classic MTL architecture of Caruana (1997), which we will refer to as last-layer MTL. This is described by Equation 3.
## 4 Task adaptation mechanism
This section explores the theoretical aspects of the task adaptation mechanism in the learned-context neural network. First, the relationship between learned-context neural networks, linear mixed models, and context-sensitive networks is discussed. Then, a simple example illustrates how the task parameters can influence the output of the neural network. Finally, the universal approximation power of learned-context networks is investigated and shown to be equal to that of using a separate neural network for each task.
### Task parameter inputs yield varying-bias layers
Recall the residual neural network described in Equations 6-8. The augmented input to the first layer, given in Equation 5, consists of observations \(x_{ij}\) and task parameters \(\beta_{j}\). Inspecting the first linear layer, \(z_{ij}^{(2)}=W_{1}z_{ij}+b_{1}\), we note
Figure 1: Residual block spanning two hidden layers, each layer consisting of an activation function and a linear transformation. The block output is the sum of the block input \(z^{(k)}\) and the residual component \(\Delta^{(k)}\).
that partitioning the weight matrix according to \(W_{1}=\begin{bmatrix}A&L\end{bmatrix}\) yields
\[z_{ij}^{(2)}=Ax_{ij}+L\beta_{j}+b_{1}. \tag{9}\]
This is recognized as the linear varying-intercept model from Equation 1, where \(b_{1}\) is the population average intercept and \(\tilde{\beta}_{j}=L\beta_{j}\) is the individual intercept. However, linear varying-intercept models are usually concerned with scalar, or low dimensional, response variables. The hidden layer \(z_{ij}^{(2)}\) may be of high dimension, and the term \(L\beta_{j}\) allows for a low-rank approximation of the task intercept. This is the same mechanism utilized in the group-and-overlap method given in Equation 2, and the method inherits the benefits this provides in terms of outlier robustness.
On the other hand, if the number of task parameters equals the number of tasks, the task mechanism is similar to the one-hot encoding of context-sensitive networks. To see this, repeat the exercise using the augmented input from Equation 4. This leads to \(z_{ij}^{(2)}=Ax_{ij}+Bc_{j}+b_{1}\), where the contextual input selects the \(j\)th column of \(B\) as the task intercept. With the task parameter dimension equal to the number of tasks, each of the first layer biases can be set individually for each task. This essentially makes each task responsible for a number of parameters equal to the hidden layer size, which can be a significant number even for moderately sized networks.
### Task adaptation examples
Considering Equation 9 alone, the task adaptation mechanism may seem limited. We study a simple example to gain insight into how changing the biases affects the network response. The example is adapted from Telgarsky (2016). A network with three layers is constructed to linearly interpolate the points \((x,y)=(0,0),(0.5,1),(1,0),(1.5,1),(2,0)\), given task parameters equal to zero. We refer to this as the base shape. It is illustrated as the bold dark line in Figure 2. A detailed description of the network weights is given in Appendix A.
We now explore the effect of different values for the task parameter component \(L\) from Equation 9. We start with the simple case \(L=A\), which yields a translation of the base shape. This can be seen by manipulating Equation 6, \(z_{ij}^{(2)}=Ax_{ij}+A\beta_{j}+b_{1}=A\left(x_{ij}+\beta_{j}\right)+b_{1}\), which is the expression for translation \(y_{ij}=f(x_{ij}+\beta_{j})\). This is illustrated in the top part of Figure 2. A different transform is obtained by setting \(L=b_{1}\). For this particular network, it yields dilation by a factor \(1+\beta\). This changes the base shape of the output to a dilated shape according to \((x,y)\mapsto((1+\beta)x,(1+\beta)y)\). This is illustrated in the bottom part of Figure 2. The derivation of this result is given in Appendix A.1.
These simple examples illustrate how the contextual inputs can produce different transformations by changing the first layer weights. While looking limiting at first glance, the ability to influence the first layer bias creates changes that propagate through the entire network, enabling powerful nonlinear transformations.
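The translation case can also be checked numerically for an arbitrary network of this form; a minimal sketch with random weights (not the interpolating network of Appendix A) verifies that setting \(L=A\) makes the context input act exactly like a shift of the observed input:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(0.0, z)

# Random three-layer ReLU network with scalar input and output.
A, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)
w3, b3 = rng.normal(size=8), rng.normal()

def net(x, beta, L):
    z2 = A @ np.atleast_1d(x) + L @ np.atleast_1d(beta) + b1   # first layer, Eq. (9)
    z3 = W2 @ relu(z2) + b2
    return w3 @ relu(z3) + b3

L = A                                                          # translation case
x, beta = 0.3, 0.5
assert np.isclose(net(x, beta, L), net(x + beta, 0.0, L))      # f(x; beta) == f(x + beta; 0)
```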
Figure 2: Evaluation of the simple learned-context network described in Appendix A. The two examples only differ in their specification of the matrix \(L\) from Equation 9. Top is a translation case, with \(L=A\). Bottom is a dilation case, with \(L=b_{1}\). Both have been evaluated with task parameter \(\beta\) fixed at three different values, with the bold black line representing the base network.
### Universal task approximation
The previous section illustrates how a combination of contextual inputs and deep neural networks can produce complex task adaptations. Here we make this notion more precise by proving a multi-task equivalent to the universal approximation property. The main result is that any task adaptation is achievable by learned-context neural networks regardless of how many task parameters are used, provided that the shared neural network is sufficiently wide and deep.
Universal approximation refers to the ability of neural network architectures to approximate a function with arbitrary precision, given a set of qualifiers on the function and the network (Cybenko, 1989; Hornik et al., 1989). Several variations of universal approximation properties have been shown for different neural network architectures, including ReLU-based feedforward networks (Lu et al., 2017; Kidger and Lyons, 2020). Here we explore how universal approximation properties carry over to the learned-context networks, and compare it to the properties of context-sensitive networks and last-layer MTL networks. We only consider traditional feedforward networks without the residual skip connections, as these do not change the results presented here.
We adapt the definitions from (Kidger and Lyons, 2020) and rely upon their main result for universal approximation. Our interest is to study the number of task parameters required, so the dimensions of the neural networks themselves are omitted.
**Definition 1**.: _Let \(\mathcal{F}_{k,l}\) be the class of functions described by feedforward neural networks with input dimension \(k\), output dimension \(l\), an arbitrary number of hidden layers of arbitrary size, where the ReLU activation function is applied to all hidden units._
**Definition 2**.: _Let \(C(K)\) be the space of continuous functions \(f:K\rightarrow\mathbf{R}\), with domain \(K\subseteq\mathbf{R}^{d_{x}}\), where \(K\) is compact._
Consider \(m\) tasks given by \(y=f_{j}(x),\ j=1,\ldots,m\), where \(f_{j}\in C(K)\). Individually, the functions \(f_{j}\) can be approximated arbitrarily closely by a sufficiently sized neural network \(\hat{f}_{j}\in\mathcal{F}_{d_{x},1}\) (Kidger and Lyons, 2020). Extending this notion to MTL architectures means considering their ability to approximate the composed function
\[y=f(x,j)=\sum_{k=1}^{m}I(j=k)f_{k}(x). \tag{10}\]
Here, \(I(j=k)\) is the indicator function, taking the value one if \(j=k\) and zero otherwise.
In Equation 10, the inputs of \(f(x,j)\) closely resembles those of context-sensitive networks, making this a natural starting point.
**Proposition 1**.: _There exists a context-sensitive neural network from the class \(\mathcal{F}_{d_{x}+m,1}\) that is arbitrarily close to the multi-task function \(f\) of Equation 10 with respect to the uniform norm._
Proof.: An equivalent computation to Equation 10 is \(f(x,c_{j})=\sum_{k=1}^{m}c_{j,k}f_{k}(x)\), where \(c_{j,k}\) is the \(k\)th element of \(c_{j}\). Relaxing the indicator variable domain to \(c_{j}\in[0,1]^{m}\) allows the context-sensitive input vector, \(z\), to exist in a compact subspace \(K^{+}\subseteq\mathbf{R}^{d_{x}+m}\). The relaxation gives \(f\in C(K^{+})\), a space in which the class \(\mathcal{F}_{d_{x}+m,1}\) is dense with respect to the uniform norm. It follows that context-sensitive networks can approximate the set of task functions arbitrarily well.
This result means that the context-sensitive architecture is sufficiently flexible to achieve the same approximation power as using an individual neural network for each task. However, it does not say anything about the number of parameters required to achieve this.
Learned-context neural networks can achieve the same universal approximation power as context-sensitive networks, using only a scalar task parameter.
**Theorem 2**.: _There exists a learned-context neural network from the class \(\mathcal{F}_{d_{x}+1,1}\) and a set of task parameters \(\beta_{j}\in\mathbf{R},\ j=1,\ldots,m\), that is arbitrarily close to the multi-task function \(f\) of Equation 10 with respect to the uniform norm._
Proof.: The proof is to construct the mapping \((x,\beta_{j})\mapsto(x,c_{j})\) as the first two hidden layers of a neural network. The remainder of the network could then be taken as a context-sensitive network.
First, we construct a triangle wave for the task parameters, similar to the one in Figure 2. Let the task parameters be assigned the values \(\beta_{j}=j\), which is a trivial task encoding. The first layer is assigned the weights
\[W_{1}=1_{2m},\ b_{1}^{\top}=-\begin{bmatrix}1-\delta&1&2-\delta&2&\ldots&m- \delta&m\end{bmatrix},\]
where \(1_{2m}\) is a vector of \(2m\) ones and \(\delta\in(0,0.5)\) is a number determining the slope and spacing of the triangles. After ReLU activation, this gives a shifted sequence of the task parameter. The second layer is assigned the weights
\[W_{2}=\frac{1}{\delta}I_{m}\otimes\begin{bmatrix}1&-2\end{bmatrix},\ b_{2}=0_{m}\]
where \(\otimes\) denotes the Kronecker product and \(0_{m}\) is a vector of \(m\) zeros. With this choice, the \(j\)th entry of \(g(z^{(3)})\) is given by
\[g(z^{(3)}_{j})=\begin{cases}(\beta-j+\delta)/\delta&\text{if }\beta\in[j- \delta,j),\\ (-\beta+j+\delta)/\delta&\text{if }\beta\in[j,j+\delta),\\ 0&\text{otherwise }.\end{cases} \tag{11}\]
Only one of the entries of \(g(z^{(3)})\) will be non-zero at a time, and when evaluated at \(\beta=j\) the \(j\)th entry will be one and the rest zero. The sharpness of the transition between zero and one is adjusted by \(\delta\). The weights (\(W_{1}\), \(b_{1}\), \(W_{2}\), and \(b_{2}\)) can now be augmented to store a copy of the other input values \(x\), which makes the hidden layer \(z^{(3)}\) the same as the input layer of a context-sensitive network. Therefore, a learned-context network with a scalar task parameter shares the approximation properties of the context-sensitive networks from Proposition 1.
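The construction above is easy to verify numerically. The following sketch (our own illustration, assuming \(m=5\) tasks and \(\delta=0.25\); both values are arbitrary) builds the two constructed layers and shows that evaluating them at \(\beta=j\) recovers the \(j\)th standard basis vector:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

m, delta = 5, 0.25  # number of tasks and triangle width; illustrative values only

# First layer of the construction: W1 is a column of 2m ones acting on the scalar beta,
# b1 interleaves -(j - delta) and -j for j = 1, ..., m.
W1 = np.ones(2 * m)
b1 = -np.array([v for j in range(1, m + 1) for v in (j - delta, j)])

# Second layer: (1/delta) * (I_m Kronecker [1, -2]), zero bias.
W2 = np.kron(np.eye(m), np.array([1.0, -2.0])) / delta

def context_from_beta(beta):
    """Map a scalar task parameter to the triangle-wave context vector of Equation 11."""
    h1 = relu(W1 * beta + b1)
    return relu(W2 @ h1)

for j in range(1, m + 1):
    print(j, context_from_beta(float(j)))  # prints exactly the j-th standard basis vector
```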
This result only considers the number of task parameters and does not put any bounds on the number of shared parameters. While a scalar task parameter is theoretically sufficient, it does not indicate that this is the ideal choice. Indeed, the proof is based on the scalar task parameter taking an indicator role, which is counterproductive to our desire to learn the latent properties of the tasks. This is useful to bear in mind when selecting hyperparameters, as too few task parameters may force an indicator behavior.
The last-layer MTL architecture is less flexible in its task adaptation mechanism. As such, it may require more task parameters than the learned-context networks.
**Proposition 3**.: _Last-layer MTL networks with base network from the class \(\mathcal{F}_{d_{x},k}\) and \(\beta_{j}\in\mathbf{R}^{k}\) require \(k\geq m\) to guarantee the existence of a function arbitrarily close to the multi-task function \(f\) in Equation 10._
Proof.: The last-layer network is a linear combination of \(n_{\beta}\) basis functions, \(y=\beta_{j}^{\top}h(x)\). Let the tasks be given as \(m\) sine waves \(y=\sin(\omega_{j}x)\), with frequency \(\omega_{j}=j\) and \(x\in[-\pi,\pi]\). These cannot all be constructed from fewer than \(m\) basis functions, because the sine waves are mutually orthogonal on \([-\pi,\pi]\). Hence, in this case the shared neural network cannot have an output dimension less than \(m\).
Task adaptation in the last-layer MTL architecture works through a linear combination of features produced by the fully shared neural network. This means that it is limited to scaling the given nonlinearities. In contrast, in the learned-context architecture the task parameters can influence the nonlinearities themselves, which leads to a much broader range of achievable adaptations.
## 5 Empirical analysis
This section investigates the properties of learned-context neural networks empirically. First, the learned-context neural network is compared to similar architectures. The architectures are evaluated on predictive performance, training robustness, sensitivity to the number of data points, and the effect of the number of task parameters. The task parameter space produced by learned-context neural networks is then explored by estimating task parameters for new tasks after the shared parameters are trained and fixed.
### Benchmark models
Learned-context neural networks, described in Section 3, are compared to the last-layer multi-task networks described by Equation 3 and the context-sensitive networks described by Equation 4. All three network models use the residual structure described in Equations 6-8.
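To make the comparison concrete, the three task-adaptation mechanisms can be sketched as below. This is a minimal illustration with placeholder layer sizes and names of our own choosing, and it ignores the residual structure of Equations 6-8; it is not the exact implementation used in the experiments.

```python
import torch
import torch.nn as nn

def shared_net(in_dim, hidden=64, out_dim=1):
    """Plain feedforward stand-in for the shared network (residual blocks omitted)."""
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

d_x, m, d_beta, k = 3, 10, 2, 8  # illustrative dimensions only

# Learned-context: concatenate a learned task parameter beta_j to the input.
beta = nn.Parameter(torch.zeros(m, d_beta))   # task parameters, zero-initialized
f_lc = shared_net(d_x + d_beta)
y_lc = lambda x, j: f_lc(torch.cat([x, beta[j]]))

# Context-sensitive: concatenate a fixed one-hot task encoding instead.
f_cs = shared_net(d_x + m)
y_cs = lambda x, j: f_cs(torch.cat([x, torch.eye(m)[j]]))

# Last-layer MTL: fully shared feature extractor h(x) with a task-specific linear head.
h = shared_net(d_x, out_dim=k)
heads = nn.Parameter(torch.zeros(m, k))
y_ll = lambda x, j: heads[j] @ h(x)
```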
A linear mixed-effect model acts as a baseline for performance. The linear models have a set of shared slope and intercept parameters. Additionally, each
task has its own intercept. This structure is given in Equation 1. Discrete features are one-hot encoded for this model.
### Datasets
The models are compared on ten datasets. Two of the datasets are synthetically created to highlight differences between the architectures. Three datasets, Schools, Parkinson, and Sarcos, are common to the MTL literature (Zhang and Yang, 2021), while the last five are selected from a range of open access data sources. All datasets correspond to regression problems with a scalar response variable. The datasets are described below and summarized in Table 1.
**Frequency** is a synthetic dataset where tasks are sine waves of different frequency. Data is generated according to \(y_{ij}=0.5\sin(2\pi\omega_{j}x_{ij})+0.5+\epsilon_{ij}\). The task frequencies are sampled as \(\omega_{j}\sim\text{Uniform}(0.5,4.0)\). We take \(x_{ij}\sim\text{Uniform}(0,1)\), and \(\epsilon_{ij}\sim\mathcal{N}(0,\sigma^{2})\), \(\sigma=0.1\).
**Sine and line** is a synthetic dataset proposed by Finn et al. (2018). Tasks are taken from two classes, affine functions, \(y_{ij}=a_{j}x_{ij}+b_{j}+\epsilon_{ij}\) or sine waves, \(y_{ij}=c_{j}\sin(x_{ij}+d_{j})+\epsilon_{ij}\). We generate an equal number of tasks from each class, with parameters sampled as \(a_{j},b_{j}\sim\text{Uniform}(-3,3)\), \(c_{j}\sim\text{Uniform}(0.1,5.0)\), and \(d_{j}\sim\text{Uniform}(0,\pi)\). For both task types we use \(\epsilon_{ij}\sim\mathcal{N}(0,\sigma^{2})\), \(\sigma=0.3\), and \(x_{ij}\sim\text{Uniform}(-5,5)\). We note that this problem can be represented as linear regression, \(y=\beta^{\top}z\), on a set of nonlinear basis functions \(z^{\top}=\begin{bmatrix}1&x&\sin(x)&\cos(x)\end{bmatrix}\) by applying the trigonometric identity \(\sin(x+d)=\sin(d)\cos(x)+\cos(d)\sin(x)\).
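For reference, the two synthetic generators can be written compactly as below (a sketch; the per-task sample sizes follow from Table 1, roughly 120 points per Frequency task and 60 per Sine-and-line task):

```python
import numpy as np

rng = np.random.default_rng(0)

def frequency_task(n=120):
    """One Frequency task: a noisy sine wave with task-specific frequency omega_j."""
    omega = rng.uniform(0.5, 4.0)
    x = rng.uniform(0.0, 1.0, n)
    y = 0.5 * np.sin(2 * np.pi * omega * x) + 0.5 + rng.normal(0.0, 0.1, n)
    return x, y, omega

def sine_or_line_task(n=60, is_sine=False):
    """One Sine-and-line task: either an affine function or a shifted, scaled sine wave."""
    x = rng.uniform(-5.0, 5.0, n)
    if is_sine:
        c, d = rng.uniform(0.1, 5.0), rng.uniform(0.0, np.pi)
        f = c * np.sin(x + d)
    else:
        a, b = rng.uniform(-3.0, 3.0, size=2)
        f = a * x + b
    return x, f + rng.normal(0.0, 0.3, n)
```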
**Schools** is a dataset of examination results for students from different schools over a three year period (Nuttall et al., 1989). The goal is to map features of the schools and students to student performance, in order to study school effectiveness. We treat each school as a task. The dataset is provided by Center of Multilevel Modelling (1987).
**Parkinson** telemonitoring relates patient observations to a disease symptom score (Tsanas et al., 2010). Each patient is considered a task. It is provided by UCI Machine Learning Repository (2009).
**Bike sharing** relates weather and calendar features to the number of trips of a bike sharing system over two years (Fanaee-T and Gama, 2014). Each month is considered a task, which allows tasks to capture seasonal effects and potential changes to the bike sharing system itself. Data is provided by UCI Machine Learning Repository (2013).
**Inflation** is a dataset from OECD (2022). It considers the development of Consumer Price Index (CPI) in 45 countries. CPI is taken quarterly from 1970 to 2020, normalized with 2015 being equal to 100% for each country. Each country is a task. Time is the only input variable.
**Obesity** is a dataset that describes the development of mean body-mass index from 1985 to 2019 in 200 countries (NCD Risk Factor Collaboration, 2020a). Input variables are age, sex, and year. The response variable is the percentage considered to be obese. Each country is a task. Data is provided by NCD Risk Factor Collaboration (2020c).
**Height** is a dataset with the same structure as Obesity, but the response variable is the mean height within the groups. Data is provided by NCD Risk Factor Collaboration (2020b).
**Death rate** describes the rate of death in different age groups. Input variables are age and sex. Each pair of year and country is considered a task. There are 183 countries and five years. This results in a greater number of tasks than Obesity and Height, but fewer input variables. Data is provided by World Health Organization (2020).
**Sarcos** is data from a robotic arm with seven joints. The goal is to map joint position, velocity, and acceleration to motor torque (Vijayakumar and Schaal, 2000). Each joint is one task, taking all 21 joint measurements as input to predict torque at the joint. As a consequence, each task has exactly the same input data, differing only in their response. It is hosted by Gaussian Processes for Machine Learning (2000).
### Training Procedure
Model parameters are found by minimizing a loss function of mean squared prediction error and parameter regularization (Hastie et al., 2009),
\[\min_{\alpha,\beta_{1},\ldots,\beta_{m}}\ \frac{1}{n}\sum_{j=1}^{m}\sum_{i=1}^{ n_{j}}\left(y_{ij}-f(x_{ij};\beta_{j},\alpha)\right)^{2}+\lambda_{\alpha}l( \alpha)+\lambda_{\beta}\sum_{j=1}^{m}l(\beta_{j}). \tag{12}\]
The prediction error is divided by the total number of data points \(n\). Parameters are regularized by the L2 norm, scaled by factors \(\lambda_{\alpha}\) and \(\lambda_{\beta}\) for shared and task parameters respectively. The task parameter term is ignored for context-sensitive networks. All data points from all tasks are weighted equally. More complex loss functions with individual weights for each task could potentially improve performance in some cases (Kendall et al., 2018; Gong et al., 2019), but this is not considered here.
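As an illustration, the objective can be evaluated on a mini-batch roughly as follows (a sketch with our own naming and batch layout; the learned-context forward pass is written as concatenation of the inputs with the gathered task parameters, and the error term is divided by the total number of data points \(n\) as in Equation 12):

```python
import torch

def multitask_loss(model, task_params, x, y, task_idx, lambda_alpha, lambda_beta, n):
    """Equation 12 on a batch: squared prediction error plus L2 penalties on the
    shared parameters (inside `model`) and the task parameters (rows of `task_params`)."""
    beta = task_params[task_idx]                               # beta_j for each sample
    y_hat = model(torch.cat([x, beta], dim=-1)).squeeze(-1)
    err = ((y - y_hat) ** 2).sum() / n
    reg_alpha = sum((p ** 2).sum() for p in model.parameters())
    reg_beta = (task_params ** 2).sum()
    return err + lambda_alpha * reg_alpha + lambda_beta * reg_beta
```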
| Name | Num. feat. | Num. tasks | Data train | Data test | Std. \(y\) |
|---|---|---|---|---|---|
| Frequency | 1 | 250 | 30000 | 25000 | 0.36 |
| Sine and line | 1 | 100 | 6000 | 10000 | 3.93 |
| Schools | 7 | 139 | 12339 | 3023 | 12.72 |
| Parkinson | 16 | 42 | 4717 | 1158 | 10.70 |
| Bike sharing | 6 | 24 | 13915 | 3464 | 181.38 |
| Inflation | 1 | 45 | 6684 | 1344 | 34.61 |
| Obesity | 3 | 200 | 126000 | 126000 | 5.22 |
| Height | 3 | 200 | 105000 | 105000 | 19.64 |
| Death rate | 2 | 915 | 16470 | 16470 | 0.12 |
| Sarcos | 21 | 7 | 311388 | 31143 | 18.84 |

Table 1: Summary of dataset properties. Listed is the number of features in the dataset, the number of tasks, the number of data points used for model training and testing, and the standard deviation of the response variable \(y\).
Optimization of Equation 12 is done by stochastic gradient descent (Bottou et al., 2018), with a two-stage learning rate schedule. Hyperparameters are optimized by the global optimizer LIPO (Malherbe and Vayatis, 2017), which is run for a predetermined number of iterations. Further details are given in Appendix B for the training procedure and Appendix C for hyperparameters.
### Base performance
The three neural network architectures and the linear mixed model are first compared in terms of test set errors. The results are summarized in Table 2. The neural network models have similar performance on all datasets, with learned-context and last-layer MTL being slightly favoured over context-sensitive networks in general. In some cases, such as Obesity and Height, we see that the context-sensitive architecture has close to twice the error of the other architectures. While this difference is significant, we note that when the errors are compared to the standard deviation of the response variables, given in Table 1, these errors are all quite small. For the synthetic datasets, the learned-context network performs better on the Frequency data, and the last-layer MTL network performs better on the Sine and line data. This is expected, because these architectures match the data generating processes. The linear mixed model performed well on two of the datasets, Schools and Parkinson. In these cases, the neural network models are able to achieve similar performance, which is reassuring.
To compare the robustness of the training procedure we re-run the training of all neural network models. We reuse the hyperparameters and run the training nine additional times. The best and worst RMSE values for the ten runs combined are given in Table 3. It also lists the number of times training runs diverged and had to be restarted. Overall the results are quite consistent. However, there are cases where the training appears less stable. An example
| Dataset | LC | CS | LL | LME |
|---|---|---|---|---|
| Frequency | **0.106** | 0.136 | 0.122 | 0.360 |
| Sine and line | 0.325 | 0.34 | **0.316** | 3.843 |
| Schools | 10.203 | 10.363 | 10.314 | **10.108** |
| Parkinson | 2.914 | 2.856 | **2.670** | 2.776 |
| Bike sharing | 53.869 | 80.208 | **45.043** | 142.087 |
| Inflation | **1.833** | 2.526 | 2.501 | 11.095 |
| Obesity | **0.123** | 0.281 | 0.210 | 2.512 |
| Height | 0.394 | 0.61 | **0.347** | 5.055 |
| Death rate | **0.011** | 0.017 | 0.026 | 0.078 |
| Sarcos | **2.176** | 2.188 | 2.482 | 10.653 |

Table 2: Root mean squared error on test data for all models. The best score on each dataset is highlighted. The column headers are learned-context neural networks (LC), context-sensitive networks (CS), last-layer MTL networks (LL), and linear mixed effect models (LME).
being last-layer MTL on the Height dataset, which yields a large span in relative errors. Again we note that these errors are small in absolute value, which makes such comparisons sensitive to the randomness in the training procedure.
### Effect of dataset size
We now compare the sensitivity to dataset size, by training the neural network architectures on reduced datasets. The training datasets are cut to 50% and 10% of their original size. The same fraction of data is removed from each task, keeping the data balance from the original dataset. Test sets are kept at full size. Training and hyperparameter searches are conducted in the same way as for the full data case. The results are summarized in Table 4. Context-sensitive networks are on average slightly behind the others on all data sizes. A reduction in data size naturally leads to a reduction in performance for all models, but the learned-context architecture is less sensitive to this than the others.
### Effect of task parameter dimension
Section 4.3 established theoretical differences in the number of task parameters required by learned-context networks and last-layer MTL networks. In this section we explore the practical impact of the task parameter dimension. To this end, learned-context networks and last-layer networks are trained on all datasets with different numbers of task parameters. All other hyperparameters are fixed. The models are trained on the full dataset. Figure 3 illustrates the results. Additional task parameters generally improve performance of both models on all datasets, up to a certain point. There does not seem to be a significant downside to excessively large task parameter dimensions. This means the hyperparameter searches will likely arrive at a larger than necessary value,
| Dataset | LC | CS | LL |
|---|---|---|---|
| Frequency | 1.00, 1.01 | 1.10, 1.29 (1) | 1.16, 1.21 |
| Sine and line | 1.00, 1.00 | 1.04, 1.06 | 0.97, 0.97 |
| Schools | 1.00, 1.00 | 1.01, 1.02 | 1.01, 1.02 |
| Parkinson | 0.95, 1.00 | 0.92, 0.98 | 0.90, 0.94 |
| Bike sharing | 0.94, 1.01 | 1.29, 1.51 | 0.82, 0.85 |
| Inflation | 0.98, 1.09 | 1.33, 1.58 | 1.35, 1.38 |
| Obesity | 0.94, 1.08 (1) | 2.10, 2.45 | 1.30, 1.86 |
| Height | 0.99, 1.14 | 1.53, 1.65 | 0.82, 1.37 (1) |
| Death rate | 0.99, 1.04 | 1.52, 1.62 | 2.24, 2.30 |
| Sarcos | 1.00, 1.04 | 0.99, 1.05 (1) | 1.11, 1.49 |

Table 3: Results from ten repeated training runs with the same hyperparameter settings. Reported are the min and max relative RMSE value in each case. The values are normalized by the performance of the learned-context neural network in Table 2. The number of diverging training runs, if any, are given in brackets.
unless additional model selection penalty terms are introduced. Overall, the learned-context neural networks achieve better performance with fewer task parameters, but the gap is closed as the number increases.
As noted in Section 5.2, the sine and line dataset is easily represented as regression on four nonlinear basis functions. This is reflected by both models performing identically for four or more task parameters. The frequency dataset does not map to linear regression in the same way. As a consequence, the last-layer MTL network requires a significantly higher number of task parameters to achieve the same performance as the learned-context neural network.
A benefit of a low-dimensional task parameter space is that it simplifies visualization and interpretation of the task parameters. Examples of this are given in Appendix D, where the models for Inflation, Bike sharing, and Death rate datasets are studied in detail. The task parameters are given sensible interpretations related to properties of the model domain.
### Hold-out task performance
A key aspect differentiating the learned-context architecture from a fixed encoding, such as that of the context-sensitive neural networks, is that similar tasks can be assigned similar task parameter values. A desirable feature would be that the parameters capture latent properties of the tasks. For this to happen, the task parameter space must in some sense be well-behaved. A qualitative study, given in Appendix D, supports the hypothesis that learned-context neural networks facilitate such behavior, and that the task parameters capture properties fundamental to the domain, as opposed to separating tasks by memorization.
This section attempts to quantify this behavior by studying the viability of estimating parameters for a new task after the shared neural network parameters are trained and fixed. This is similar to meta-learning. However, as discussed in Section 2.3, meta-learning takes this generalization into account in the original
| Dataset | LC (100%) | CS (100%) | LL (100%) | LC (50%) | CS (50%) | LL (50%) | LC (10%) | CS (10%) | LL (10%) |
|---|---|---|---|---|---|---|---|---|---|
| Frequency | **0.29** | 0.37 | 0.33 | **0.31** | 0.36 | 0.37 | **0.49** | 0.87 | 0.53 |
| Sine and line | 0.08 | 0.09 | **0.08** | 0.09 | 0.11 | **0.09** | **0.25** | 0.40 | 0.50 |
| Schools | **0.80** | 0.81 | 0.81 | **0.84** | 0.84 | 0.93 | **0.88** | 0.98 | 1.09 |
| Parkinson | 0.27 | 0.27 | **0.25** | 0.28 | 0.27 | **0.26** | 0.37 | **0.35** | 0.42 |
| Bike sharing | 0.30 | 0.44 | **0.25** | 0.37 | 0.56 | **0.28** | **0.54** | 0.74 | 0.70 |
| Inflation | **0.05** | 0.07 | 0.07 | **0.06** | 0.08 | 0.10 | **0.10** | 0.13 | 0.13 |
| Obesity | **0.02** | 0.05 | 0.04 | 0.04 | 0.05 | **0.02** | 0.04 | 0.10 | **0.02** |
| Height | 0.02 | 0.03 | **0.02** | 0.01 | 0.03 | **0.01** | 0.03 | 0.04 | **0.01** |
| Death rate | **0.10** | 0.15 | 0.22 | **0.12** | 0.17 | 0.19 | **0.20** | 0.30 | 0.58 |
| Sarcos | **0.12** | 0.12 | 0.13 | **0.12** | 0.12 | 0.16 | 0.13 | 0.12 | **0.10** |

Table 4: Relative test errors for models trained on 100%, 50%, and 10% of the training data. Errors are normalized by the response standard deviation from Table 1.
Figure 3: Average test data error as a function of the number of task parameters. Learned-context neural networks are given in blue circles and last-layer MTL networks in green diamonds. Task parameter dimensions are set to 1, 2, 4, 8, and 16. Errors are normalized using the learned-context error from Table 2. A dotted line at one marks the base performance.
optimization objective. We only study these properties as a consideration _after_ the model has been trained using conventional multi-task learning.
For these experiments, tasks are divided into two groups. The base group consists of the tasks that participate in training of the shared parameters, by minimizing Equation 12. The hold-out group consists of tasks that arrive after the shared parameters are fixed. Hold-out tasks are considered one at a time. The loss function for hold-out task \(j\) is
\[\min_{\beta_{j}}\ \frac{1}{s^{2}}\sum_{i=1}^{n_{j}}\left(y_{ij}-f(x_{ij};\beta_ {j},\alpha)\right)^{2}+\beta_{j}^{\top}D^{-1}\beta_{j}. \tag{13}\]
This is equivalent to the maximum a posteriori estimate of the task parameters. The task parameter prior is \(\beta_{j}\sim\mathcal{N}(0,D)\), where \(D\) is found from the distribution of task parameters in the base group. The log-likelihood term is scaled by \(s^{2}\), which is the test error variance for the base group.
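A minimal sketch of this estimation step is given below (naming is ours; note that the paper minimizes this loss with the gradient-free LIPO optimizer rather than by backpropagation):

```python
import torch

def holdout_loss(beta_j, x, y, model, D_inv, s2):
    """Equation 13: scaled squared error for one hold-out task plus the Gaussian prior
    term, with the shared network `model` frozen; beta_j is the only free parameter.
    D_inv is the inverse of D and s2 the base-group test error variance."""
    z = torch.cat([x, beta_j.unsqueeze(0).expand(len(x), -1)], dim=-1)
    resid = y - model(z).squeeze(-1)
    return (resid ** 2).sum() / s2 + beta_j @ D_inv @ beta_j
```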
A critical part of parameter estimation is the shape of the likelihood function. Using the Frequency dataset, we compare the hold-out task likelihood with the true data-generating process, which in this case is known. The result is given in Figure 4. Both parameter spaces display the same behavior for the different sets of data points. The modes of the likelihood develop predictably with additional data. This is evidence of a well-behaved task parameter space, and that the task parameters can capture properties that are fundamental to the problem, rather than just being a complex task encoding. For the synthetic Frequency case, we were free to generate a sufficiently large number of tasks and data points for this to be possible. The extent to which such a relationship between task parameters and underlying task properties can be identified is naturally dependent on the particular problem being studied.
To further investigate the hold-out task viability, an experiment is conducted on all datasets. For each dataset, tasks are randomly divided into three approximately equal groups. One group is selected as the hold-out group, while the other two constitute the base group. First, a learned-context neural network is trained using the base group tasks. Task parameters for the hold-out group are then estimated one task at a time, with the shared parameters frozen. The process is repeated using all groups as the hold-out group, and results are averaged over all iterations. The loss in Equation 13 is minimized using the LIPO optimizer (Malherbe and Vayatis, 2017).
The results are summarized in Figure 5 and Table 5. The outcome of this experiment will naturally vary with the datasets. In cases with few tasks or where tasks show a greater degree of uniqueness, it will be hard to identify sensible parameters for the hold-out tasks. This can be seen clearly for the Sarcos dataset, which only has seven tasks, and the tasks are quite different. For other cases, such as the Frequency dataset, there are many similar tasks, and the hold-out tasks are much more likely to resemble the base tasks. Inspecting Table 5, the hold-out task performance is never able to match the baseline from Table 2, but it is able to stay below a 20% error increase for half of the datasets. In real scenarios where new tasks appear, this leads to a trade-off between retraining
Figure 4: Task parameter likelihood functions for a hypothetical new task constructed for the Frequency dataset. The task has \(\omega_{j}=1.5\), and up to four data points. The left column is the likelihood of the true data-generating function, which is a sine wave parametrized by its frequency. The middle column is the likelihood of the learned-context neural network, with one task parameter. The right column is the underlying function in black, the data points in orange, and the learned-context network evaluated with task parameters sampled from the likelihood in blue. The model is taken from the experiment in Section 5.6. The rows show the likelihoods for a different number of data points.
the full model for a potential performance gain, or just estimating the new task parameters to save time and resources. As seen in Figure 5, it is beneficial to have more data, but the amount required for satisfactory parameter estimation can vary greatly.
We note that when using fewer data points for the hold-out tasks, a lower task parameter dimension can be advantageous. This is observed in the Obesity and Height datasets, where all three task parameter dimensions yield similar performance using the full dataset, but the smaller task parameter spaces become increasingly favoured with fewer data points. Compare this to the results observed in Figure 3, where it was found that an increasing task parameter dimension was favourable when all tasks were trained simultaneously. The optimal choice of task-parameter dimension is then up to the specific use case.
## 6 Summary and discussion
The theoretical and empirical results show that the learned-context neural network is a favorable architecture for MTL problems with similar tasks, and where tasks may have few data points. Its ability to learn a small and well-behaved task parameter space can be particularly advantageous for such problems.
Theoretically, scalar task parameters were shown to be sufficient for a learned-context neural network to achieve universal approximation of all tasks (Section 4.3). The contextual inputs facilitate complex task adaptations even for simple constructed networks (Section 4.2). The ability to adapt to different tasks naturally increases with the size of the neural network. A potential downside to this flexibility is the possibility of overfitting to the individual tasks, which counters the desirable properties of multi-task learning. This puts emphasis on careful hyperparameter selection.
Experimentally it is seen that the ideal number of task parameters varies between problems, but the architecture can generally achieve a large degree of
| Dataset | \(d_{\beta}=2\) | \(d_{\beta}=4\) | \(d_{\beta}=8\) |
|---|---|---|---|
| Frequency | **1.07** | 1.56 | 1.4 |
| Sine and line | 1.28 | **1.26** | 1.72 |
| Schools | 1.06 | **1.04** | 1.05 |
| Parkinson | 1.11 | 1.05 | **1.01** |
| Bike sharing | 1.27 | **1.18** | 1.23 |
| Inflation | 2.08 | **1.74** | 2.9 |
| Obesity | 4.17 | **2.77** | 9.74 |
| Height | 2.18 | **1.67** | 2.4 |
| Death rate | **1.02** | 1.25 | 1.88 |
| Sarcos | **3.81** | 4.89 | 9.71 |

Table 5: Mean test error for hold-out tasks when trained on 100% of the training set. Errors are normalized using the learned-context network performance from Table 2.
Figure 5: Average test set errors for hold-out tasks. Errors are normalized using the learned-context network performance from Table 2. Hold-out tasks are trained on 1%, 5%, 10%, 25%, 50%, 75%, and 100% of the task data, and evaluated on the full test set. Models with 2, 4, and 8 task parameters are used.
task adaptation with only a few task parameters (Section 5.6). Increasing the task parameter dimension is observed to have a beneficial effect in cases where all tasks are trained simultaneously (Section 5.7). However, if the shared network model is to be used for new tasks, then a smaller parameter space may be preferable. The ideal task parameter dimension will likely have to be set by the practitioner in light of domain knowledge and the desired application of the model. The architecture facilitates task parameters that capture latent properties of the tasks (Section 5.7 and Appendix D), which can enable convenient workflows for updating and maintaining such models in practice.
Learned-context neural networks performed similarly to the benchmark architectures on the full datasets (Section 5.4). On the reduced datasets the learned-context neural network had less performance deterioration than the others (Section 5.5).
Training of the learned-context networks appears to be robust, and comparable to the other architectures discussed (Section 5.4). The construction used to prove Theorem 2 gives a motivation for initializing the task parameters to zero. Randomly initialized task parameters would have a higher chance of getting stuck in local minima with "task-encoding" properties. Zero initialization, on the other hand, encourages similar tasks to follow the same trajectories during optimization, which enables a grouping of similar tasks. This likely promotes a well-behaved parameter space, and reduces the chance that multiple regions represent the same phenomena.
## 7 Conclusion
The learned-context neural network is a simple, but powerful, multi-task learning architecture. Its properties make it well suited to problems with moderate amounts of data, and in situations where a low-dimensional, well-behaved task parameter space is beneficial for application and analysis of the model. The task adaptation mechanism yields universal approximation capabilities with only a single task parameter. Empirical studies show that the ideal task parameter dimension varies with the domain and model application, but the number of required task parameters is generally lower than that of comparable architectures.
## Acknowledgements
This work was supported by Solution Seeker Inc. and The Research Council of Norway.
## Appendix A Example network
The following describes the neural network example used in Section 4.2. The input vector is \(z^{\top}=\begin{bmatrix}x&\beta\end{bmatrix}\), with both \(x\) and \(\beta\) scalar. The network has three layers, with weights
\[W_{1} =\begin{bmatrix}1&L_{1}\\ 1&L_{2}\\ 1&L_{3}\\ 1&L_{4}\end{bmatrix},b_{1}=-\frac{1}{2}\begin{bmatrix}0\\ 1\\ 2\\ 3\end{bmatrix},\] \[W_{2} =\begin{bmatrix}2&-4&0&0\\ 0&0&2&-4\end{bmatrix},b_{2}=0,\] \[W_{3} =\begin{bmatrix}1&1\end{bmatrix},b_{3}=0.\]
For simplicity, we ignore the residual connections in this case. Recall that we only consider the ReLU activation function. The first layer maps the inputs to a four-dimensional hidden layer. Ignoring the task parameter, this mapping creates four identical unit slopes starting at 0.0, 0.5, 1.0, and 1.5. The second layer adds pairs of the previous hidden layer together, creating two triangle-shaped responses as the second hidden layer. These are added together in the third layer. The output is seen as the black lines in Figure 2.
### Derivation of dilation effect
We show that setting \(L=b_{1}\) in the network above creates a dilation effect. This was explored in Section 4.2. To simplify the analysis we take \(\beta\in(-1,1)\). Let \(z_{p}^{(2)}=x+L_{p}\beta+b_{1,p}\) be the \(p\)th element of the first hidden layer. Substituting \(L_{p}=b_{1,p}=-(p-1)/2\), we get
\[g(z_{p}^{(2)})=\begin{cases}x-\frac{1+\beta}{2}(p-1)&\text{ if }x\geq\frac{1+ \beta}{2}(p-1)\\ 0&\text{ otherwise.}\end{cases}\]
Continuing the notation for the second hidden layer, we get
\[g(z_{1}^{(3)}) =\begin{cases}0,&\text{ if }x<0,\\ 2x,&\text{ if }x\in\left[0,\frac{1+\beta}{2}\right),\\ -2x+2(1+\beta),&\text{ if }x\in\left[\frac{1+\beta}{2},1+\beta\right),\\ 0,&\text{ if }x\geq 1+\beta,\end{cases}\] \[g(z_{2}^{(3)}) =\begin{cases}0,&\text{ if }x<1+\beta,\\ 2x-2(1+\beta),&\text{ if }x\in\left[1+\beta,3\frac{1+\beta}{2}\right),\\ -2x+4(1+\beta),&\text{ if }x\in\left[3\frac{1+\beta}{2},2(1+\beta)\right),\\ 0,&\text{ if }x\geq 2(1+\beta),\end{cases}\]
The output is then given as \(y=g(z_{1}^{(3)})+g(z_{2}^{(3)})\), which is a piecewise linear function interpolating the points \((x,y)=(0,0),(0.5(1+\beta),1+\beta),(1+\beta,0),(1.5(1+\beta),1+\beta),(2(1+\beta ),0)\). This is equivalent with a dilation of both \(x\) and \(y\) with a factor \(1+\beta\).
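The derivation can be checked numerically with the small network above. In the sketch below (our own code), the peak of the first triangle sits at \(x=(1+\beta)/2\) with height \(1+\beta\), confirming the dilation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def example_net(x, beta):
    """Appendix A network with L = b_1 (the dilation setting), beta in (-1, 1)."""
    L = -0.5 * np.arange(4)                      # L_p = -(p-1)/2
    b1 = -0.5 * np.arange(4)                     # b_{1,p} = -(p-1)/2
    h1 = relu(x + L * beta + b1)                 # first hidden layer (4 units)
    W2 = np.array([[2.0, -4.0, 0.0, 0.0],
                   [0.0, 0.0, 2.0, -4.0]])
    h2 = relu(W2 @ h1)                           # second hidden layer (2 triangles)
    return h2.sum()                              # output layer W3 = [1, 1]

for beta in (-0.5, 0.0, 0.5):
    print(beta, example_net(0.5 * (1 + beta), beta))  # prints 0.5, 1.0, 1.5
```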
## Appendix B Optimizer and learning rate schedule
All neural networks are implemented and trained with PyTorch (Paszke et al., 2019). They are trained on a single GPU. Optimization is done by stochastic gradient descent with momentum (Bottou et al., 2018) and a learning rate scheduler.
The learning rate scheduler has two stages. Starting at \(10^{-8}\), the learning rate is first increased linearly during a warm-up stage (Nakamura et al., 2021; Arpit et al., 2019) until it reaches the peak learning rate.
The second stage is to train the model over several epochs until the loss converges (Chee and Toulis, 2018; Lang et al., 2019), at which point the learning rate is reduced by half. This is repeated until the learning rate is reduced back down below \(10^{-8}\), the maximum number of epochs is reached, or the loss is sufficiently close to zero. Loss convergence is determined by inspecting a window of the last epoch losses. A linear regression model with slope and intercept is fitted to the loss values. This is compared to a model with intercept only, using a likelihood ratio test (Wilks, 1938). Convergence is flagged if the test is above 0.51, which is an aggressive threshold. The test is implemented in Statsmodels (Seabold and Perktold, 2010). The new learning rate is kept for a minimum number of epochs equal to the window size.
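A sketch of such a convergence check is given below. It assumes that the 0.51 threshold is applied to the p-value of the likelihood-ratio test; the exact statistic thresholded in the original implementation may differ.

```python
import numpy as np
import statsmodels.api as sm

def loss_converged(losses, threshold=0.51):
    """Test whether a linear trend in the recent epoch losses is indistinguishable
    from a constant model. `losses` is the window of the last epoch losses."""
    t = np.arange(len(losses), dtype=float)
    full = sm.OLS(losses, sm.add_constant(t)).fit()      # intercept + slope
    reduced = sm.OLS(losses, np.ones_like(t)).fit()      # intercept only
    _, p_value, _ = full.compare_lr_test(reduced)        # Wilks (1938) likelihood ratio
    return p_value > threshold                           # assumed interpretation of 0.51
```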
The number of epochs and data batches vary with dataset size. For datasets with fewer than 100 000 data points we use 10 000 epochs of two batches, otherwise we use 1000 epochs of 20 batches. The warm-up stage is equal to 10% of the maximum allowed epochs, and the loss convergence is found with a window size equal to 1% of the epochs. Peak learning rate and momentum are treated as hyperparameters.
Neural network parameters are initialized by the He initializer (He et al., 2015). Task parameters are initialized to zero.
Linear mixed effect models are implemented and optimized with Statsmodels (Seabold and Perktold, 2010).
All data is transformed to approximately unit scale before training and evaluating models. Still, all figures and errors are given in the original units, unless stated otherwise.
## Appendix C Hyperparameters
All three neural network architectures require the number of residual blocks, hidden layer size, and network parameter regularization factor as hyperparameters. They also require the peak optimizer learning rate. Additionally, learned-context networks and last-layer MTL networks require the number of task parameters and the task parameter regularization factor.
Hyperparameters are optimized using the training data in Table 1. The training data is split into two parts, using one part for training and one part for validation during the search. A final model is then trained on all training data using the hyperparameter configuration with the best validation error. Optimization is done by a variant of the LIPO solver (Malherbe and Vayatis, 2017) implemented in dlib (King, 2009). The search runs for 25 iterations in all cases.
The range of values explored in the searches varies by datasets. Details are summarized in Table 6. To limit the number of hyperparameters, momentum is fixed to 0.7 and the number of residual blocks is fixed to two for all datasets and architectures. These values are found as reasonable choices for most cases.
Due to a large number of experiments and hyperparameter searches, we observe that some searches arrive at an optimal learning rate that is too high to be used in training the final model. It can be due to randomness in weight initialization and batch samples that allowed a high learning rate to succeed during the trials, only to diverge during final training. To address this, we multiply the peak learning rate by 0.9 and retry the training if the loss diverges.
Full hyperparameter searches are conducted for the experiments in Section 5.4 and Section 5.5. For the experiment in Section 5.6, the hyperparameters are copied from the final models in Section 5.4. For the experiment in Section 5.7, the hyperparameters are copied from the experiment using 50% training data in Section 5.5.
## Appendix D Qualitative study of task parameters
This section provides additional visualizations of the learned-context neural networks trained in Section 5. The intention is to give further insight into the qualitative behavior of the task parameters. We study the Inflation, Bike sharing, and Death rate datasets because they have a low dimensional input space that is convenient to visualize. The models with a scalar task parameter from Section 5.6 are used for all visualizations.
Figure 6 illustrates the Inflation data and the fitted model. The model
| Name | Min | Max |
|---|---|---|
| Peak learning rate | \(10^{-4}\) | 1.5 |
| Hidden layer size | 50 | 500 |
| Shared parameter regularization | \(10^{-15}\) | \(10^{-5}\) |
| Task parameter regularization | \(10^{-15}\) | \(10^{-3}\) |
| Number of task parameters | 1 | min(25, \(m\)) |

Table 6: Summary of hyperparameter search space. The number of task parameters is limited to the number of tasks \(m\) in the dataset, up to a maximum of 25.
appears as a smoothed version of the data. The task parameters range from -0.2 to 0.2. Higher task parameter values seem to represent countries where the increase begins in the 1990s, and lower values represent countries where the increase is well underway in the 1970s. The transition between these two categories is highly non-linear. For a task parameter equal to -0.2, the curve is initially steep and flattens towards the end. The curve then gradually transitions to an almost linear trend for values around -0.1. For even higher values, the curve starts with a flat phase that eventually ramps up, with the start time increasing along with the task parameter.
The bike-sharing dataset relates the number of bike trips to weather, calendar features, and time of day, over two years. Each month is a task, yielding 24 tasks. Figure 7 illustrates the average number of bike trips and the task parameters for each month. There is an increasing number of trips over the two years, combined with a seasonal effect of more activity during the summer months. The increase in activity in the second year may be due to an increase in the capacity or popularity of the system. The task parameters nicely follow the average trips taken during peak hours. A possible interpretation is that the task parameters capture some latent state of the system, such as the number of bikes being distributed. We emphasize that while the parameters are visualized as a connected line, they are _not_ regularized to follow a time-dependent pattern. All parameters are assumed to be independent and centered around zero. However, a time-dependent regularization between the task parameters could be considered if the model were to be used in practice.
The death rate dataset studies the rate of death at different ages, as a
Figure 6: Inflation dataset. The left plot is the data points, where each line is the normalized inflation rate for a country (recall that each country is a task). The right plot is the fitted models evaluated at the data points. In both plots, the lines are colored by the value of the task parameter for that country.
function of country, year, and sex. Age is grouped in intervals of five years, and the maximum age included is 80. All ages above 80 are grouped together with a death rate of one, which has been removed from the dataset in this study. The data is given in five different years, 2000, 2005, 2010, 2015, and 2019. Each country and year is modeled as an individual task, making each country associated with five tasks. This relationship is _not known_ by the model. Figure 8 illustrates the task parameters, with six countries highlighted. The year 2010 is investigated in detail in Figure 9, which illustrates the data and fitted models for the highlighted countries in Figure 8. It seems that lower task parameter values relate to higher death rates in general. For most countries the task parameters seem to be incrementally increased with the years, indicating a decrease in death rates for ages below 80. For instance, task parameters for Ethiopia (ETH) show a steady increase over time. This correlates with the increase in life expectancy at birth observed over the same period (GBD 2019 Ethiopia Subnational-Level Disease Burden Initiative Collaborators, 2022).
Haiti (HTI) 2010 stands out in Figure 8. This is due to the devastating earthquake of January 2010, which had a large death toll (Cavallo et al., 2010). In Figure 9 we see that this leads to a curve shape that is quite unique, with an elevated death rate across all ages. In this case, the other tasks fail to supply the information missing from the training data, and the resulting model overshoots on the test data. For the other tasks, the gaps left by the test data are covered nicely by related tasks. For instance, the female death rate in Zambia (ZMB) is almost perfectly captured with only six data points available for training, of which only one represents ages 50 and above.
Figure 7: Bike sharing dataset. The top plot is the average number of trips at different times of day for each month, with the standard deviations given as transparent bands. This plot only includes data from workdays. The bottom plot is the identified task parameter for each month.
Figure 8: Death rate dataset. Visualization of task parameters. Each combination of country and year is an independent task in the model formulation, but tasks from the same country are connected by a line. Six countries are highlighted in colors.
Figure 9: Death rate dataset. Data and fitted models for six countries for the year 2010. The country and corresponding task parameters for this year are given in the titles. Data is given in black and colored markers, where black is the training data and colored is the test data. Circle markers are used for males and square markers for females. Fitted models are given in dashed lines for males and solid lines for females.
We emphasize that the task parameters are not forced into the relationships seen in Figure 6, Figure 7, and Figure 8. The continuous and interpretable task parameters are only motivated by regularization towards zero. The discovered structures are due to the information in the training data.
| This paper explores learned-context neural networks, a multi-task learning architecture based on a fully shared neural network and an extended input vector containing learnable task parameters. The architecture is notable for its powerful task-adaptation mechanism, which facilitates a low-dimensional task parameter space. Theoretically, we show that a single scalar task parameter is sufficient for universal approximation of all tasks, which does not necessarily hold for comparable architectures. Empirical evidence supporting the practical usefulness of such small task parameter spaces is presented. The task parameter space is shown to be well-behaved, which simplifies workflows for updating the model as new data arrives, and task parameters can be estimated for new tasks while the shared parameters are kept frozen. |
2305.04660 | Rotational Slippage Prediction from Segmentation of Tactile Images | Adding tactile sensors to a robotic system is becoming a common practice to
achieve more complex manipulation skills than those robotics systems that only
use external cameras to manipulate objects. The key of tactile sensors is that
they provide extra information about the physical properties of the grasping.
In this paper, we implemented a system to predict and quantify the rotational
slippage of objects in hand using the vision-based tactile sensor known as
Digit. Our system comprises a neural network that obtains the segmented contact
region (object-sensor), to later calculate the slippage rotation angle from
this region using a thinning algorithm. Besides, we created our own tactile
segmentation dataset, which is the first one in the literature as far as we are
concerned, to train and evaluate our neural network, obtaining results of 95%
and 91% in Dice and IoU metrics. In real-scenario experiments, our system is
able to predict rotational slippage with a maximum mean rotational error of 3
degrees with previously unseen objects. Thus, our system can be used to prevent
an object from falling due to its slippage. | Julio Castaño-Amoros, Pablo Gil | 2023-05-08T12:23:47 | http://arxiv.org/abs/2305.04660v1 | # Rotational Slippage Prediction from Segmentation of Tactile Images
###### Abstract
Adding tactile sensors to a robotic system is becoming common practice for achieving more complex manipulation skills than robotic systems that rely only on external cameras to manipulate objects. The key advantage of tactile sensors is that they provide extra information about the physical properties of the grasp. In this paper, we implemented a system to predict and quantify the rotational slippage of in-hand objects using the vision-based tactile sensor known as Digit. Our system comprises a neural network that obtains the segmented contact region (object-sensor) and then calculates the slippage rotation angle from this region using a thinning algorithm. In addition, we created our own tactile segmentation dataset, which to the best of our knowledge is the first of its kind in the literature, to train and evaluate our neural network, obtaining results of 95% and 91% in the Dice and IoU metrics. In real-scenario experiments, our system is able to predict rotational slippage with a maximum mean rotational error of 3 degrees on previously unseen objects. Thus, our system can be used to prevent an object from falling due to slippage.
## I Introduction and Related Work
Traditionally, the methods to carry out robotic manipulation tasks used 2D or 3D vision sensors [1], which only take into account the geometric properties of the objects to perform the grasping. In contrast, with tactile sensors, it is possible to measure and react to physical properties (mass distribution, center of gravity or friction) in order to achieve a stable grasping [2].
In the last twenty years, several tactile sensors have been designed using different hardware technologies [3], although the latest trend lies in optical tactile sensors [4]. In this manuscript, we present an algorithm to estimate the rotation angle of an object that is being manipulated when slippage occurs. This method is based on segmentation neural networks to obtain the contact region (object-sensor) and traditional computer vision techniques to calculate the rotation angle, and it is applied to the vision-based tactile sensor Digit [5], which does not contain visual markers in order to keep its cost low.
Estimating the contact region between the robot's fingertips and the grasped object has been approached in different ways. For example, by subtracting contact and no-contact tactile images [6], detecting and grouping visual markers [7], through 3D reconstruction and photometric algorithms [8], or using neural networks [9, 10]. In contrast, although our work is inspired by these previous articles, the main differences lie in the fact that we use the Digit sensors, which have no markers [7] and do not produce depth information [8], together with state-of-the-art segmentation neural networks, which are more robust than subtraction operations [6] and vanilla CNNs [9], and whose training is more stable than that of GANs [10].
Slippage is a common physical event that occurs during object manipulation and has been addressed for several years using different approaches. For example, detecting binary slippage events with traditional image preprocessing techniques [11], combining convolutional and recurrent neural networks to classify slip into clockwise and counterclockwise rotation [12], or estimating the slip rotation angle using vision-based tactile sensors with markers [13] or force/torque sensors [14]. In this paper, we draw inspiration from those methods that characterize and quantify rotational slip.
## II Method
We propose a two-stage method for touch region segmentation and rotational slippage prediction. The first stage of our method is based on a segmentation neural network applied to vision-based tactile sensing, which we call the Tactile Segmentation Neural Network (TSNN). In this work, our goal is only to segment the contact region, so we decided to use the DeepLabV3+ [15] architecture for experimentation. DeepLabV3+ is well known for using an encoder-decoder architecture to perform image segmentation, and for introducing a new layer in its architecture, which combines atrous (dilated) and depth-wise separable convolutions. This combination reduces computational complexity while maintaining similar or even better performance than previous versions. As the encoder, the authors used a modified version of the Xception architecture, called Aligned Xception, which replaces all the max pooling layers with depth-wise separable convolutions to perform feature extraction. The decoder, in contrast, is a simpler part of the architecture, which only comprises convolution, concatenation, and upsampling layers to transform the intermediate features into the output.
The second stage of our method estimates the angle of rotation of the segmented region of contact using a traditional computer vision thinning algorithm (Skeleton method) [16] that blackens points in the binary contact region using an 8-square neighborhood and different connectivity conditions. Other approaches, based on different neural networks such
as Unet++ [17] and PSPNet [18] or different algorithms to estimate the angle such as PCA or ellipse fitting, were tested. The complete system is shown in Fig 1.
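A rough sketch of the second stage is given below. It assumes a binary contact mask as input, uses scikit-image's `skeletonize` as the thinning step, and estimates the skeleton's dominant direction from the leading singular vector of the centred skeleton pixels; the exact angle computation used in the paper may differ. The rotational slippage is then the difference between the current and the initial angle.

```python
import numpy as np
from skimage.morphology import skeletonize

def contact_angle(mask):
    """Orientation (degrees) of the thinned contact region in a binary mask."""
    skel = skeletonize(mask.astype(bool))               # thinning / Skeleton step
    ys, xs = np.nonzero(skel)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)  # dominant direction of skeleton
    vx, vy = vt[0]
    return np.degrees(np.arctan2(vy, vx))

# slip_angle = contact_angle(current_mask) - contact_angle(initial_mask)
```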
## III Experimentation and results
We have generated our own dataset, as we have not found any existing dataset related to tactile segmentation to use as the basis of our experimentation. Our tactile segmentation dataset comprises 3675 tactile images with their respective labelled contact regions. We have used 16 objects from the YCB dataset to record it, capturing between 200 and 250 tactile images per object. The objects contain different textures, rigidity, weight, geometries, etc.
To train the TSNN we use the Dice and IoU metrics, an NVIDIA A100 Tensor Core GPU with 40 GB of RAM, and the following optimal hyperparameters: a batch size of 32, a learning rate of 1e-4, the Adam optimizer, the Focal loss, and 30 training epochs.
Table I shows the results obtained by DeepLabV3+ TSNN in the testing experiment. DeepLabV3+ is able to segment tactile images with high accuracy and in real-time execution. Besides, this TSNN is 3 ms faster than other segmentation neural networks (Unet++ and PSPNet) while maintaining the same performance, thus, achieving a better trade-off between segmentation accuracy and prediction time.
Figure 2(a) shows different examples of contact region segmentation carried out by the DeepLabV3+ TSNN, and Fig. 2(b) shows our robotic manipulation setup with a UR5 robot, two DIGIT sensors, a ROBOTIQ gripper, the object to grasp with the aruco markers attached, and an Intel RealSense camera to calculate the ground truth angle.
The task consists of grasping and lifting an object while the tactile segmentation and rotational slippage angle are estimated. The predicted angle is calculated as the difference between the current and the initial angle obtained with the Skeleton method described earlier, while the ground truth angle is calculated using two aruco markers as visual references. Our system was evaluated with seven unseen objects (1 to 7 in Fig. 3) and two seen objects from our tactile segmentation dataset (8 and 9 in Fig. 3).
The experimentation comprises 45 grasp-and-lift trials in total (five per object), during which the rotational error is calculated in degrees. Figure 3 shows the mean rotational error of the 5 grasps and lifts for each object. Note that objects 6 and 8 cause more error and deviation because object 6's weight is higher than that of the rest of the objects, and object 8 has higher curvature on its surface, which causes more saturation in the sensor. Our system is able to predict rotational slippage with an overall mean rotational error of **1.854\({}^{\circ}\)\(\pm\) 0.988\({}^{\circ}\)**, that is to say, a maximum mean error of 3 degrees in the worst case. Figure 4 shows some examples of the prediction of rotational slippage with four of the aforementioned objects.
## IV Conclusions
In this paper, we propose a model-based system to predict rotational slippage during the grasping and lifting of an object, achieving a mean error value of **1.854\({}^{\circ}\)\(\pm\) 0.988\({}^{\circ}\)**, compared
| | **Dice** | **IoU** | **Time (s)** |
|---|---|---|---|
| **DeepLabV3+** | 0.956 \(\pm\) 0.013 | 0.914 \(\pm\) 0.023 | 0.006 \(\pm\) 0.002 |
| **PSPNet** | 0.951 \(\pm\) 0.014 | 0.907 \(\pm\) 0.025 | 0.006 \(\pm\) 0.002 |

TABLE I: DeepLabV3+ TSNN results in terms of Dice, IoU and inference time metrics, using the ResNet18 backbone
Fig. 1: Diagram of our system combining both stages
Fig. 2: a) Examples of rotation angle calculation for slipping during lift task: DIGIT image (first row), ground truth (second row), prediction (third row), b) Robotic manipulation setup with different objects
with the error of **3.96\({}^{\circ}\)\(\pm\) UNK** from [13], and the error of **4.39\({}^{\circ}\)\(\pm\) 0.18\({}^{\circ}\)** from [14]. Although we could not carry out an experimental comparison because we do not have their sensors available, some objects were used both in this work and in theirs. Our system also has some limitations regarding the shape of the contact region. If this shape is similar to a circle, it becomes impossible to calculate its rotation movement. In that case, we propose to grasp the object by surfaces with small curvature.
| Adding tactile sensors to a robotic system has become a common practice for achieving more complex manipulation skills compared to robotic systems that manipulate objects using only external cameras. The key point of tactile sensors is that the information they provide is additional information about the physical properties of the grasped object. In this paper, we implemented a system that predicts and quantifies the rotational slippage of objects using Digit, a vision-based tactile sensor for in-hand manipulation. The system consists of a neural network that obtains the segmented contact region (object-sensor) and uses a thinning algorithm to compute the rotational slippage angle from this region. Furthermore, we created our own tactile segmentation dataset, the first of its kind proposed in the literature, and used it to train and evaluate the neural network, obtaining 95% and 91% in the Dice and IoU metrics, respectively. |
2310.11965 | Filling in the Gaps: Efficient Event Coreference Resolution using Graph
Autoencoder Networks | We introduce a novel and efficient method for Event Coreference Resolution
(ECR) applied to a lower-resourced language domain. By framing ECR as a graph
reconstruction task, we are able to combine deep semantic embeddings with
structural coreference chain knowledge to create a parameter-efficient family
of Graph Autoencoder models (GAE). Our method significantly outperforms
classical mention-pair methods on a large Dutch event coreference corpus in
terms of overall score, efficiency and training speed. Additionally, we show
that our models are consistently able to classify more difficult coreference
links and are far more robust in low-data settings when compared to
transformer-based mention-pair coreference algorithms. | Loic De Langhe, Orphée De Clercq, Veronique Hoste | 2023-10-18T13:44:58 | http://arxiv.org/abs/2310.11965v1 | # Filling in the Gaps:
###### Abstract
We introduce a novel and efficient method for Event Coreference Resolution (ECR) applied to a lower-resourced language domain. By framing ECR as a graph reconstruction task, we are able to combine deep semantic embeddings with structural coreference chain knowledge to create a parameter-efficient family of Graph Autoencoder models (GAE). Our method significantly outperforms classical mention-pair methods on a large Dutch event coreference corpus in terms of overall score, efficiency and training speed. Additionally, we show that our models are consistently able to classify more difficult coreference links and are far more robust in low-data settings when compared to transformer-based mention-pair coreference algorithms.
## 1 Introduction
Event coreference resolution (ECR) is a discourse-centered NLP task in which the goal is to determine whether or not two textual events refer to the same real-life or fictional event. While this is a fairly easy task for human readers, it is far more complicated for AI algorithms, which often do not have access to the extra-linguistic knowledge or discourse structure overview that is required to successfully connect these events. Nonetheless, ECR, especially when considering cross-document settings, holds interesting potential for a large variety of practical NLP applications such as summarization Liu and Lapata (2019), information extraction Humphreys et al. (1997) and content-based news recommendation Vermeulen (2018).
However, despite the many potential avenues for ECR, the task remains highly understudied for comparatively lower-resourced languages. Furthermore, in spite of significant strides made since the advent of transformer-based coreference systems, a growing number of studies has questioned the effectiveness of such models. It has been suggested that classification decisions are still primarily based on the surface-level lexical similarity between the textual spans of event mentions Ahmed et al. (2023); De Langhe et al. (2023), while this is far from the only aspect that should be considered in the classification decision. Concretely, in many models coreferential links are assigned between similar mentions even when they are not coreferent, leading to a significant number of false positive classifications, such as between Examples 1 and 2.
1. The French president Macron met with the American president for the first time today
2. French President Sarkozy met the American president
We believe that the fundamental problem with this method stems from the fact that in most cases events are only compared in a pairwise manner and not as part of a larger coreference chain. The evidence that transformer-based coreference resolution is primarily based on superficial similarity leads us to believe that the current pairwise classification paradigm for transformer-based event coreference is highly inefficient, especially for studies in lower-resourced languages where the state of the art still often relies on the costly process of fine-tuning large monolingual BERT-like models De Langhe et al. (2022).
In this paper we aim to both address the lack of studies in comparatively lower-resourced languages, as well as the more fundamental concerns w.r.t. the task outlined above. We frame ECR as a graph reconstruction task and introduce a family of graph autoencoder models which consistently outperforms the traditional transformer-based methods on a large Dutch ECR corpus, both in terms of accuracy and efficiency. Additionally, we introduce a language-agnostic model variant which disregards the use of semantic features entirely and even outperforms transformer-based classification in some
situations. Quantitative analysis reveals that the lightweight autoencoder models can consistently classify more difficult mentions (cfr. Examples 1 and 2) and are far more robust in low-data settings compared to traditional mention-pair algorithms.
## 2 Related Work
### Event Coreference Resolution
The primary paradigm for event coreference resolution takes the form of a binary mention-pair approach. This method generates all possible event pairs and reduces the classification to a binary decision (coreferent or not) between each event pair. A large variety of classical machine learning algorithms has been tested using the mention-pair paradigm such as decision trees Cybulska and Vossen (2015), support vector machines Chen et al. (2015) and standard deep neural networks Nguyen et al. (2016).
More recent work has focused on the use of LLMs and transformer encoders Cattan et al. (2021), with span-based architectures attaining the best overall results Joshi et al. (2020); Lu and Ng (2021). It has to be noted that mention-pair approaches relying on LLMs suffer most from the limitations discussed in Section 1. In an effort to mitigate these issues some studies have sought to move away from the pairwise computation of coreference by modelling coreference chains as graphs instead. These methods' primary goal is to create a structurally-informed representation of the coreference chains by integrating the overall document Fan et al. (2022); Tran et al. (2021) or discourse Huang et al. (2022) structure. Other graph-based methods have focused on commonsense reasoning Wu et al. (2022).
Research for comparatively lower-resourced languages has generally followed the paradigms and methods described above and has focused on languages such as Chinese Mitamura et al. (2015), Arabic NIST (2005) and Dutch Minard et al. (2016).
### Graph Autoencoders
Graph Autoencoder models were introduced by Kipf and Welling (2016) as an efficient method for graph reconstruction tasks. The original paper introduces both variational graph autoencoders (VGAE) and non-probabilistic graph autoencoders (GAE) networks. The models are parameterized by a 2-layer graph-convolutional network (GCN) Kipf and Welling (2016) encoder and a generative inner-product decoder between the latent variables. While initially conceived as lightweight models for citation network prediction tasks, both the VGAE and GAE have been successfully applied to a wide variety of applications such as molecule design Liu et al. (2018), social network relational learning Yang et al. (2020) and 3D scene generation Chattopadhyay et al. (2023). Despite their apparent potential for effectively processing large amounts of graph-structured data, application within the field of NLP has been limited to a number of studies in unsupervised relational learning Li et al. (2020).
## 3 Experiments
### Data
Our data consists of the Dutch ENCORE corpus De Langhe et al. (2022), which in its totality consists of 12,875 annotated events spread over 1,015 documents that were sourced from a collection of Dutch (Flemish) newspaper articles. Coreferential relations between events were annotated at the within-document and cross-document level.
### Experimental Setup
#### 3.2.1 Baseline Coreference Model
Our baseline model consists of the Dutch monolingual BERTje model de Vries et al. (2019) fine-tuned for cross-document ECR. First, each possible event pair in the data is encoded by concatenating the two events and by subsequently feeding these to the BERTje encoder. We use the token representation of the classification token _[CLS]_ as the aggregate embedding of each event pair, which is subsequently passed to a softmax-activated classification function. Finally, the results of the text pair classification are passed through a standard agglomerative clustering algorithm Kenyon-Dean et al. (2018); Barhom et al. (2019) in order to obtain output in the form of coreference chains.
We also train two parameter-efficient versions of this baseline model using the distilled Dutch Language model RobBERTje Delobelle et al. (2022) and a standard BERTje model trained with bottleneck adapters Pfeiffer et al. (2020).
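The following is a minimal, hedged sketch of this mention-pair setup using the Hugging Face `transformers` library; the public BERTje checkpoint name, the pairing function, and the classification head are illustrative assumptions rather than the authors' exact implementation.

```python
# Hedged sketch of the mention-pair baseline: each candidate event pair is encoded as a
# sentence pair, and the pooled [CLS] representation is classified as coreferent or not.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "GroNLP/bert-base-dutch-cased"  # public BERTje checkpoint (assumption)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def coreference_probability(event_a: str, event_b: str) -> float:
    """Score one event-mention pair; fine-tuning on labelled pairs is assumed."""
    inputs = tokenizer(event_a, event_b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Pairwise scores would then be turned into chains by agglomerative clustering,
# e.g. with sklearn.cluster.AgglomerativeClustering over (1 - probability) distances.
```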
#### 3.2.2 Graph Autoencoder Model
We make the assumption that a coreference chain can be represented by an undirected, unweighted graph \(\mathcal{G}=(V,E)\) with \(|V|\) nodes, where each node represents an event and each edge \(e\in E\) between
two nodes denotes a coreferential link between those events. We frame ECR as a graph reconstruction task where a partially masked adjacency matrix \(A\) and a node-feature matrix \(X\) are used to predict all original edges in the graph. We employ both the VGAE and GAE models discussed in Section 2.2. In a non-probabilistic setting (GAE) the coreference graph is obtained by passing the adjacency matrix \(A\) and node-feature matrix \(X\) through a Graph Convolutional Neural Network (GCN) encoder and then computing the reconstructed matrix \(\hat{A}\) from the latent embeddings \(Z\):
\[Z=GCN(X,A) \tag{1}\]
\[\hat{A}=\sigma(ZZ^{\top}) \tag{2}\]
For a detailed overview of the (probabilistic) variational graph autoencoder we refer the reader to the original paper by Kipf and Welling (2016).
Our experiments are performed in a cross-document setting, meaning that the input adjacency matrix \(A\) contains all events in the ENCORE dataset. Following the original approach by Kipf and Welling (2016) we mask 15% of the edges, 5% to be used for validation and the remaining 10% for testing. An equal amount of non-edges is randomly sampled from \(A\) to balance the validation and test data.
We extract masked edges and non-edges and use them to build the training, validation and test sets for the mention-pair baseline models detailed above, ensuring that both the mention-pair and graph autoencoder models have access to exactly the same data for training, validation and testing. We define the encoder network with a 64-dimension hidden layer and 32-dimension latent variables. For all experiments we train for a total duration of 200 epochs using an Adam optimizer (Kingma and Ba, 2014) and a learning rate of 0.001.
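As an illustration of this setup, the sketch below builds the 2-layer GCN encoder and inner-product decoder with PyTorch Geometric; the use of `GAE` and `RandomLinkSplit` is our assumption about one convenient way to realize the described edge masking and training schedule, not the authors' exact code.

```python
# Minimal GAE sketch for coreference-graph reconstruction (Eqs. (1)-(2)):
# a 2-layer GCN encoder (64 hidden, 32 latent) and an inner-product decoder.
import torch
from torch_geometric.nn import GCNConv, GAE
from torch_geometric.transforms import RandomLinkSplit

class GCNEncoder(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels=64, latent_channels=32):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, latent_channels)

    def forward(self, x, edge_index):
        return self.conv2(self.conv1(x, edge_index).relu(), edge_index)  # Z = GCN(X, A)

def train_gae(data, epochs=200, lr=1e-3):
    # Hold out 5% of edges for validation and 10% for testing, with an equal number of
    # randomly sampled non-edges (cf. Section 3.2.2); validation is omitted for brevity.
    split = RandomLinkSplit(num_val=0.05, num_test=0.10, is_undirected=True,
                            split_labels=True, add_negative_train_samples=False)
    train_data, val_data, test_data = split(data)

    model = GAE(GCNEncoder(data.num_node_features))  # inner-product decoder by default
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        model.train()
        optimizer.zero_grad()
        z = model.encode(train_data.x, train_data.edge_index)
        loss = model.recon_loss(z, train_data.pos_edge_label_index)
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        z = model.encode(train_data.x, train_data.edge_index)
        auc, ap = model.test(z, test_data.pos_edge_label_index,
                             test_data.neg_edge_label_index)
    return model, auc, ap  # link-prediction AUC/AP; CONLL scoring happens downstream
```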
We construct node features through Dutch monolingual transformer models by average-pooling token representations for each token in the event span in the models' final hidden layer, resulting in a 768-dimensional feature vector for each node in the graph. For this we use the Dutch BERTje model (de Vries et al., 2019), a Dutch sentence-BERT model (Reimers and Gurevych, 2019) and the Dutch RoBERTa-based RobBERT model (Delobelle et al., 2020). Additionally, we create a second feature set for the BERTje and RobBERT models where each event is represented by the concatenation of the last 4 layers' average-pooled token representations Devlin et al. (2018). This in turn results in a 3072-dimensional feature vector.
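A sketch of this feature-extraction step is given below; the checkpoint name and the character-offset handling are assumptions made for illustration.

```python
# Build a node feature for one event span: average-pool the final-layer token vectors
# (768-d) or concatenate the average-pooled vectors of the last four layers (3072-d).
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
enc = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased", output_hidden_states=True)

def event_feature(sentence: str, span: tuple, last_k: int = 1) -> torch.Tensor:
    """span = (char_start, char_end) of the event mention inside the sentence."""
    inputs = tok(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = inputs.pop("offset_mapping")[0]
    with torch.no_grad():
        hidden = enc(**inputs).hidden_states              # tuple of (1, T, 768) tensors
    mask = (offsets[:, 0] < span[1]) & (offsets[:, 1] > span[0])  # tokens inside the span
    layers = [h[0, mask].mean(dim=0) for h in hidden[-last_k:]]
    return torch.cat(layers)                               # 768-d (k=1) or 3072-d (k=4)
```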
Finally, we also evaluate a language-agnostic featureless model where \(X\) is represented by the identity matrix of \(A\).
#### 3.2.3 Hardware Specifications
The baseline coreference algorithms were trained and evaluated on 2 Tesla V100-SXM2-16GB GPUs. Due to GPU memory constraints, the Graph encoder models were all trained and evaluated on a single 2.6 GHz 6-Core Intel Core i7 CPU.
## 4 Results and Discussion
Results from our experiments are disclosed in Table 1. Results are reported through the CONLL F1 metric, an average of 3 commonly used metrics for coreference evaluation: MUC (Vilain et al., 1995),
| **Model** | **CONLL F1** | **Training Runtime (s)** | **Inference Runtime (s)** | **Trainable Parameters** | **Disk Space (MB)** |
| --- | --- | --- | --- | --- | --- |
| MP RobBERTje | 0.767 | 7962 | 16.31 | 74M | 297 |
| MP BERTje\({}_{ADPT}\) | 0.780 | 12 206 | 20.61 | 0.9M | 3.5 |
| MP BERTje | 0.799 | 9737 | 21.78 | 110M | 426 |
| GAE NoFeatures | 0.832 \(\pm\) 0.008 | 1006 | 0.134 | 825856 | 3.2 |
| GAE BERTje\({}_{768}\) | 0.835 \(\pm\) 0.010 | 975 | 0.263 | 51200 | 0.204 |
| GAE BERTje\({}_{3072}\) | **0.852 \(\pm\) 0.006** | 1055 | 0.294 | 198656 | 0.780 |
| GAE RobBERT\({}_{768}\) | 0.838 \(\pm\) 0.004 | 1006 | 0.273 | 51200 | 0.204 |
| GAE RobBERT\({}_{3072}\) | 0.841 \(\pm\) 0.007 | 1204 | 0.292 | 198656 | 0.780 |
| GAE SBERT | 0.801 \(\pm\) 0.002 | 982 | 0.291 | 51200 | 0.204 |
| VGAE NoFeatures | 0.824 \(\pm\) 0.009 | 1053 | 0.139 | 827904 | 3.2 |
| VGAE BERTje\({}_{768}\) | 0.822 \(\pm\) 0.011 | 1233 | 0.282 | 53248 | 0.212 |
| VGAE BERTje\({}_{3072}\) | 0.842 \(\pm\) 0.009 | 1146 | 0.324 | 200704 | 0.788 |
| VGAE RobBERT\({}_{768}\) | 0.828 \(\pm\) 0.0021 | 1141 | 0.288 | 53248 | 0.212 |
| VGAE RobBERT\({}_{3072}\) | 0.831 \(\pm\) 0.004 | 1209 | 0.301 | 200704 | 0.788 |
| VGAE SBERT | 0.773 \(\pm\) 0.012 | 1185 | 0.295 | 53248 | 0.212 |

Table 1: Results for the cross-document event coreference task. We report the average CONLL score and standard deviation over 3 training runs with different random seed initialization for the GCN weight matrices (GAE/VGAE) and classification heads (Mention-Pair models). Inference runtime is reported for the entire test set.
B\({}^{3}\)[11] and CEAF [12]. We find that the graph autoencoder models consistently outperform the traditional mention-pair approach. Moreover, we find the autoencoder approach significantly reduces model size, training time and inference speed even when compared to parameter-efficient transformer-based methods. We note that the VGAE models perform slightly worse compared to their non-probabilistic counterparts, which is contrary to the findings in Kipf and Welling (2016). This can be explained by the use of more complex acyclic graph data in the original paper. In this more uncertain context, probabilistic models would likely perform better.
As a means of quantitative error analysis, we report the average Levenshtein distance between two event spans for the True Positive (TP) pairs in our test set in Figure 1. Logically, if graph-based models are able to better classify harder (i.e., non-similar) edges, the average Levenshtein distance for predicted TP edges should be higher than for the mention-pair models. For readability's sake we only include results for the best-performing GAE-class models. A more detailed table can be found in the Appendix. We find that the average distance between TP pairs increases for our introduced graph models, indicating that graph-based models can, to some extent, mitigate the pitfalls of mention-pair methodologies as discussed in Section 1.
## 5 Ablation Studies
We gauge the robustness of the graph-based models in low-data settings by re-running the original experiment and continually reducing the available training data by increments of 10%. Figure 2 shows the CONLL F1 score for each of the models with respect to the available training data size. Also here, only the best-performing GAE-class models are visualized and an overview of all models' performance can be found in the Appendix. Surprisingly, we find that training the model on as little as 5% of the total amount of edges in the dataset can already lead to satisfactory results. Logically, feature-less models suffer from a significant drop in performance when available training data is reduced. We also find that the overall drop in performance is far greater for the traditional mention-pair model than it is for the feature-based GAE-class models in low-data settings. Overall, we conclude that the introduced family of models can be a lightweight and stable alternative to traditional mention-pair coreference models, even in settings with little to no available training data.
## 6 Conclusion
We show that ECR through graph autoencoders significantly outperforms traditional mention-pair approaches in terms of performance, speed and model size in settings where coreference chains are at least partially known. Our method provides a fast and lightweight approach for processing large cross-document collections of event data. Additionally, our analysis shows that combining BERT-like embeddings and structural knowledge of coreference chains mitigates the issues in mention-pair classification w.r.t the dependence on surface-form lexical similarity. Our ablation experiments reveal that only a very small number of training edges is needed to obtain satisfactory performance.
Future work will explore the possibility of combining mention-pair models with the proposed graph autoencoder approach in a pipeline setting in order to make it possible to employ graph reconstruction models in settings where initially all edges in the graph are unknown. Additionally, we aim to perform more fine-grained analyses, both quantitative and qualitative, regarding the type of errors made by graph-based coreference models.
Figure 1: Average Levenshtein distance for True Positive (TP) classifications across all models
Figure 2: CONLL F1 performance with respect to the available training data.
## 7 Limitations
We identify two possible limitations with the work presented above. First, by framing coreference resolution as a graph reconstruction task we assume that at least some coreference links in the cross-document graph are available to train on. However, we note that this issue can in part be mitigated by a simple exact match heuristic for event spans on unlabeled data. Moreover, in most application settings it is not inconceivable that at least a partial graph is available.
A second limitation stems from the fact that we modelled coreference chains as undirected graphs. It could be argued that some coreferential relationships such as pronominal anaphora could be more accurately modelled using directed graphs instead.
| ```
We introduce a novel and efficient method for Event Coreference Resolution (ECR) applied to a lower-resourced language domain. By framing ECR as a graph reconstruction task and combining deep semantic embeddings with structural coreference-chain knowledge, we create a parameter-efficient family of Graph Autoencoder models (GAE). On a large Dutch event coreference corpus, our method significantly outperforms classical mention-pair approaches in terms of overall score, efficiency, and training speed. Furthermore, we show that our models consistently classify more difficult coreference links and are far more robust in low-data settings than transformer-based mention-pair coreference algorithms.
``` |
2304.04549 | Towards a Blockchain-based Software Engineering Education | Blockchain technologies for rewards in education are gaining traction as a
promising approach to motivate student learning and promote academic
achievement. By providing tangible rewards for educational attainment and
engagement, such as digital tokens, educators can motivate learners to take a
more active role in their learning and increase their sense of ownership and
responsibility for their academic outcomes. In this context, this work proposes
the Software Engineering Skill (SES) token as a way of rewarding students in
order to improve their experiences in Software Engineering Education (SEE). We
performed a proof of concept and conclude that SES token can be deployed in a
platform to support SEE. | Filipe Fernandes, Cláudia Werner | 2023-04-04T19:20:28 | http://arxiv.org/abs/2304.04549v1 | # Towards a Blockchain-based Software Engineering Education
###### Abstract
Blockchain technologies for rewards in education are gaining traction as a promising approach to motivate student learning and promote academic achievement. By providing tangible rewards for educational attainment and engagement, such as digital tokens, educators can motivate learners to take a more active role in their learning and increase their sense of ownership and responsibility for their academic outcomes. In this context, this work proposes the Software Engineering Skill (SES) token as a way of rewarding students in order to improve their experiences in Software Engineering Education (SEE). We performed a proof of concept and conclude that SES token can be deployed in a platform to support SEE.
## 1 Introduction
Blockchain technology in the field of education is currently in its nascent stage, with only a limited number of educational institutions adopting this technology. This technology could disrupt the traditional role of educational institutions as certification agents, and offer students greater access to learning opportunities [20]. According to [17], blockchain technology in education can provide equal opportunities for students to develop themselves through the implementation of a token economy in different classroom settings. In educational contexts, the adoption of a token economy is perceived as an essential mechanism for sustaining interest and engagement in the educational process.
For this reason, this work proposes the Software Engineering Skill (SES) token as an approach to support blockchain-based Software Engineering Education (SEE). SES token aims to engage and motivate students and developers in order to improve experiences in SEE, as well as increase developer contributions on the educational platform providing SES tokens.
This paper is organized as follows: Section 2 presents the research context of which this work is part. Our proposal is described in Section 3 and a proof of concept is presented in Section 4. Finally, our conclusions and future directions are described in Section 5.
## 2 Research Context
This work is an extension of a doctoral thesis in progress that proposes an approach to enable the Metaverse to support both XR applications (XR apps) development based on software reuse techniques and mechanisms to improve learning outcomes in Software Engineering (SE), allowing educators and students to have immersive experiences and increasing adoption of Immersive Learning (iL) in SEE [12]. Metaverse-based Software Engineering Education (MetaSEE) approach aims to provide an interoperable and scalable structure composed of the Metaverse's main concepts and technologies grouped in five layers: Physical, Virtual, Metaverse Engine, MetaSEE, and Infrastructure.
_Physical Layer_ corresponds to the main entities external to the Metaverse that belong to the physical and real world. _Virtual Layer_ establishes the main components of the "virtualization" of physical layer elements. _Metaverse Engine Layer_ is composed of general _Technologies_, as well as _Economics_ and _Security_ of the Metaverse. _MetaSEE Layer_ is the main contribution to support SEE through the Metaverse. _Development Tools_ component should provide a set of mechanisms to facilitate the development of XR apps for SEE considering the range of complexity and characteristics involved. _Integration Tools_ component should provide reusable functionality for SEE. _Learning Analytics (LA)_ is the component that must guarantee the maintenance of the learning performance of the Metaverse for SEE users. _Infrastructure Layer_ deals with network and decentralization aspects of the Metaverse.
As one of the results, a platform has been developed in order to validate and evaluate the proposal. In general, MetaSEE platform is composed of a web application and XR apps. From the web application, students can search and access Virtual Worlds (VWs) created by educators and other students. These VWs are mainly composed of SE-specific features, such as Unified Modeling Language (UML) diagrams, code editing, importing repositories, management tools, etc. There is an architecture to support developers' contributions to add SE-specific features (extensions) to the platform.
In order to contribute to the MetaSEE approach, more specifically in the _Decentralized Storage_ component of _Infrastructure Layer_, this work proposes to use blockchain technology to create a tokenization mechanism, which will be detailed in Section 3.
## 3 Blockchain-based Software Engineering Education
The main goal of this work is to propose a solution that meets the _Decentralized Storage_ component in order to contribute to the MetaSEE approach. This work designs a mechanism through tokenization with the purpose of fostering a digital reward in order to improve student engagement, as well as motivating developers to contribute extensions (plugins) to the MetaSEE approach.
Therefore, we defined the SES as a MetaSEE token to engage and motivate students and developers. Considering the MetaSEE approach context, SES token can provide a mechanism based on "learn to earn" for students [14].
The use of the SES token can provide tangible incentives for students, which can be used as rewards for achieving learning goals or performing other activities within the platform. In addition, creating a reward system can help make the learning process
more playful and engaging, encouraging students to dedicate themselves more to their studies.
SES token can also help make the platform more interactive and collaborative. For example, students can be rewarded for helping other students solve problems or answer questions. This can create a more engaged and collaborative community of students, which can be an additional source of motivation for learning. Additionally, SES token can be used as a way to record and recognize student achievements. For example, successful completion of an activity can result in a token being issued, which can be used to unlock additional content or access special features. This can help create a sense of accomplishment and progress for students, which can motivate them to keep learning and progressing.
Another way and use of the SES token is in the gamification of the learning experience, making the MetaSEE platform more attractive and engaging. By setting clear goals and rewards for students, SES token can help create a more exciting and challenging learning environment that can keep students motivated and engaged longer.
For developers, SES token can be an effective mechanism to motivate developers to contribute to the MetaSEE platform. Developers can receive tokens as a reward for contributing new extensions and bug fixes. These rewards can provide a form of recognition for work done and can be used to encourage a culture of collaboration in the developer community. This can result in increased participation as well as increased SE-specific features through extensions.
## 4 Proof of Concept
This section describes the SES token Proof of Concept (PoC). Our implementation was developed with the JavaScript (JS) and Solidity1 languages, as well as the Hardhat2 framework. Solidity is a smart contract programming language used on the Ethereum platform, and Hardhat is a software development environment for Ethereum that allows testing, compiling, and deploying smart contracts. Hardhat is a useful tool for developers working with Solidity as it simplifies and automates many common smart contract development tasks.
Footnote 1: [https://soliditylang.org/](https://soliditylang.org/)
Footnote 2: [https://hardhat.org/](https://hardhat.org/)
To demonstrate a basic SES token transaction between accounts, we implemented a smart contract in Solidity that defines a custom ERC20 token called _SESkillToken_, as shown in Figure 1. ERC20 defines a set of interfaces and standards for implementing digital tokens on the blockchain; since _SESkillToken_ implements this standard, it is compatible with other tokens and applications that also follow ERC20. Lines 5 to 7 import contracts from OpenZeppelin3 (an open-source repository of standard contracts), namely _ERC20_, _ERC20Capped_ and _ERC20Burnable_, from which _SESkillToken_ inherits functionality in addition to the custom functionality defined in the contract itself.
```
 1  // SPDX-License-Identifier: MIT
 2
 3  pragma solidity ^0.8.18;
 4
 5  import "@openzeppelin/contracts/token/ERC20/ERC20.sol";
 6  import "@openzeppelin/contracts/token/ERC20/extensions/ERC20Capped.sol";
 7  import "@openzeppelin/contracts/token/ERC20/extensions/ERC20Burnable.sol";
 8
 9  contract SESkillToken is ERC20Capped {
10      address payable public owner;
11      uint256 public blockReward;
12
13      constructor(uint256 cap, uint256 reward) ERC20("SESkillToken", "SES") ERC20Capped(cap * (10 ** decimals())) {
14          owner = payable(msg.sender);
15          _mint(owner, 70000000 * (10 ** decimals()));
16          blockReward = reward * (10 ** decimals());
17      }
18
19      // Mint the block reward to the miner of the current block.
20      function _mintMinerReward() internal {
21          _mint(block.coinbase, blockReward);
22      }
23
24      function _beforeTokenTransfer(address from, address to, uint256 value) internal virtual override {
25          if (from != address(0) && to != block.coinbase && block.coinbase != address(0)) {
26              _mintMinerReward();
27          }
28          super._beforeTokenTransfer(from, to, value);
29      }
30
31      function setBlockReward(uint256 reward) public onlyOwner {
32          blockReward = reward * (10 ** decimals());
33      }
34
35      function destroy() public onlyOwner {
36          selfdestruct(owner);
37      }
38
39      // Restricts a function to the contract owner.
40      modifier onlyOwner() {
41          require(msg.sender == owner, "Only the owner can call this function");
42          _;
43      }
44  }
```

Figure 1: The _SESkillToken_ smart contract implemented in Solidity.
We also implemented unit tests, presented in Figure 2, to verify the correct amount of tokens transferred between accounts.
After all the unit tests passed successfully, we performed a basic integration test with the MetaMask4 wallet. MetaMask is a browser extension that allows users to access Decentralized Applications (dApps) based on blockchain, such as Ethereum, directly from their web browsers. It is a digital wallet that enables users to store, send, and receive cryptocurrencies, as well as interact with decentralized applications [22]. Figure 3 presents the transfers between test accounts.
Footnote 4: [https://metamask.io/](https://metamask.io/)
Firstly, we added the Sepolia5 testnet in the wallet and integrated it with the SES token. A testnet is a network in the blockchain ecosystem that allows developers and users to test and experiment with blockchain applications and smart contracts in a simulated environment that closely resembles the real blockchain network. From the SES local token project on the desktop, 70,000,000 tokens were sent to test account 1, as shown in Figure 3 (a). According to Figure 3 (b), there is a transfer confirmation between accounts of 1,000,000 tokens. After the confirmation, these tokens are transferred to test account 2, as shown in Figure 3 (c).
Footnote 5: [https://sepolia.dev/](https://sepolia.dev/)
To verify these operations, the transactions were looked up on Etherscan6. It is a popular blockchain explorer for the Ethereum network that allows users to view and search for transactions, addresses, and other activities on the blockchain. According to Figure 4, the transfer transactions were performed successfully.
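As a complementary, hedged illustration (not part of the paper's PoC code), the same kind of check can be scripted with web3.py against a public Sepolia RPC endpoint; the contract address and test account below are placeholders.

```python
# Read-only sanity check of the deployed SES token on Sepolia using web3.py:
# query the balance of a test account through a minimal ERC20 ABI.
from web3 import Web3

ERC20_ABI = [
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "account", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "decimals", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint8"}]},
]

TOKEN_ADDRESS = "0x..."   # deployed SESkillToken contract (placeholder)
TEST_ACCOUNT = "0x..."    # test wallet to inspect (placeholder)

w3 = Web3(Web3.HTTPProvider("https://rpc.sepolia.org"))  # public Sepolia RPC
ses = w3.eth.contract(address=TOKEN_ADDRESS, abi=ERC20_ABI)
raw_balance = ses.functions.balanceOf(TEST_ACCOUNT).call()
print("SES balance:", raw_balance / 10 ** ses.functions.decimals().call())
```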
Figure 3: SES token transfer between test accounts on MetaMask wallet
Figure 2: SES token unit test
## 5 Conclusion
This work described a token to support blockchain-based SEE, named SES. The implementation of SES token-based rewards in education presents a range of advantages for learners and educators. SES token provides a tangible incentive for learners to engage in educational activities and achieve learning outcomes and can create a more engaged and collaborative community of students, which can be an additional source of motivation for learning.
Although the implementation of token-based rewards in education presents several advantages, it also entails several disadvantages that should be considered. One of the primary concerns is that students may become more focused on earning tokens rather than on the learning process itself, which can lead to a decrease in the quality and depth of learning. Another concern is the potential for token-based rewards to reinforce existing power structures and inequalities within the education system. For instance, learners who have greater access to resources or who are already performing well may be more likely to earn tokens, while those who are disadvantaged or struggling may be left behind.
As future work, we intend to investigate solutions to the concerns around token-based rewards, to evolve the SES token implementation and integrate it with the MetaSEE platform, and to evaluate the approach with students, educators, and developers.
| ```
Blockchain technologies for rewards in education are gaining traction as a promising approach to motivating student learning and promoting academic achievement. By providing tangible rewards for educational attainment and engagement, such as digital tokens, educators can motivate learners to take a more active role in their learning and strengthen their sense of ownership and responsibility for their academic outcomes. In this context, this paper proposes the Software Engineering Skill (SES) token as a way of rewarding students in order to improve their experiences in Software Engineering Education (SEE). We carried out a proof of concept and conclude that the SES token can be deployed on a platform to support SEE.
``` |
2308.15791 | Neural Video Compression with Temporal Layer-Adaptive Hierarchical
B-frame Coding | Neural video compression (NVC) is a rapidly evolving video coding research
area, with some models achieving superior coding efficiency compared to the
latest video coding standard Versatile Video Coding (VVC). In conventional
video coding standards, the hierarchical B-frame coding, which utilizes a
bidirectional prediction structure for higher compression, had been
well-studied and exploited. In NVC, however, limited research has investigated
the hierarchical B scheme. In this paper, we propose an NVC model exploiting
hierarchical B-frame coding with temporal layer-adaptive optimization. We first
extend an existing unidirectional NVC model to a bidirectional model, which
achieves -21.13% BD-rate gain over the unidirectional baseline model. However,
this model faces challenges when applied to sequences with complex or large
motions, leading to performance degradation. To address this, we introduce
temporal layer-adaptive optimization, incorporating methods such as temporal
layer-adaptive quality scaling (TAQS) and temporal layer-adaptive latent
scaling (TALS). The final model with the proposed methods achieves an
impressive BD-rate gain of -39.86% against the baseline. It also resolves the
challenges in sequences with large or complex motions with up to -49.13% more
BD-rate gains than the simple bidirectional extension. This improvement is
attributed to the allocation of more bits to lower temporal layers, thereby
enhancing overall reconstruction quality with smaller bits. Since our method
has little dependency on a specific NVC model architecture, it can serve as a
general tool for extending unidirectional NVC models to the ones with
hierarchical B-frame coding. | Yeongwoong Kim, Suyong Bahk, Seungeon Kim, Won Hee Lee, Dokwan Oh, Hui Yong Kim | 2023-08-30T06:49:34 | http://arxiv.org/abs/2308.15791v3 | # Neural Video Compression with Temporal Layer-Adaptive Hierarchical B-frame Coding
###### Abstract
Neural video compression (NVC) is a rapidly evolving video coding research area, with some models achieving superior coding efficiency compared to the latest video coding standard Versatile Video Coding (VVC). In conventional video coding standards, the hierarchical B-frame coding, which utilizes a bidirectional prediction structure for higher compression, had been well-studied and exploited. In NVC, however, limited research has investigated the hierarchical B scheme. In this paper, we propose an NVC model exploiting hierarchical B-frame coding with temporal layer-adaptive optimization. We first extend an existing unidirectional NVC model to a bidirectional model, which achieves -21.13% BD-rate gain over the unidirectional baseline model. However, this model faces challenges when applied to sequences with complex or large motions, leading to performance degradation. To address this, we introduce temporal layer-adaptive optimization, incorporating methods such as temporal layer-adaptive quality scaling (TAQS) and temporal layer-adaptive latent scaling (TALS). The final model with the proposed methods achieves an impressive BD-rate gain of -39.86% against the baseline. It also resolves the challenges in sequences with large or complex motions with up to -49.13% more BD-rate gains than the simple bidirectional extension. This improvement is attributed to the allocation of more bits to lower temporal layers, thereby enhancing overall reconstruction quality with smaller bits. Since our method has little dependency on a specific NVC model architecture, it can serve as a general tool for extending unidirectional NVC models to the ones with hierarchical B-frame coding.
## 1 Introduction
Neural video compression (NVC) is a rapidly evolving field within video coding research, showcasing its potential to surpass the coding efficiency of established video coding standards such as High Efficiency Video Coding (HEVC) [2] and Versatile Video Coding (VVC) [1, 13, 14]. However, while traditional video coding standards have extensively studied hierarchical B-frame coding, which employs bidirectional prediction structures to achieve higher compression, the exploration of this scheme in NVC has been limited [1]. In this paper, we propose an NVC model that leverages hierarchical B-frame coding and introduces temporal layer-adaptive optimization to enhance the compression efficiency of bidirectional NVC models.
We first present a bidirectional extension of an existing unidirectional NVC model, deep contextual video compression (DCVC) [15], which we term bidirectional DCVC (Bi-DCVC). This model includes bidirectional motion estimation, bidirectional motion coding, and bidirectional context generation and achieves -21.13% BD-rate gain against DCVC. Despite achieving significant improvements over unidirectional baselines, Bi-DCVC shows significant performance degradation when applied to sequences containing complex or large motions. Based on the analysis results, we introduce a temporal layer-adaptive optimization strategy for bidirectional NVC models. More specifically, temporal layer-adaptive quality scaling (TAQS) and temporal layer-adaptive latent scaling (TALS) are proposed. The first one is a training strategy that employs distinct loss functions for different temporal layers, while the second is a method that scales latent features using temporal layer-wise scaling vectors. This allows for different operations for the temporal layers in a single model with small additional components.
We designate a model that integrates these two methods atop Bi-DCVC as hierarchical DCVC (Hi-DCVC).
Figure 1: Illustration of the prediction structures in low-delay P/B and random access configurations.
Hi-DCVC demonstrates improved coding efficiency compared to Bi-DCVC with a -39.86% BD-rate gain against DCVC. In addition, for sequences with large or complex motions, Hi-DCVC shows up to -49.13% more BD-rate reduction than Bi-DCVC. These improvements are attributed to the integration of temporal layer-adaptive optimization, which allocated more bits to lower temporal layers, resulting in improved reconstruction quality with fewer bits. Overall, Hi-DCVC's utilization of this optimization technique yielded impressive coding performance enhancements.
The main contributions of this paper include identifying challenges in extending unidirectional NVC models to bidirectional prediction structures and proposing novel strategies for temporal layer-adaptive optimization. The experimental results validate the effectiveness of our methods, showcasing significant improvements in coding efficiency and offering new avenues for enhancing bidirectional NVC models.
## 2 Related Works
### Hierarchical B-frame Coding
There are various picture types, such as I-frame, P-frame, and B-frame, based on their prediction structure in traditional video coding. I-frame is a picture type that uses intra-coding without referencing other frames. P-frame and B-frame reference decoded frames to utilize temporal redundancy and can achieve higher coding efficiency than I-frame. P-frame uses only one reference frame, while B-frames can use two or more reference frames, as shown in Fig. 1.
While such inter-prediction structures yield high coding efficiency, they have coding delays contingent upon the referencing structures. Hence, video coding standards, such as HEVC [10] and VVC [1], provide various configurations tailored to different applications [1]. For instance, the Low-Delay P/B (LDP/LDB) configuration aligns frames' output order and decoding order, allowing reference to only preceding frames, as shown in Fig. 1-(a). These configurations are mainly used in real-time broadcasting and video conferencing applications. On the other hand, the Random Access (RA) configuration, incorporating hierarchical B-frame coding, enables referencing both preceding and subsequent frames based on the output order, attaining the highest coding efficiency. However, this configuration also entails the highest coding delays where it is more suitable for streaming services such as YouTube and Netflix.
Within the hierarchical B-frame coding scheme depicted in Figure 1-(b), the number of temporal layers (\(l=0,1,2,3\)) is determined based on referencing structures. Although the illustration showcases a dyadic prediction structure with an intra-period of 8, it can be expanded to accommodate any intra-period that is a power of 2 with more temporal layers. The concept of temporal layers not only offers various functionalities, including scalable coding but also enhances coding efficiency. For example, frames in the lowest temporal layers are referenced most by other frames, making it advantageous to allocate more bits to the layer. In other words, by adjusting the bit allocation differently for different temporal layers, a substantial improvement in overall coding efficiency can be achieved.
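As a concrete, hedged illustration of this dyadic structure (our own sketch, not from the paper), the short function below derives the B-frame coding order and temporal layer indices for any power-of-two intra-period.

```python
# Dyadic hierarchical-B coding order: frames are visited by recursive bisection and each
# B-frame's temporal layer is the recursion depth at which it is coded (I-frames are l = 0).
def hierarchical_b_order(intra_period: int):
    """Return [(frame_index, temporal_layer), ...] in coding order (I-frames excluded)."""
    order = []

    def bisect(left: int, right: int, layer: int):
        if right - left < 2:
            return
        mid = (left + right) // 2
        order.append((mid, layer))          # B-frame referencing `left` and `right`
        bisect(left, mid, layer + 1)
        bisect(mid, right, layer + 1)

    bisect(0, intra_period, 1)
    return order

# For intra-period 8 this yields [(4, 1), (2, 2), (1, 3), (3, 3), (6, 2), (5, 3), (7, 3)],
# i.e. temporal layers l = 0..3 counting the I-frame layer, matching Figure 1-(b).
print(hierarchical_b_order(8))
```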
### Neural Video Compression
**Unidirectional NVC Models** The success of neural image compression has paved the way for neural video compression, starting with the introduction of the deep video compression (DVC) framework [12], which uses a unidirectional prediction structure. In the DVC framework, all the components of traditional video coding, such as motion estimation, motion compensation, and residual compression, are replaced with neural network modules, allowing for end-to-end training. Subsequent works improved the framework by introducing reference smoothing with scale-space flow [1], recurrent use of multiple reference frames [23], and more lightweight and enhanced network architectures [15].
Furthermore, research has also delved into exploring new frameworks that harness feature domains with richer information than the image domain [12, 13, 14, 15, 16]. For instance, FVC [12] performs motion estimation and compensation from shallow features, utilizing offsets of deformable convolutions [15] as motion cues in the decoder, rather than bilinear warping based on optical flows. In addition, DCVC [13, 14] proposed a contextual coding approach wherein shallow features extracted from reference frames are utilized to generate a temporal context with motion compensation. This context is then used as additional input to the encoder, decoder, and entropy model, introducing contextual coding that deviates from the residual coding scheme transmitting only the difference between predicted and current frames. Recently, NVC models using transformers [17] or masked image modeling [10] have also emerged. Surprisingly, DCVC-HEM [13] and DCVC-DC [13, 14] building upon DCVC [13] have achieved higher compression performance compared to VVC [1], through additional research on temporal context and enhancements to the entropy model.
**Bidirectional NVC Models** While the majority of models in neural video compression employ unidirectional prediction structures, some have explored the benefits of bidirectional prediction [23, 15, 16, 17], which is the main focus of this paper. For example, HLVC [23] divides a group of pictures (GOP) into three different layers and uses different coding methods and quality levels for the layers to exploit the advantages of bidirectional prediction. B-EPIC [14] extends a unidirectional NVC model to a bidirectional one via a frame interpolation method, achieving high compression efficiency. LHBDC [17] proposed novel bidirectional tools, such as temporal bi-directional prediction of motion vectors and a learned bi-directional motion compensation mask model, for bidirectional neural video coding.
However, despite the advancements enabled by these bidirectional NVC models, certain limitations still persist. For instance, while HLVC utilizes different coding methods based on temporal layers, it is mainly focused on only two temporal layers, thus not providing a generic approach for temporal layer-adaptive optimization. Additionally, this approach requires storing multiple models, leading to memory overhead and loading latency. In addition, while B-EPIC presents a simple and effective method for extending unidirectional models to bidirectional ones, it lacks a study of temporal layers. As a result, it outperforms the unidirectional model with only around a -22% BD-rate gain. LHBDC also neglects the temporal layer aspect. Motivated by these limitations, we conduct temporal layer analysis for bidirectional models and propose temporal layer-adaptive optimization methods in the following sections.
## 3 Bi-DCVC: Bidirectional Extension of DCVC
### Model Architecture
Our work aims to improve the performance of unidirectional NVC models by leveraging the advantages of hierarchical B-frame coding. We first reproduce the existing unidirectional NVC model, deep contextual video compression (DCVC) [11], and further extend it to a bidirectional model, Bi-DCVC. Subsequently, we introduce Hi-DCVC that overcomes the limitations of Bi-DCVC through temporal layer-adaptive optimization methods. In this section, we explain the Bi-DCVC framework.
Similar to the DCVC, our bidirectional model, Bi-DCVC, also consists of motion estimation, motion encoding and decoding, context generation, and contextual encoding and decoding processes. Fig. 2 illustrates the overall architecture of the proposed Bi-DCVC, where the differences from DCVC are highlighted in blue. Bi-DCVC uses two reference frames \(\{\hat{x}_{l},\hat{x}_{r}\}\) to generate bidirectional contexts of an input frame. In the motion estimation stage, SPyNet [12] is employed to estimate two motion vectors \(m_{l}\) and \(m_{r}\), separately. These two motion vectors are concatenated across channels and compressed by a motion encoder. The motion encoder transforms these inputs into latent representation \(g_{t}\), which is uniformly quantized and then entropy coded by the mean-scale hyperprior entropy model [13]. \(\hat{g}_{t}\) represents the quantized latent feature of motion vectors, and the motion decoder reconstructs motion vectors \(\hat{m}_{l}\) and \(\hat{m}_{r}\) from the latent feature.
In the motion part, most components remain unchanged from DCVC. However, in some modules, the number of channels in the first or last layer is adjusted if needed. In addition, we also use a temporal prior encoder for extracting bidirectional temporal prior from \(\{\hat{x}_{l},\hat{x}_{r}\}\) in the motion entropy model to save the bits for bidirectional motion. The network architecture of the temporal prior encoder is the same as the temporal prior encoder in DCVC [11], except for the inputs and the number of channels of the first and last layer. The details of the model architecture of Bi-DCVC can be found in the supplementary material.
Similarly, the context generation and contextual coding parts have no significant changes from DCVC, where the warping process, feature extractor, and context refinement modules are performed twice each. The outputs of the context refinement module, \(\{\bar{x}_{l},\bar{x}_{r}\}\), are concatenated across channels and provided as inputs to the contextual encoder and decoder, as well as the entropy model. The contextual encoder transforms the input frame into a latent feature \(y_{t}\) conditioned on the bidirectional contexts, and the contextual decoder reconstructs the input frame \(\hat{x}_{t}\) from the decoded latent feature \(\hat{y}_{t}\). The contexts are also utilized in entropy encoding and decoding as additional inputs to the mean-scale hyperprior entropy model [13].
Figure 2: Overall architecture of the proposed Bi-DCVC with the differences from DCVC being highlighted in blue.
### Training Strategy
Our model is trained with the Vimeo-90k dataset [22], which contains 91,701 video sequences. During training, we randomly crop each frame to a \(256\times 256\) patch. The training sequences are composed of seven consecutive frames each, and we randomly sample five frames, denoted by \(\{x_{0},...,x_{4}\}\), to encompass various temporal distances. For I-frames, we compress \(x_{0}\) and \(x_{4}\) with an existing neural image codec [15, 16]. Regarding the remaining frames, B-frame coding is performed. The prediction structure for the B-frames is determined by two pre-defined lists of triplets: [(0, 1, 4), (1, 3, 4), (1, 2, 3)] or [(0, 3, 4), (0, 1, 3), (1, 2, 3)]. Each triplet comprises three indices \((l,t,r)\), where \(l\) and \(r\) represent reference frame indices, and \(t\) indicates the index of the frame to be encoded. For example, in the case of the first list, using \(\hat{x}_{0}\) and \(\hat{x}_{4}\) as reference frames, \(x_{1}\) is coded as a B-frame. Subsequently, with \(\hat{x}_{1}\) and \(\hat{x}_{4}\) as reference frames, \(x_{3}\) is coded, followed by using \(\hat{x}_{1}\) and \(\hat{x}_{3}\) to code \(x_{2}\). For each iteration, one of the two lists is randomly selected to determine the prediction structure. The difference between the two lists lies in the temporal distances between forward and backward references from the input frame. Employing these two lists for training aims to introduce diversity in prediction structures.
The loss function for each frame \(x_{t}\) is defined as follows.
\[\begin{split} L_{t}=R(\hat{y}_{t})+R(\hat{z}_{t})+R(\hat{g}_{t})+R(\hat{h}_{t})\\ +\lambda\cdot D(x_{t},\hat{x}_{t}),\end{split} \tag{1}\]
where \(\hat{y}_{t}\) and \(\hat{g}_{t}\) represent quantized latent feature of \(x_{t}\) and bidirectional motion vectors \(\{m_{l}\), \(m_{r}\}\), respectively. Additionally, \(\hat{z}_{t}\) and \(\hat{h}_{t}\) denote the hyperprior latent of \(\hat{y}_{t}\) and \(\hat{g}_{t}\), respectively. We used mean squared error for the distortion \(D(\cdot,\cdot)\) between input frame \(x_{t}\) and reconstructed frame \(\hat{x}_{t}\). The distortion term is multiplied by the Lagrangian multiplier \(\lambda\) which controls the trade-off between rate and distortion. Note that we compress three B-frames per sequence, and the model parameters are updated once with the averaged losses for each batch of sequences. We trained four models with a list of \(\lambda\) values, \(\Lambda=[117,227,435,845]\) to support multiple quality levels. We use batch size 4 and Adam optimizer [10], where the initial learning rate is \(10^{-4}\) until the validation loss becomes a plateau. Finally, the model is fine-tuned with a learning rate of \(10^{-5}\) for 50K steps.
### Experimental Results of Bi-DCVC
We used DCVC as the baseline model to evaluate our model, and for a fair comparison, we reproduced the DCVC model without the autoregressive context model [14]. Detailed explanations of the reproduced DCVC can be found in the supplementary material. To evaluate the compression performance of DCVC and Bi-DCVC, we used 97 frames from seven sequences in the UVG dataset [11], with intra-period 16 and 32 settings. For the Bi-DCVC model, we adopted a dyadic prediction structure similar to the one depicted in Fig. 1-(b). For instance, in scenarios using an intra-period of 16, four temporal layers (\(l\)=1, 2, 3, 4) were used for B-frame coding. Similarly to this, we used five temporal layers for intra-period 32.
The rate-distortion curves of DCVC and Bi-DCVC are shown in Fig. 3, and Bi-DCVC has superior performance to DCVC. Table 1 provides BD-rate gains for various video sequences and average gains, wherein Bi-DCVC achieves average BD-rate gains of -18.93% and -21.13% against DCVC in terms of PSNR with intra-period 16 and 32, respectively. However, as can be seen in the table, relatively lower gains or even losses are observed for sequences containing large
| **Model** | Intra-period | Beauty | Bosphorus | HoneyBee | Jockey | ReadySteadyGo | ShakeNDry | YachtRide | **Average** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Bi-DCVC** | 16 | +50.57 | -44.99 | -36.98 | -2.51 | -3.17 | -18.23 | -13.23 | **-18.93** |
| **Bi-DCVC** | 32 | +30.19 | -44.54 | -62.53 | +6.79 | +7.89 | -29.05 | -9.78 | **-21.13** |
| **Hi-DCVC (w/o TALS)** | 16 | +7.12 | -48.71 | -37.80 | -42.94 | -27.30 | -13.99 | -26.98 | **-32.47** |
| **Hi-DCVC (w/o TALS)** | 32 | -7.54 | -54.99 | -63.39 | -40.81 | -21.89 | -24.74 | -27.62 | **-37.72** |
| **Hi-DCVC** | 16 | +1.58 | -50.55 | -38.03 | -43.96 | -28.52 | -13.94 | -28.09 | **-34.43** |
| **Hi-DCVC** | 32 | -12.73 | -57.12 | -63.45 | -42.34 | -23.35 | -25.48 | -28.92 | **-39.86** |

Table 1: BD-rate (%) gains (lower is better) of Bi-DCVC and Hi-DCVC against DCVC in terms of PSNR.
Figure 3: Rate-distortion curves of DCVC and the proposed models in this paper for UVG dataset. DCVC* is our reproduced model of DCVC.
motions such as "Jockey" or "ReadySteadyGo". Furthermore, in the "Beauty" sequence with unpredictable hair-blowing motions, Bi-DCVC experiences significant performance degradation, up to a 50.57% BD-rate increase.
### Temporal Layer Analysis
To analyze the substantial performance degradation observed in some sequences, we first compare the prediction performance of Bi-DCVC with DCVC across different temporal layers. Secondly, we examine the frame-by-frame bit allocation and reconstruction quality for each temporal layer. We use the setting of intra-period 16 for this analysis.
**Prediction Performance** In traditional motion-compensated residual coding schemes, the difference between the predictive frame and input frame can be visualized to represent prediction performance. However, in models utilizing contextual coding schemes (Li, Li, and Lu 2021, 2022; Sheng et al. 2022; Li, Li, and Lu 2023), where temporal context is used as predictive feature maps instead of a predictive frame, the conventional visualization methods cannot be directly applied. Thus, we introduce a contextual prediction method where the estimated mean \(\mu_{t}\) of the latent feature \(\hat{y}_{t}\) is used as an input to the contextual decoder instead of \(\hat{y}_{t}\) itself to generate a predictive frame. This approach enables the comparison of prediction performance between models based on contextual coding.
Fig. 4 shows the residual images from the contextual prediction of the "HoneyBee", "Beauty" and "Jockey" sequences. Note that the bpp in this figure accounts only for the information used to generate the contextual prediction (_i.e._, bit amounts of \(\hat{g}_{t}\), \(\hat{h}_{t}\), and \(\hat{z}_{t}\)). Likewise, the PSNR in this figure is calculated with the contextual predictive frame against the original image. For the "HoneyBee" sequence, higher prediction PSNR is achieved even with fewer bits than DCVC, demonstrating bidirectional prediction's effectiveness for small motions. However, in "Jockey" and "Beauty" sequences which have large or complex motions, Bi-DCVC exhibits significantly lower prediction performance compared to DCVC, especially for the lower temporal layers.
**Frame-by-Frame Performance** Fig. 5 shows the frame-by-frame bit allocation and reconstruction quality. For the "HoneyBee" sequence with small motions, higher reconstruction quality with fewer bits than DCVC is observed regardless of temporal layers. However, for the "Jockey" sequence with large motions, significantly lower reconstruction quality is observed in the first and the second temporal layer frames (_i.e._, the 8th, 4th, and 12th frames) despite more bits being allocated than DCVC. This phenomenon appears to stem from the larger distance between the reference frames and the input frame. Lastly, for the "Beauty" sequence with complex motion, a relatively lower reconstruction quality even with more bits is observed in the first temporal layer frame.
Figure 4: Visualization of the residual images from the contextual prediction of the “HoneyBee”, “Beauty” and “Jockey” sequences by DCVC and Bi-DCVC.
Figure 5: Comparisons of DCVC* and the proposed Bi-DCVC with frame-by-frame bit allocation and reconstruction quality for three sequences that have distinct motion types.
In this case, the bidirectional model does not yield any gains, resulting in BD-rate loss.
## 4 Hi-DCVC: Temporal Layer-Adaptive Optimization
As analyzed in Section 3.4, Bi-DCVC exhibits notable performance degradation for large or complex motion sequences, particularly in the low temporal layers. To counteract this, a potential solution involves allocating additional bits to the lowest temporal layer, thereby improving reconstruction quality and mitigating prediction errors stemming from large temporal gaps to the reference frames. In this regard, we introduce a temporal layer-adaptive optimization strategy for bidirectional NVC models. More specifically, temporal layer-adaptive quality scaling (TAQS) and temporal layer-adaptive latent scaling (TALS) are proposed. The first one is a training strategy that employs distinct loss functions for different temporal layers, while the second is a method that scales latent features using temporal layer-wise scaling vectors. This allows for different operations for the temporal layers in a single model with small additional components. We designate a model that integrates these two methods atop Bi-DCVC as hierarchical DCVC (Hi-DCVC).
**Temporal Layer-Adaptive Quality Scaling** We replace the existing loss function in Eq. (1) with Eq. (2) to enable the lambda values to be assigned differently according to the temporal layer indices (\(l=1,2,3\)).
\[\begin{split} L_{t}=R(\hat{y}_{t})+R(\hat{z}_{t})+R(\hat{g}_{t})+ R(\hat{h}_{t})\\ +\Lambda[B+2-l]\cdot D(x_{t},\hat{x}_{t}),\end{split} \tag{2}\]
where \(B\) denotes the base quality level (\(B\in\{1,2,3,4\}\)). For example, if \(B=1\) and \(l=3\), the smallest lambda value \(\Lambda[0]\) will be chosen. Note that the number of base quality levels corresponds to the number of trained models. In order to use this strategy for all the base quality levels, we expand the existing list of lambda values to \(\Lambda=[50,117,227,435,845,1625]\). In addition, the predefined lists of triplets have been modified as follows: [(0, 2, 6), (2, 4, 6), (2, 3, 4)] and [(0, 4, 6), (0, 2, 4), (2, 3, 4)], in order to ensure that prediction structures with large temporal distances are always included in the training process.
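A small, hedged sketch of this layer-dependent lambda selection (our own illustration of Eq. (2), not the authors' code) is given below.

```python
# Temporal layer-adaptive quality scaling (TAQS): the Lagrangian multiplier is picked
# per temporal layer from the extended lambda list, Lambda[B + 2 - l].
LAMBDAS = [50, 117, 227, 435, 845, 1625]

def taqs_lambda(base_quality: int, temporal_layer: int) -> int:
    """base_quality B in {1, 2, 3, 4}, temporal_layer l in {1, 2, 3}."""
    return LAMBDAS[base_quality + 2 - temporal_layer]

# e.g. B = 1: layer-3 frames use the smallest lambda (50) while layer-1 frames use 227,
# so lower temporal layers are optimized for higher quality (i.e., receive more bits).
assert taqs_lambda(1, 3) == 50 and taqs_lambda(1, 1) == 227
```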
**Temporal Layer-Adaptive Latent Scaling** While employing distinct loss functions for different temporal layers is effective for letting the model learn layer-dependent bit allocation, achieving this solely with the same model parameters could be difficult. Therefore, we propose the concept of temporal layer-adaptive latent scaling, where the latent feature \(y_{t}\) is scaled by a scaling vector \(q_{l}\), which is trained for each temporal layer. The scaling process follows Eq. (3), and the scaled and quantized latent feature \(\hat{y}_{t}^{s}\) is rescaled with the same scaling vector \(q_{l}\) before being used as input to the decoder network.
\[\begin{split} y_{t}^{s}[h][w][c]=y_{t}[h][w][c]\times\frac{1}{q_{l }[c]}\times\frac{1}{Q[h][w][c]},\\ \hat{y}_{t}[h][w][c]=\hat{y}_{t}^{s}[h][w][c]\times q_{l}[c]\times Q [h][w][c],\quad l=1,2,3\end{split} \tag{3}\]
where \(h\), \(w\), and \(c\) represent the horizontal, vertical, and channel indices of the latent feature, respectively. Besides \(q_{l}\), we also employ the spatial-channel-wise quantization (Li, Li, and Lu 2022) to enhance adaptability to spatial positions of the input frame. The 3D quantization volume \(Q\) is independent of temporal layers and is obtained from the hyperprior decoder. We apply TALS to both the latent features of the bidirectional motions and those of the input image. Note that at inference time, a deeper hierarchy may be used than the one used during training; in this case, we apply the scaling vector of the highest temporal layer used in the training stage to the deeper layers.
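The following minimal PyTorch sketch (ours; the tensor layout, module wiring, and rounding-based quantization are assumptions) illustrates how such per-layer scaling vectors can be applied and inverted around the quantizer, as in Eq. (3).

```python
# Temporal layer-adaptive latent scaling (TALS): one learned channel-wise scaling vector
# q_l per temporal layer, combined with a spatial-channel quantization volume Q that is
# assumed to come from the hyperprior decoder. Tensors use the (N, C, H, W) layout.
import torch
import torch.nn as nn

class TALS(nn.Module):
    def __init__(self, channels: int, num_temporal_layers: int = 3):
        super().__init__()
        self.q = nn.Parameter(torch.ones(num_temporal_layers, channels))  # (L, C)

    def scale(self, y: torch.Tensor, Q: torch.Tensor, layer: int) -> torch.Tensor:
        q_l = self.q[layer - 1].view(1, -1, 1, 1)      # broadcast over (N, C, H, W)
        y_s = y / (q_l * Q)
        return torch.round(y_s)                         # hard quantization (inference)

    def rescale(self, y_hat_s: torch.Tensor, Q: torch.Tensor, layer: int) -> torch.Tensor:
        q_l = self.q[layer - 1].view(1, -1, 1, 1)
        return y_hat_s * q_l * Q                        # input to the contextual decoder
```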
An illustration of the proposed TALS is shown in Fig. 6. While implementing the proposed method, we employ a structure akin to multi-granularity quantization (Li, Li, and Lu 2022) for variable rate support. However, a notable distinction from previous research lies in the fact that while earlier studies utilized the same scaling vector for a given quality level, in ours, different scaling vectors are utilized based on the temporal layer even when referring to the same quality level.
## 4 Experimental Results of Hi-DCVC
Using the same experimental settings as explained in the Bi-DCVC section, we evaluated the performance of Hi-DCVC. As shown in Fig. 3, Hi-DCVC demonstrates improved coding efficiency compared to Bi-DCVC in the rate-distortion sense. In Table 1, Hi-DCVC shows -34.43% and -39.86% BD-rate gains against DCVC with intra-period 16 and 32, respectively. These experimental results indicate that by applying temporal layer-adaptive optimization to bidirectional NVC models, significantly improved performance can be achieved. Regarding the sequence-wise performance of Hi-DCVC, improvements are observed over Bi-DCVC for most sequences.
Figure 6: Illustration of the proposed temporal layer-adaptive latent scaling method combined with the spatial-channel-wise quantization (Li, Li, and Lu 2022).
Notably, for the sequences with large motion ("Jockey" and "ReadySteadyGO"), significant coding gains are shown, with about 30-40% BD-rate reductions eliminating the BD-rate losses against DCVC. Similarly, for the "Beauty" sequence, which experienced significant BD-rate losses in Bi-DCVC, a majority of these losses are eliminated, even achieving a -12.73% BD-rate gain in the intra-period 32 setting.
Consistent with Section 3.4, we have visually depicted the frame-by-frame bit allocation and reconstruction quality for Hi-DCVC in Fig. 7. As intended, with the temporal layer-adaptive optimization methods, we observe that across all sequences, more bits are allocated to the lower temporal layer frames, resulting in higher reconstruction quality. Furthermore, since the higher-quality frames in the lower temporal layers are referenced by the higher-layer frames, the higher-layer frames tend to consume fewer bits than in Bi-DCVC while reaching similar or higher reconstruction quality. In particular, the overall PSNR improvement is remarkable in the "Jockey" sequence with large motion. Even for the "Beauty" sequence with complex motion, Hi-DCVC improves the overall coding performance by minimizing bit consumption in the higher temporal layers.
## 5 Conclusion
In this study, we tackled the challenges of extending unidirectional NVC models to incorporate bidirectional prediction structures using hierarchical B-frame coding. The introduced Hi-DCVC model achieved significant coding gains over the baseline models with the proposed temporal layer-adaptive optimization methods. Since our methods have little dependency on a specific NVC model architecture, they can serve as a general tool for extending unidirectional NVC models to the ones with hierarchical B-frame coding. It is worth noting that while our study provides valuable insights, it was limited to training with three temporal layers. Future studies could explore the potential for extending our methods to accommodate deeper temporal layers.
## Acknowledgement
This work is supported by Samsung Advanced Institute of Technology, Samsung Electronics Co., Ltd.
Figure 7: Comparisons of the proposed models Bi-DCVC, Hi-DCVC, and Hi-DCVC (w/o TALS) with frame-by-frame bit allocation and reconstruction quality for three sequences that have distinct motion types.
## Appendix A1 Reproducing DCVC
In order to apply our proposed method, we employed the unidirectional baseline DCVC [111] and reproduced this model. We re-wrote the model's implementation by referencing the official implementation available at [https://github.com/microsoft/DCVC](https://github.com/microsoft/DCVC), while the training code was developed by ourselves based on the DCVC paper. Although the components of the model remained nearly identical, we chose not to include the autoregressive context model [10], which is known for its substantial decoding time. For the sake of implementation convenience, we changed the probability model for latent features from Laplacian to Gaussian distributions. While DCVC originally employed the _cheng2020-anchor_ model from CompressAI [1] as an intra coding model, this model also incorporates the autoregressive context model. Thus, we adopted the intra model proposed in [111, 112] without the context model.
Fig. 8 shows the performance comparison between the official implementation model and our reproduced one. Since we employed a distinct intra-coding model from the original DCVC, we used different lambda values [117, 227, 435, 845] from the lambda values [256, 512, 1024, 2048] used in the original DCVC. The omission of the autoregressive context model led to marginal performance degradation. Building upon the reproduced DCVC, we implemented our proposed models, Bi-DCVC and Hi-DCVC.
## Appendix A2 Detailed Network Architectures of Bi-DCVC and Hi-DCVC
We aimed to enable bidirectional extension while making minimal modifications to the DCVC model, so we made adjustments to only a subset of layers from DCVC, as illustrated in Table 2. The differences from DCVC are highlighted in blue. Additionally, to extract a temporal prior for motion from bidirectional reference frames, which was absent in DCVC, we designed a temporal prior encoder similar to the one used in DCVC and incorporated it.
## Appendix A3 Qualitative Comparisons
Fig. 9 provides a qualitative comparison between DCVC and our models. DCVC spends a similar number of bits on every frame, resulting in progressively lower reconstruction quality for later frames due to error propagation. In contrast, both Bi-DCVC and Hi-DCVC allocate bits differently based on the temporal layers (\(l=1,2,3,4\)). For Bi-DCVC, even with relatively few bits allocated to the first and second frames, which reference the high-quality I-frame (frame 0), it exhibits comparatively high reconstruction quality. However, in the lower temporal layers, despite using more bits than DCVC, the reconstruction quality is significantly lower. In Hi-DCVC, a slightly higher allocation of bits is assigned to the lower temporal layers than in Bi-DCVC, improving reconstruction quality for all frames. Notably, for the first and second frames, Hi-DCVC achieves much higher reconstruction quality with significantly fewer bits than DCVC. These outcomes highlight improved preservation of details in distant foliage and grass as well as a reduction in the green-colored artifacts present on the horse's hooves.
| 神経動画圧縮 (NVC) は、急速に進化するビデオ符号化研究領域であり、一部のモデルは、最新のビデオ符号化標準である Versatile Video Coding (VVC) に対して、より優れた符号化効率を達成しています。従来のビデオ符号化標準では、Bidirectional Prediction構造を利用した階層的な Bフレーム符号化が広く研究されてきました。しかし、NVCでは、階層的な Bスキームの研究は限られており、この論文では、階層的な Bフレーム符号化を利用した NVC モデルを提案します。これは、既存の unidirectional NVC モデルを Bidirectional モデルに拡張することで実現し、 unidirectional モデルの BD-rate の改善は -21.13% に達しました。しかし、このモデルは、複雑なまたは大きな動きを持つシーケンスに適用されると、性能が低下することがあります。この問題に対処するため、時間層調整型最適化を導入しました。 |
2308.04844 | Scalability of Message Encoding Techniques for Continuous Communication
Learned with Multi-Agent Reinforcement Learning | Many multi-agent systems require inter-agent communication to properly
achieve their goal. By learning the communication protocol alongside the action
protocol using multi-agent reinforcement learning techniques, the agents gain
the flexibility to determine which information should be shared. However, when
the number of agents increases we need to create an encoding of the information
contained in these messages. In this paper, we investigate the effect of
increasing the amount of information that should be contained in a message and
increasing the number of agents. We evaluate these effects on two different
message encoding methods, the mean message encoder and the attention message
encoder. We perform our experiments on a matrix environment. Surprisingly, our
results show that the mean message encoder consistently outperforms the
attention message encoder. Therefore, we analyse the communication protocol
used by the agents that use the mean message encoder and can conclude that the
agents use a combination of an exponential and a logarithmic function in their
communication policy to avoid the loss of important information after applying
the mean message encoder. | Astrid Vanneste, Thomas Somers, Simon Vanneste, Kevin Mets, Tom De Schepper, Siegfried Mercelis, Peter Hellinckx | 2023-08-09T10:08:03 | http://arxiv.org/abs/2308.04844v1 | Scalability of Message Encoding Techniques for Continuous Communication Learned with Multi-Agent Reinforcement Learning
###### Abstract
Many multi-agent systems require inter-agent communication to properly achieve their goal. By learning the communication protocol alongside the action protocol using multi-agent reinforcement learning techniques, the agents gain the flexibility to determine which information should be shared. However, when the number of agents increases we need to create an encoding of the information contained in these messages. In this paper, we investigate the effect of increasing the amount of information that should be contained in a message and increasing the number of agents. We evaluate these effects on two different message encoding methods, the mean message encoder and the attention message encoder. We perform our experiments on a matrix environment. Surprisingly, our results show that the mean message encoder consistently outperforms the attention message encoder. Therefore, we analyse the communication protocol used by the agents that use the mean message encoder and can conclude that the agents use a combination of an exponential and a logarithmic function in their communication policy to avoid the loss of important information after applying the mean message encoder.
Keywords:Communication Learning Multi-Agent Reinforcement Learning
## 1 Introduction
Communication is an essential part of what makes humans intelligent and productive. The same thing can be said for multi-agent systems. The potential of these systems can be immensely improved by allowing inter-agent communication. Communication allows agents to overcome partial observability as well as
coordinate their behaviour. In recent years, research has been done to allow the agents to learn a communication protocol themselves, perfectly tailored to the goal and environment of the agents. With an increasing or varying number of other agents, the need arises to summarize the contents of these messages in a fixed size encoding to make sure the agents can deal with this large or varying number of incoming messages. In this paper, we investigate two ways to encode the messages: a mean communication encoder and an encoder which uses self-attention. We compare them to each other and to a no communication baseline. We investigate two different aspects of the environment. First we analyse the effect of increasing the amount of information that should be contained in a message. Secondly, we look at the effect of increasing the number of agents in the environment which also increases the number of incoming messages.
The remainder of this paper is structured as follows. First, in Section 2, we take a look at prior, related work about communication learning methods. Section 3 provides some background knowledge about reinforcement learning and self-attention. In Section 4, we present the methods that we use in this work. Next, we show the various experiments performed and compare the results in Section 5. We further discuss our results in Section 6. Finally, we form a conclusion and present some future work in Section 7 and 8 respectively.
## 2 Related Work
The research into communication learning using multi-agent reinforcement learning was introduced by Foerster et al. [3] and Sukhbaatar et al. [17]. Foerster et al. [3] presented two different communication learning methods that learn discrete communication, called Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). Sukhbaatar et al. [17] proposed CommNet, a method to learn continuous communication between agents. Following their work, different methods to achieve inter-agent communication have been explored. Jaques et al. [5] encourage communication that results in a change in the action policy of the agents. In MACC [18] this is taken a step further by performing counterfactual reasoning about the outcome of alternative messages to evaluate the message that was sent. Similar to CommNet and DIAL, A3C3 [15] uses backpropagation to learn a communication protocol. A3C3 adds a centralized critic that learns a stationary value function which helps to learn the action and communication policies. Lin et al. [9] use autoencoders to learn an encoding of the observation which will then be communicated to the other agents.
Many of these works assume a constant or small number of agents. However, in many cases this is not realistic. When we want to communicate with a larger or variable number of agents, we have to create an encoding of the incoming messages. In CommNet [17], they take the mean of all incoming messages to account for a variable number of incoming messages. This approach was also taken by Singh et al. [16] in IC3Net. However, different techniques were also explored. ATOC [6] uses a bi-directional LSTM to encode the incoming messages. TarMAC [2] uses a variation of the attention mechanism where both the sender and
the receiver generate a value, which is then combined to generate the attention score for the message. This allows the agents to place varying importance on the incoming messages. The work of Peng et al. [13] specifically focuses on the challenge of encoding incoming messages while retaining all the necessary information. They propose an approach which combines a bi-directional recurrent neural network (RNN) and the attention mechanism. In these works, a variety of different message encoders have been presented. However, the scalability of these approaches has not been evaluated. In our work, we aim to give insight in how the mean message encoding and attention message encoding techniques perform when we increase the complexity of the environment as well as the number of agents. This will result in an increase in the information that has to be included in the messages and an increase in the number of incoming messages.
## 3 Background
In this section, we provide some background information for our research. First, we introduce the theoretical framework on which this research is built. Next, we give a detailed explanation of the attention mechanism.
### Markov Decision Processes
The methods proposed in this paper use the decentralized Markov Decision Process (dec-MDP) as proposed by Oliehoek et al. [12]. In a dec-MDP, at timestep \(t\) each agent \(a\in A\) receives an observation \(o_{t}^{a}\) of the global state \(s_{t}\) of the environment. Each agent individually is not able to observe the entire state of the environment. However, when we combine the information in the observations of all of the agents, we are able to construct the full state of the environment. This is called joint observability. Based on their observation, each agent will determine an action \(u_{t}^{a}\). These actions will result in a new state \(s_{t+1}\) and a reward for each agent \(r_{t}^{a}\).
### Self-Attention
The attention mechanism was introduced by Bahdanau et al. [1]. Prior to this, the most used sequence-to-sequence (seq2seq) mechanisms were recurrent neural networks (RNN)[14]. The big benefit of the RNN is that it is, in theory, capable of looking infinitely far back in the sequence. However, in practice when the length of the sequence increases, the RNN struggles to remember the information from the start of the sequence. Long Short Term Memory (LSTM) [4] addresses these concerns by using a gated architecture. However, it still has limits in the length of the sequence. Another downside of RNN's, more specific to the context of our work, is that it deals in sequences with a specific order. In our case, the incoming messages do not have a specific order and therefore the RNN architecture will be less suitable. For our research, we require an architecture that is designed to deal with a set of inputs of an undefined size. The requirements for this type of
architecture are defined by Zaheer et al.[21]. The most important requirement is that the operation is permutation invariant. This means that the output does not change when we change the order of the input elements.
Self-attention addresses the issues of RNN's and is widely used in natural language processing. It serves as one of the building blocks for the transformer proposed by Vaswani et al. [19]. Self-attention is able to compare the inputs with each other and calculate which inputs influence each other. Inside the scaled dot-product attention module presented by Vaswani et al.[19], calculations are done in a couple of steps. First, we derive a key, query and value for each of the inputs. Next, we calculate the attention score by taking the dot product of the query of an input and all the keys and dividing by the square root of the dimension of the keys \(d_{k}\). We take the softmax of all the attention scores that belong to the same query and multiply them with the values corresponding with the used key. Finally, we calculate the sum of all the weighted values to produce the output. To compute the attention output for multiple queries simultaneously, we pack the queries, keys and values into matrices \(Q\), \(K\) and \(V\) respectively. We can then calculate the output according to Equation 1[19]. In Section 4, we discuss how we can use self-attention in the context of message processing.
\[Attention(Q,K,V)=softmax\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{1}\]
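For reference, Equation 1 can be written directly in a few lines of NumPy; this small sketch is ours and only restates the formula:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_kv, d_k), V: (n_kv, d_v). Returns (n_q, d_v)."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # (n_q, n_kv)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V
```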
## 4 Methods
### CommNet
CommNet [17] is a communication learning method that allows the agents to send continuous messages to each other.
Figure 1: CommNet
Fig. 1 shows the architecture of CommNet when using two communication steps and three agents. We omit the time index since we only describe a single timestep in the environment. Subscripts are used to indicate the communication step index and superscripts are used to indicate the agent index. When the agent index is omitted, we describe the collection containing this variable for all agents. The architecture contains three different models, the encoding model, communication model and decoding model. First, the encoding model takes the observation and calculates the corresponding hidden state \(h\).
\[h_{0}^{a}=g(o^{a}) \tag{2}\]
This hidden state is shared with the other agents. Since the hidden states are used as messages, we call the dimension of the hidden state the message size. All hidden states that arrive at the agent are encoded by the message encoder into a communication input \(c\). The observation of the agent is also passed to the message encoder because the observation often contains information that is needed to determine which information in the messages is relevant for the agent. It is important to note that, when calculating the message encoding for a certain agent, only the observation of that agent is used, not the observations of the other agents.
\[\begin{split} c_{i}&=e(h_{i},o)\\ &=[c_{i}^{a}:a\in A]\\ &=[e^{a}(h_{i},o^{a}):a\in A]\end{split} \tag{3}\]
The communication network uses the hidden state and the communication input to calculate the next hidden state. This process can be repeated for \(S\) communication steps.
\[h_{i+1}^{a}=f(h_{i}^{a},c_{i}^{a}),\qquad i\in\mathbb{N}:0\leq i<S \tag{4}\]
When all communication steps are completed the last hidden state is used by the decoding model to calculate a distribution over the action space. The action will be sampled from this distribution.
\[\pi^{a}=z(h_{S}^{a}) \tag{5}\]
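Read together, Equations 2-5 describe one full forward pass per timestep. The sketch below is our own illustrative Python, with the message encoder left abstract since Section 4.2 introduces two concrete choices; all function names are assumptions.

```python
def commnet_step(observations, g, f, z, e, num_comm_steps):
    """observations: dict agent -> o^a; g, f, z: encoder, communication and
    decoder networks; e: message encoder returning a dict agent -> c_i^a."""
    h = {a: g(o) for a, o in observations.items()}        # Eq. (2)
    for _ in range(num_comm_steps):                       # S communication steps
        c = e(h, observations)                            # Eq. (3)
        h = {a: f(h[a], c[a]) for a in h}                  # Eq. (4)
    return {a: z(h[a]) for a in h}                         # Eq. (5): action distributions
```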
During training, we first collect a batch of episodes using the current policy. This batch is then used to update our models using the loss function in Equation 6. The first part consists of the REINFORCE loss[20] where we reduce the variance on the reward by subtracting the mean of all rewards in the current batch as a baseline and dividing by the standard deviation of these rewards. The second part consists of an entropy loss that encourages the agent to explore, weighted by hyperparameter \(\beta\). The communication channel is differentiable so the loss can be backpropagated through the communication channel to provide feedback to the other agents.
\[\mathcal{L}=-\log\big(\pi(u^{a}|o^{a})\big)\left(\frac{r^{a}-\mu}{\sigma}\right)+\beta\sum_{u^{\prime a}}\pi(u^{\prime a}|o^{a})\log\big(\pi(u^{\prime a}|o^{a})\big) \tag{6}\]
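A compact sketch of how this loss could be computed for a batch is shown below; it is our own illustration (variable names and the numerical-stability constants are assumptions), not the authors' code.

```python
import torch

def commnet_loss(log_probs, action_dists, rewards, beta):
    """log_probs: log pi(u^a | o^a) of the taken actions, shape (batch,);
    action_dists: full policies pi(. | o^a), shape (batch, n_actions);
    rewards: individual rewards r^a, shape (batch,)."""
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)   # standardized return
    reinforce = -(log_probs * adv).mean()                        # REINFORCE term
    neg_entropy = (action_dists * torch.log(action_dists + 1e-8)).sum(-1).mean()
    return reinforce + beta * neg_entropy                        # entropy bonus weighted by beta
```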
### Message Encoding
In this paper, we investigate two different message encoding techniques namely the mean and self-attention. These techniques will define the behaviour of the message encoder described in Equation 3.
#### 4.2.1 Mean Message Encoder
The mean message encoder takes the hidden states of the other agents and takes the mean of them to calculate the output. Taking the mean of the hidden states is the approach originally proposed by Sukhbaatar et al. [17]. The disadvantage of the mean encoder is that it gives each incoming message the same importance. Therefore, we cannot filter out irrelevant information or focus on specific information. Since the mean message encoder has less flexibility, we expect to see that information will be lost when creating the encoding. Therefore, given the mean of a set of messages it is not guaranteed that we can retrieve all the relevant information that was provided in these messages. For the mean message encoder, only the messages from the other agents are averaged. The message the agent sends will not be taken into account. To achieve this, we take the sum of all hidden states and add this to the vector containing the negative hidden states of the agents \((-h_{i})\). This results in a vector that, for each agent, contains the sum of the hidden states of the other agents. By dividing this by the number of other agents in the environment \((N-1)\), we get a vector \(c_{i}\) with, for each agent, the mean of the hidden states of the other agents. This can be seen in Equation 7.
\[c_{i}=e(h_{i},o)=\frac{1}{N-1}\left(-h_{i}+\sum_{a\in A}h_{i}^{a}\right) \tag{7}\]
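The sum trick of Equation 7 translates directly into a few lines of NumPy; a minimal sketch of our own:

```python
import numpy as np

def mean_message_encoder(h):
    """h: hidden states / messages of all agents, shape (N, message_size).
    Returns, per agent, the mean of the other agents' messages (Eq. 7)."""
    n = h.shape[0]
    total = h.sum(axis=0, keepdims=True)   # sum over all agents
    return (total - h) / (n - 1)           # exclude each agent's own message
```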
#### 4.2.2 Attention Message Encoder
The second message encoder is based on the attention mechanism. We use the scaled dot-product attention as proposed by Vaswani et al. [19], which was explained in Section 3.2. The key and value for the attention mechanism are calculated using the incoming messages. The query is calculated based on the message of the current agent and its observation. We do this because the observation may include information that determines which information contained in the received messages is relevant to the agent.
\[Q=q(h_{i},o),K=k(h_{i}),V=v(h_{i}) \tag{8}\]
\[c_{i}=e(h_{i},o)=Attention(Q,K,V) \tag{9}\]
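A rough NumPy sketch of the attention message encoder in Equations 8-9 is given below. The weight matrices and the exclusion of each agent's own message from the keys and values are assumptions on our part, since the equations leave these details open.

```python
import numpy as np

def attention_message_encoder(h, obs, Wq, Wk, Wv):
    """h: (N, msg), obs: (N, obs_dim); Wq: (msg + obs_dim, d), Wk, Wv: (msg, d)."""
    def attend(q, K, V):
        s = q @ K.T / np.sqrt(K.shape[-1])
        w = np.exp(s - s.max()); w /= w.sum()
        return w @ V
    outputs = []
    for a in range(h.shape[0]):
        others = np.delete(h, a, axis=0)              # other agents' messages (assumption)
        q = np.concatenate([h[a], obs[a]]) @ Wq       # Eq. (8): query from own message + observation
        outputs.append(attend(q, others @ Wk, others @ Wv))  # Eq. (9)
    return np.stack(outputs)
```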
Since the attention mechanism is designed to be able to vary the importance of each of the inputs, the message encoder based on attention will be able to filter out unnecessary information. However, since the agents need to learn additional parameters to calculate the keys, queries and values, we expect training to be slower.
## 5 Experiments
In this section we describe our experiments. First, we explain the environment that was used in our experiments and how we scale this environment. Afterwards we analyse our results. In Figure 2, we can see a global overview of all the results. We first go into more detail for the results when we increase the number of labels and then we explain the results when increasing the number of agents. For each of the methods in all of the experiments we performed five runs with different random seeds. Our experiments are performed using RLlib [7] and Tune [8] which are built on the Ray framework [11]. We use parameter sharing between agents because it has been shown to improve training performance [3].
### Matrix Environment
For the experiments, we use a matrix environment, inspired by the Matrix Communication Games presented by Lowe et al. [10]. The environment consists of \(N\) agents and \(L\) labels. At the start of an episode, two labels are selected from the pool of \(L\) possible labels. Next, every agent randomly receives one of these two labels as its observation in a one-hot encoding. Because the distribution of the labels among the agents is random, it is possible that every agent receives the same label. The task of the agents is to say how many other agents received the same label. This results in a discrete action space. Each agent receives an individual reward of one if they correctly indicated the number of other agents with the same label or zero if they were unsuccessful. Therefore, the maximum reward that all of the agents can achieve together is equal to \(N\). During our experiments we normalize this reward by dividing the total reward of the agents together by the number of agents. This way the maximum reward the agents can achieve together will always be equal to one. The episodes are only one timestep long so the agents have only one opportunity to find the correct answer.
This environment is jointly observable to the agents because they can only see their own observations and not the full state. However, all the observations of the agents combined gives the complete state of the environment. In order to succeed, the agents need to communicate their label with the other agents. The environment is easily scalable because we can increase the number of agents by increasing \(N\) and increase the number of possible labels by increasing \(L\). Increasing \(N\) or \(L\) will both have a different effect on the environment. By increasing the number of agents \(N\), each agent will receive a larger number of messages and the action space will increase. When the number of labels \(L\) increases, each agent will need to be able to communicate a larger number of different labels in their messages.
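To make the environment concrete, a single episode can be sketched as follows; this is our own illustrative implementation, not the authors' environment code.

```python
import numpy as np

def play_matrix_episode(policy, n_agents, n_labels, rng=None):
    """policy: callable mapping a one-hot observation to an integer guess of
    how many *other* agents received the same label."""
    rng = rng or np.random.default_rng()
    pair = rng.choice(n_labels, size=2, replace=False)   # two labels per episode
    labels = rng.choice(pair, size=n_agents)              # random assignment to agents
    obs = np.eye(n_labels)[labels]                        # one-hot observations
    rewards = []
    for a in range(n_agents):
        truth = int((labels == labels[a]).sum() - 1)      # other agents with the same label
        rewards.append(1.0 if policy(obs[a]) == truth else 0.0)
    return sum(rewards) / n_agents                        # normalized team reward
```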
### Baseline
In addition to the two approaches with different message encoders, we also trained a group of agents that was not allowed to communicate. This clearly shows the importance of communication and whether or not the communicating
agents are still benefiting from communication. The agents that cannot communicate will learn a policy that is nearly random. The no communication agents are implemented by using the same architecture as CommNet but without any communication steps. This means we only use the encoding and decoding networks without the communication network.
### Hyperparameters and Network Architecture
Our hyperparameters were determined empirically and using a grid search to get the best results for each experiment. The resulting hyperparameters are still mostly the same for each experiment and can be seen in Table 1. Table 2 shows the weight \(\beta\) for the entropy loss, which differs for each experiment. Throughout the experiments we kept the size of the networks and the message size constant. This was done to ensure a fair comparison and to clearly see the effect of scaling up the environment. The encoding model consists of a single linear layer with input size equal to the number of labels and output size equal to the message size, followed by a ReLU activation. The communication model is represented using a single linear layer with input size equal to twice the message size and output size equal to the message size, followed by a ReLU activation. Finally, the decoder model is a single linear layer with input size equal to the message size and the output size equal to the number of actions which is equal to the number of other agents, followed by a softmax function. The decoding model is the only model that can vary between experiments, since the action space changes when we change the number of agents, which will increase the output size of the network. The communication model will not be used in the experiments where communication is not allowed, making the complete network smaller. In addition to these models, the agents that use the attention message encoder will have three additional models to calculate the queries, keys and values. These three models are represented with a single linear layer and the output size is equal to the message size. The input size for the key and value networks is equal to the message size. For the query network the input size is equal to the message size plus the number of labels since the observation is included in the input.
### Scaling the Number of Labels
In this section, we present a series of experiments where we increased the number of labels (\(L\in\{3,8,16,24\}\)) while keeping the number of agents fixed (\(N=3\)). We experimented with four different values for \(L\) based on the message size (\(=16\)), two values that are smaller than the message size (\(L=3\) and \(L=8\)), one value that is equal to the message size (\(L=16\)) and one value that is larger than the message size (\(L=24\)). With a simple communication protocol that encodes the labels using a one-hot representation, we would need a message size equal to the value of \(L\). Using this communication protocol both of the encodings should allow for an optimal action policy to be learned. For a number of labels that is higher than the message size, we would expect the mean message encoder to lose information when multiple message combinations result in the same mean message.
Figure 2(a) shows the average performance and the standard deviation during the final \(10\%\) of training when we increase the number of labels. The performance of the agents that do not communicate is slightly higher than the performance of a random agent would be. This is because the agents learn that some scenarios are more common than others and gain a small benefit from that. We see that for low values of \(L\) (\(L=3\) and \(L=8\)), both encodings are able to achieve the maximum reward. However, when the value of \(L\) increases we see that the performance of the attention message encoder starts to diminish. The performance of the mean message encoder drops only \(0.7\%\) between the experiment for \(L=3\) and the experiment for \(L=24\), while the performance of the attention-based message encoder drops \(12.6\%\). In the setup of our experiments, we expected the performance of the mean message encoder to drop when the number of labels becomes higher than the message size due to the fact that every message has the same importance.
Figure 2: Results in the matrix environment
If the mean message encoder can no longer preserve all relevant information, the attention message encoder should outperform the mean message encoder since it is a lot more flexible and can selectively change the attention values to change the importance of each message. However, in our results we can see that the mean message encoder always outperforms the attention message encoder, even for 24 labels. Still, we can see that once the number of labels becomes higher than the message size, the performance of the mean message encoder also starts to decrease.
Figure 3 shows the evolution of the reward during training for each configuration. We can see that across all of the experiments, the attention message encoder trains slower than the mean message encoder. Both the mean and attention message encoders get slower when we increase the value of \(L\). However, this effect is more prevalent in the results of the attention message encoder.
### Scaling the Number of Agents
In this section, we present a series of experiments where we increased the number of agents (\(N\in\{3,8,16,24\}\)) while keeping the number of labels fixed (\(L=3\)). The message size is the same as in the previous experiments (\(=16\)). Figure 2(b) shows the average performance and the standard deviation during the final 10% of training when we increase the number of agents. Figure 4 shows the evolution of the reward during training for each configuration. In Figure 2(b), we see that when we increase the number of agents all of the methods perform worse. With an increasing number of agents, the action space becomes larger as well. This makes the problem a lot more complex.
Figure 3: Results when increasing the number of labels
The agents that are not allowed to communicate still have a near random policy, which results in a decreasing performance when the action space becomes larger. When we compare the reward of the mean message encoder and the attention message encoder we can see a clear difference. The drop in reward is significantly larger for the attention message encoder. Even though the attention message encoder still has the lowest performance for 24 agents, we see that the difference becomes significantly smaller. In Figure 4 we see that the attention message encoder trains slower than the mean message encoder. Again, we see that this difference becomes larger at first but decreases again for 24 agents.
### Communication Analysis
To gain further insight into our results, we analyse the communication policy that is learned using the mean message encoder.
Figure 4: Results when increasing the number of agents
\begin{table}
\begin{tabular}{|c|c c c c|} \hline Label & Message[0] & Message[1] & Message[2] & Message[3] \\ \hline
0 & 2.0319 & 0.0000 & 1.9082 & 0.0000 \\
1 & 2.1820 & 0.0000 & 5.2146 & 0.0000 \\
2 & 12.2357 & 0.0000 & 7.6871 & 0.0000 \\
3 & 5.7066 & 0.0000 & 6.5708 & 0.0000 \\
4 & 3.3961 & 0.0000 & 5.5810 & 0.0000 \\
5 & 7.7053 & 0.0000 & 6.9608 & 0.0000 \\
6 & 1.9294 & 0.0000 & 3.7184 & 0.0000 \\
7 & 2.1637 & 0.0000 & 0.0000 & 0.0000 \\ \hline \end{tabular}
\end{table}
Table 3: Communication policy for the matrix environment with \(N=3\), \(L=8\) and a message size of four
For this purpose we use a simple version of the matrix environment with \(N=3\), \(L=8\) and a message size of four. Since the message size is smaller than \(L\), the agents cannot fall back on a one-hot encoding of the labels. Table 3 shows the communication policy that the agents have learned when they consistently achieve the goal. We see that the agents only require two of the available four numbers to communicate the label info. Using this communication policy we can calculate all the possible mean values for all possible message combinations. These values can be seen in Figure 5. Here we see that each of the means is separable from the other values with the exception of two message combinations that cause a very similar mean value. The distance between these two mean points is \(5.11\times 10^{-2}\). This shows that the agents will be able to reliably determine the global state of the environment, except in this specific case.
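The separability argument can be reproduced directly from Table 3 by computing the mean of every possible pair of label messages and the minimum distance between distinct mean points; a small sketch of our own (values copied from Table 3, restricted to the two active message dimensions):

```python
import numpy as np
from itertools import combinations_with_replacement

# message[0] and message[2] per label, taken from Table 3
msgs = np.array([[2.0319, 1.9082], [2.1820, 5.2146], [12.2357, 7.6871],
                 [5.7066, 6.5708], [3.3961, 5.5810], [7.7053, 6.9608],
                 [1.9294, 3.7184], [2.1637, 0.0000]])

pairs = list(combinations_with_replacement(range(len(msgs)), 2))
means = np.array([(msgs[i] + msgs[j]) / 2 for i, j in pairs])
dists = [np.linalg.norm(means[p] - means[q])
         for p in range(len(means)) for q in range(p + 1, len(means))]
print(min(dists))   # smallest gap between two different mean points
```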
Figure 5: Messages and the mean value of the possible combinations of two messages

Figure 6: The actual message values as functions of parameter \(\tau\), compared to the functions that best approximate the data according to our regression results.

If we want to describe the curve of the message values in Figure 5(b), we can use a parametric equation where we describe \(x\) and \(y\) as functions of a common parameter \(\tau\). We choose \(\tau\) in such a way that it increases as we go along the curve. We can plot both message values as functions of parameter \(\tau\). This can be seen in Figure 6. The curves resemble an exponential function for message[0] and a logarithmic function for message[2]. Therefore, we can write the parametric equation of our message values as displayed in Equation 10.
\[\begin{cases}x=a\cdot 2^{\tau}+b\\ y=c\cdot\ln(\tau+1)+d\end{cases} \tag{10}\]
We perform least squares linear regression to determine values for \(a\), \(b\), \(c\) and \(d\) that will result in the curves that most closely match the message data. This results in the following equation:
\[\begin{cases}x=0.083\cdot 2^{\tau}+2.026\\ y=3.761\cdot\ln(\tau+1)-0.281\end{cases} \tag{11}\]
These regressions match our message data very well (\(R^{2}=0.98\) for the regression of message[0] and \(R^{2}=0.99\) for the regression of message[2]). Figure 6 shows the message values as a function of \(\tau\) alongside the functions described in Equation 11. In Figure 5(b), we can see that the communication policy is very well suited to result in mean values, positioned at the midpoint of the line connecting the two messages, that can be separated from each other. By representing this curve as a parametric equation, we have determined that each of the message values can be represented using a different non-linear function. By learning a representation that combines an exponential function for one message value with a logarithmic function for the other, the agents were able to retain all necessary information in the encoding that the mean message encoder generates.
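The least-squares fit behind Equation 11 reduces to two ordinary linear regressions in the transformed features \(2^{\tau}\) and \(\ln(\tau+1)\); a minimal sketch (the assignment of \(\tau\) values to labels is an assumption on our part, since the paper does not list it explicitly):

```python
import numpy as np

def fit_parametric_curve(tau, x, y):
    """Least-squares fit of x = a*2**tau + b and y = c*ln(tau+1) + d."""
    A_x = np.column_stack([2.0 ** tau, np.ones_like(tau)])
    A_y = np.column_stack([np.log(tau + 1.0), np.ones_like(tau)])
    (a, b), *_ = np.linalg.lstsq(A_x, x, rcond=None)
    (c, d), *_ = np.linalg.lstsq(A_y, y, rcond=None)
    return a, b, c, d
```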
## 6 Discussion
In our experiments, we evaluated the performance of agents with either a mean message encoder or an attention message encoder. Across all the experiments, we saw that the mean message encoder outperforms the attention message encoder. Even when the number of labels exceeds the message size, the agents were able to find a communication protocol that achieves very good performance. By analysing the communication protocol in a small scale experiment, we were able to determine that the agents choose to represent the labels using a combination of an exponential and a logarithmic function. This ensures that no relevant information is lost.
We performed two types of experiments. In the first we increased the number of labels that the agents need to be able to communicate while in the second series of experiments we increased the number of agents. In the results summarized in Table 4, we see that the mean message encoder is not affected much by the increase in the number of labels while the performance of the attention message encoder clearly suffers. The difference in performance between the mean message
encoder and the attention message encoder keeps growing as we increase the number of labels. However, when looking at the second series of experiments, we see that the mean message encoder is affected by increasing the number of agents since the action space will grow as well. We can also see that the difference in performance between the mean message encoder and the attention message encoder does not keep growing as we increase the number of agents.
## 7 Conclusion
In this work, we evaluated the difference in performance between a mean message encoder and an attention message encoder when we increase the complexity of the environment and the number of agents. Intuitively, the attention approach seems the most ideal in this area since it can vary the importance of the different incoming messages. However, in the evaluated scenarios of the proposed matrix environment, we see that the mean message encoder consistently outperforms the attention message encoder. We were able to analyse the communication protocol of the agents that use the mean message encoder in a small scale scenario. The results showed that the agents use an exponential and a logarithmic function to avoid the loss of important information.
## 8 Future work
This work provides initial results that compare different message encoding techniques. For a full understanding of the advantages and disadvantages of each technique, we need to look into some more aspects of the message encoding problem. In this paper, we focus on the comparison between the mean technique and the attention technique. However, there are more possibilities. RNN's and a combination of RNN's and attention have successfully been applied in the past [6][13]. So far, we have only looked into continuous message encoding.
\begin{table}
\begin{tabular}{|c|c c c|c|} \hline & No Comm. & Mean & Attention & \(\Delta\) \\ \hline \(N=3\), \(L=3\) & \(0.379\pm 0.025\) & \(1.000\pm 0.000\) & \(1.000\pm 0.000\) & \(0.000\%\) \\ \(N=3\), \(L=8\) & \(0.380\pm 0.024\) & \(0.998\pm 0.004\) & \(0.995\pm 0.006\) & \(-0.301\%\) \\ \(N=3\), \(L=16\) & \(0.379\pm 0.024\) & \(0.999\pm 0.001\) & \(0.941\pm 0.051\) & \(-5.806\%\) \\ \(N=3\), \(L=24\) & \(0.379\pm 0.025\) & \(0.993\pm 0.008\) & \(0.847\pm 0.153\) & \(-14.703\%\) \\ \hline \(N=3\), \(L=3\) & \(0.379\pm 0.025\) & \(1.000\pm 0.000\) & \(1.000\pm 0.000\) & \(0.000\%\) \\ \(N=8\), \(L=3\) & \(0.149\pm 0.011\) & \(0.930\pm 0.137\) & \(0.858\pm 0.178\) & \(-7.742\%\) \\ \(N=16\), \(L=3\) & \(0.073\pm 0.006\) & \(0.564\pm 0.092\) & \(0.492\pm 0.104\) & \(-12.766\%\) \\ \(N=24\), \(L=3\) & \(0.065\pm 0.006\) & \(0.381\pm 0.091\) & \(0.363\pm 0.046\) & \(-4.724\%\) \\ \hline \end{tabular}
\end{table}
Table 4: Results for the different evaluated scenarios. We show the reward normalized according to the number of agents to get the reward per agent. We also show the percentage change in average reward when going from the mean message encoder to the attention message encoder.
The encoding of discrete messages poses some additional challenges. Due to the limited number of available messages, the risk of information loss is a lot higher for discrete communication. Therefore, in future work we want to look at more message encoding techniques and apply these to both continuous and discrete messages. Additionally, we also want to investigate their performance using different communication learning techniques. Finally, the environment used in our work has a constant number of agents and therefore the agents will receive a constant number of incoming messages at every timestep. Episodes in the matrix environment are also only one timestep long. In the future, we want to use more complex environments with a varying number of agents and longer episodes.
## Acknowledgements
Astrid Vanneste and Simon Vanneste are supported by the Research Foundation Flanders (FWO) under Grant Number 1S12121N and Grant Number 1S94120N respectively.
| 多数の多主体システムは、目標を適切に達成するために、間接的なコミュニケーションを必要とする。多主体強化学習技術を用いて、コミュニケーションプロトコルと行動プロトコルを学習することで、agentは情報を共有すべきかどうかを決定する柔軟性を獲得する。しかし、 agents の数は増大すると、これらのメッセージに含まれる情報のエンコーディングが必要となる。この論文では、メッセージに含まれる情報量を増やすことと、 agents の数が増やすことの両方の影響を調査する。これには、平均メッセージエンコーダと注意メッセージエンコーダの2つのメッセージエンコーディング方法を用いた評価が含まれる。私たちの実験は、マトリクス環境で行われた。驚くべき結果として、平均メッセージエンコーダが注意メッセージエンコーダを上回るという結果を示した。したがって、平均メッセージエンコーダを使用する agent のコミュニケーションプロトコルを分析し、その結果、agents は平均メッセージエンコーダを |
2306.12552 | SituatedGen: Incorporating Geographical and Temporal Contexts into
Generative Commonsense Reasoning | Recently, commonsense reasoning in text generation has attracted much
attention. Generative commonsense reasoning is the task that requires machines,
given a group of keywords, to compose a single coherent sentence with
commonsense plausibility. While existing datasets targeting generative
commonsense reasoning focus on everyday scenarios, it is unclear how well
machines reason under specific geographical and temporal contexts. We formalize
this challenging task as SituatedGen, where machines with commonsense should
generate a pair of contrastive sentences given a group of keywords including
geographical or temporal entities. We introduce a corresponding English dataset
consisting of 8,268 contrastive sentence pairs, which are built upon several
existing commonsense reasoning benchmarks with minimal manual labor.
Experiments show that state-of-the-art generative language models struggle to
generate sentences with commonsense plausibility and still lag far behind human
performance. Our dataset is publicly available at
https://github.com/yunx-z/situated_gen. | Yunxiang Zhang, Xiaojun Wan | 2023-06-21T20:36:55 | http://arxiv.org/abs/2306.12552v2 | # SituatedGen: Incorporating Geographical and Temporal Contexts into Generative Commonsense Reasoning
###### Abstract
Recently, commonsense reasoning in text generation has attracted much attention. Generative commonsense reasoning is the task that requires machines, given a group of keywords, to compose a single coherent sentence with commonsense plausibility. While existing datasets targeting generative commonsense reasoning focus on everyday scenarios, it is unclear how well machines reason under specific geographical and temporal contexts. We formalize this challenging task as SituatedGen, where machines with commonsense should generate a pair of contrastive sentences given a group of keywords including geographical or temporal entities. We introduce a corresponding English dataset consisting of 8,268 contrastive sentence pairs, which are built upon several existing commonsense reasoning benchmarks with minimal manual labor. Experiments show that state-of-the-art generative language models struggle to generate sentences with commonsense plausibility and still lag far behind human performance. Our dataset is publicly available at [https://github.com/yunx-z/situated_gen](https://github.com/yunx-z/situated_gen).
## 1 Introduction
In recent years, there has been substantial growth in new benchmarks evaluating commonsense reasoning for natural language processing (NLP) models, especially large-scale Pretrained Language Models (PLMs). Most existing commonsense reasoning benchmarks adopt natural language _understanding_ formats due to easy evaluation (e.g., accuracy), including multiple-choice question answering [46; 42; 20; 24], natural language inference [4], and detecting true/false statements [34; 45]. However, datasets measuring commonsense knowledge in natural language _generation_ are still relatively scarce. We aim to fill this research gap with a novel benchmark since real-world users of NLP systems would expect the generated outputs from LMs to be not only grammatically correct but also adhere to commonsense knowledge.
CommonGen[25], a generative commonsense reasoning challenge, has attracted wide attention recently. Given a set of keywords (e.g., {dog, frisbee, catch, throw}), the task requires models to compose a plausible sentence describing everyday scenario using all the provided keywords (e.g., "_The dog catches the frisbee when the boy throws it._"). While CommonGen focuses on social and physical commonsense in everyday life, it is unclear how well current commonsense generation models reason with factual knowledge about specific entities, which is referred to as _entity commonsense_[34]. In this work, we mainly consider geographical and temporal entities, as they provide extra-linguistic contexts [55] for commonsense reasoning and appear in a significant proportion of existing commonsense benchmarks (Section 4.2). To the best of our knowledge, we are the first to incorporate these situations into generative commonsense reasoning.
Furthermore, we argue that geographical and temporal contexts are important for commonsense reasoning. On the one hand, basic knowledge about geography and time is part of human commonsense [1; 6], such as _"Earth rotates on its axis once in 24 hours."_ On the other hand, certain types of commonsense knowledge are correlated with specific situations [53]. For example, _"July is summer"_ is true for people living in the northern hemisphere, while those living in the southern hemisphere would agree that _"July is winter"_.
Our proposed task SituatedGen (**Situated Generative Commonsense Reasoning**) requires the machines to generate a pair of contrastive sentences (formally speaking, an _antithesis_) with commonsense plausibility, given a group of keywords including geographical or temporal entities. For example, when provided with [July, United States, winter, Australia, summer, July], a reasonable output could be _"July is summer in the United States. July is winter in Australia."_, while a slightly different version _"July is summer in Australia. July is winter in the United States."_ does not adhere to commonsense.
The main challenge for machines to solve the SituatedGen task lies in _situated semantic matching_. In order to generate a pair of contrastive sentences, machines need to split the keywords into two groups (either explicitly or implicitly) based on geographical/temporal relevance and perform relational reasoning [32] within/between the keyword groups.
To study the challenging SituatedGen task, we construct a corresponding large-scale English dataset containing 8,268 pairs of situated commonsense statements. We design an automatic pipeline to collect data at scale with quality assurance and minimal human annotation efforts. Concretely, we derive commonsense statements with geographical or temporal contexts from existing commonsense benchmarks and mine contrastive sentence pairs based on entity-masked sentence similarity. We further manually filter out invalid examples in the test set to ensure the evaluation soundness. To assess the difficulty of our dataset, we conduct automatic evaluations on various generative (large) language models, including BART [22], T5 [40], and InstructGPT [35]. Results show these models lag far behind human performance, indicating that current models struggle to generate sentences adhering to commonsense under the SituatedGen setting. We believe that SituatedGen could serve as a complement to CommonGen and enrich the resource for evaluating constrained commonsense text generation in a more realistic setting.
The contributions of this work are three-fold: 1) We incorporate geographical and temporal contexts into generative commonsense reasoning and propose a novel task SituatedGen. 2) We construct a large-scale dataset to facilitate the studies of situated generative commonsense reasoning. The dataset is released and will contribute to the commonsense reasoning community. 3) We benchmark the performance of state-of-the-art generative language models on our dataset and demonstrate the difficulty of the task with a significant gap between machine and human performance.
## 2 Related Work
**Constrained Commonsense Text Generation.** Constrained Commonsense Text Generation [5] requires PLMs to generate commonsense text subject to a set of constraints. Commonsense generation models are currently evaluated by three tasks. First, Commonsense Explanation aims to generate an explanation for why a model selects a candidate answer to a given question. Second, \(\alpha\)NLG [4] is another commonsense generation task. The artificial intelligence models are provided with two observations in chronological order and need to generate a plausible hypothesis/explanation describing what happened between the observations. Third, in CommonGen [25], models should compose a plausible sentence describing everyday scenarios using all the provided concepts. This task has attracted much attention recently, and researchers advance machine performance on the dataset with contrastive learning [23], prototype editing [28], scene knowledge graph [49], etc. Our proposed task differs from these tasks with a focus on composing a _pair_ of contrastive sentences instead of a _single_ sentence and incorporating extra-linguistic contexts.
**NLP Benchmarks with Geographical and Temporal Contexts.** There are many emerging benchmarks in NLP that incorporate extra-linguistic contexts such as geographical and temporal contexts. TempLAMA [15] and GeoMLAMA [52] probe language models with masked text prompts to query geographical and temporal knowledge. In question answering, MC-TACO [57], TORQUE [33] and TimeQA [9] contain challenging questions involving temporal commonsense reasoning over the
duration, frequency, temporal order, and various other aspects of events. SituatedQA [55] is made up of open-domain questions whose answers vary across different geographical and temporal contexts. TimeDial [39] studies temporal reasoning in dialogues with a multiple-choice cloze task. In vision-and-language tasks, GD-VCR [53] and MaRVL [27] aim to collect commonsense questions and statements that are visually grounded and geographically diverse. Previous work mainly focuses on how well language models trained on a specific snapshot of a corpus can adapt to different contexts. While our dataset SituatedGen also considers such geographical and temporal contexts in language, we probe LMs for a new skill: reasoning about the commonsense relationships among extra-linguistic contexts. We also choose a different task format, generative commonsense reasoning, pioneered by [25], as it focuses on the commonsense reasoning capabilities of generative models rather than NLU models, which is under-researched by the community.
## 3 Task Definitions and Challenges
We use antithesis generation for evaluating generative commonsense reasoning under extra-linguistic contexts. In this section, we first introduce the definitions of our proposed task, followed by an analysis of the main challenges.
### Definitions
**Antithesis.** Antithesis refers to a figure of speech that expresses an opposition of ideas with a parallel grammatical structure of words, clauses, or sentences [29; 8]. An example of antithesis could be Neil Armstrong's famous quote "_That's one small step for a man, one giant leap for mankind_". In this work, we adopt a narrow sense of sentence-level antithesis, which means that two simple sentences with similar syntactic structures create a contradiction in semantics. Intuitively, the two qualifying sentences can be connected into a coherent sentence via conjunction words such as "while", "yet", and "whereas" (e.g., "_July is summer in the United States, while July is winter in Australia._"). We emphasize commonsense plausibility rather than the rhetorical effect of antithesis within the scope of this paper.
**Extra-Linguistic Contexts.** Following [55], we focus on two context types: geographical (geo) and temporal (temp). geo defines each context value as a geopolitical entity ("GPE"). temp defines each context value as a timestamp ("DATE", "TIME", "EVENT").
**Contextual Dependence.** We define that a contrastive sentence pair is _context-dependent_ if swapping any of the geo or temp entities between the two sentences leads to sentences that remain grammatically correct yet contradict commonsense. For example, for the sentence pair "_July is summer in China. July is winter in Australia._", if the two geo entities "China" and "Australia" are swapped, the resulting sentences do not adhere to commonsense anymore: "_July is summer in Australia. July is winter in China._" This indicates that they are context-dependent.
Contextual dependence is crucial for a proper evaluation of the generation results. Because sentence pairs that do not satisfy context dependence may have multiple valid answers (swapping the entity words leads to an extra correct answer), the metrics introduced in Section 6 cannot make a sound evaluation with only a single reference.
**Situated Generative Commonsense Reasoning.** We modify the mathematical formulation of the task CommonGen to define SituatedGen. The input of the task is a multiset1 consisting of \(k\) keywords \(x=[c_{1},c_{2},...,c_{k}]\in\mathcal{X}\), where each keyword \(c_{i}\in\mathcal{C}\) is a noun or entity, a single word or phrase. We denote \(\mathcal{X}\) as all possible combinations of keywords and \(\mathcal{C}\) as the vocabulary of keywords. Keywords in \(x\) should contain at least two geo or temp entities of the same type and two other keywords2.
Footnote 1: Multiset is a set that allows multiple instances for each of its elements.
Footnote 2: We do not explicitly provide the types of keywords in our dataset. The models are expected to infer which keyword is geo or temp if needed.
The output of the task is an unordered pair of coherent and plausible sentences \(y=\{s_{1},s_{2}\}\in\mathcal{Y}\) that satisfies the following conditions: 1) the sentence pair includes all keywords in \(x\); 2) each sentence has
at least one geo or temp keyword; 3) each sentence is geographical-temporal-semantically correct; 4) \(s_{1}\) and \(s_{2}\) form a pair of contrastive sentences, or antithesis; 5) \(s_{1}\) and \(s_{2}\) are context-dependent. The goal of the task is to learn a function \(f:\mathcal{X}\rightarrow\mathcal{Y}\) that maps a group of keywords \(x\) to a pair of sentences \(y\).
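Conditions 1) and 2) are mechanical and can be checked automatically; the rough sketch below is ours, assuming simple substring matching and a user-supplied list of geo/temp keywords (which the dataset itself does not label, see Footnote 2):

```python
def satisfies_basic_constraints(keywords, s1, s2, geo_temp_keywords):
    """Checks conditions 1) and 2): full keyword coverage across the pair,
    and at least one geo/temp keyword per sentence (substring matching)."""
    text = (s1 + " " + s2).lower()
    covers_all = all(k.lower() in text for k in keywords)
    each_has_context = all(
        any(k.lower() in s.lower() for k in geo_temp_keywords) for s in (s1, s2))
    return covers_all and each_has_context
```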
### Challenges: Situated Semantic Matching
As the goal of our task is to generate a pair of sentences instead of a single sentence, machines need to explicitly or implicitly classify the keywords into two subgroups based on their geographical and temporal semantic relevance, so as to generate one commonsense sentence with each subgroup. For example, given [July, China, winter, Australia, summer, July], the resulting keyword subgroups should be {July, China, summer} and {July, winter, Australia}.
During the process of keyword grouping and matching, machines need to make connections among keyword concepts with relational reasoning [32] over factual knowledge about these nouns and entities, a.k.a. _entity knowledge_[55], such as geographical location, temporal order, physical rules, social customs, etc. The matching process is important since wrong grouping results will lead to generated sentences without commonsense plausibility3.
Footnote 3: We note that under certain circumstances, wrong grouping results might produce correct answers via negative sentences. For example, the machine could generate “_July is **not** summer in Australia_” with {July, Australia, summer}. However, we observe that these are rare scenarios in our datasets, so we do not consider their confusing effects in our study.
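To make the combinatorial nature of this matching step concrete, the sketch below enumerates the candidate two-way splits of an input keyword multiset. It is purely illustrative and not part of the dataset pipeline or any baseline; the minimum group size of two is an assumption made only for this example.

```python
# Illustrative only: enumerate candidate two-way keyword groupings.
from itertools import combinations

def candidate_groupings(keywords, min_size=2):
    k = len(keywords)
    splits = []
    for r in range(min_size, k - min_size + 1):
        for idx in combinations(range(k), r):
            group1 = [keywords[i] for i in idx]
            group2 = [keywords[i] for i in range(k) if i not in idx]
            splits.append((group1, group2))
    return splits

kws = ["July", "China", "winter", "Australia", "summer", "July"]
# 50 ordered splits for k = 6 (each unordered pair is counted twice)
print(len(candidate_groupings(kws)))
```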
We require that the two sentences should have similar syntactic structures and express similar relationships (e.g., "_X lives in Y_"). This is important for securing the difficulty of the task as it prevents models from learning shortcuts [18; 47] to group keywords based on trivial syntactic (e.g., POS tag of the word) and semantic (e.g., two different kinds of relationship) information.
## 4 Dataset Collection
To study the SituatedGen challenge, we construct a large-scale English dataset. We design a pipeline to collect high-quality data at scale with minimal manual annotation efforts. Figure 1 illustrates the overall pipeline for dataset collection, which consists of three steps:
1. **QA-to-statement.** Converting question-answer pairs of existing commonsense question answering benchmarks into corresponding statements.
2. **Contexts Identification.** Identifying all entities in a statement with a NER tagger and removing those statements without geo and temp entities.
Figure 1: An overview of data collection pipeline. Inside the dotted box is a final example in the dataset.
3. **Contrastive Sentences Mining.** Automatically mining contrastive sentence pairs (antithesis) from the remaining commonsense statements based on entity-masked sentence similarity.
### QA-to-Statement
Our dataset is composed of commonsense statements, which are simple sentences describing commonsense knowledge, e.g., "_You would find many canals in Venice._" In recent years, numerous commonsense reasoning benchmarks have been proposed and they form a potentially available commonsense knowledge base with high quality and diverse content. Inspired by recent benchmarks that are sourced from existing datasets [55; 38], we aim to extract commonsense statements from these commonsense benchmarks. We assume that the knowledge in these commonsense benchmarks is _actually_ commonsense instead of encyclopedic knowledge, though they might not be shared locally in certain groups of people due to a lack of geographical diversity. That being said, we adopt and follow the concept of "commonsense" widely used in existing works.
We conduct a holistic study of commonsense reasoning datasets to date and select five different data sources after considering their size, annotation quality, and reasoning difficulty. They are CREAK [34], StrategyQA [17], CommonsenseQA [46], ARC [11] and OpenbookQA [30], respectively. We briefly introduce the nature of each dataset in Appendix D.1. Since the raw data come in different formats such as multiple-choice questions and Yes/No questions, we apply a specific preprocessing method for each dataset to transform them (i.e., question-answer pairs) into statements. The transformation details are also included in Appendix D.1. In general, we collected 35,997 commonsense statements from the five source datasets (statistics in Table 1).
### Contexts Identification
We now filter out commonsense statements without geographical or temporal contexts. Following [55], we identify sentences with extra-linguistic contexts by geo and temp entities. We use FLERT4[43], a named entity recognition (NER) model, to extract all entities from a sentence and remove those statements without any geo ("GPE") or temp ("DATE", "TIME", "EVENT") entities.
Footnote 4: [https://huggingface.co/flair/ner-english-ontontotes-large](https://huggingface.co/flair/ner-english-ontontotes-large)
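As a rough illustration of this filtering step, the snippet below uses a flair NER tagger to keep only statements containing a geo or temp entity. The model identifier and the boolean wrapper are choices made for this example only, and the label-access syntax can vary slightly across flair versions; the paper's exact filtering code is not shown here.

```python
# Sketch of the contexts-identification filter (not the paper's exact code).
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("flair/ner-english-ontonotes-large")
CONTEXT_TAGS = {"GPE", "DATE", "TIME", "EVENT"}

def has_geo_or_temp(statement):
    sent = Sentence(statement)
    tagger.predict(sent)
    tags = {span.get_label("ner").value for span in sent.get_spans("ner")}
    return bool(tags & CONTEXT_TAGS)

statements = ["July is summer in China.", "Dogs are loyal animals."]
valid = [s for s in statements if has_geo_or_temp(s)]  # keeps only the first
```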
Table 1 shows that of all the commonsense statements extracted from the five source datasets, 6.6% sentences have geo contexts and 5.5% have temp contexts, which we count as a significant proportion. Finally, we obtain 4,038 (11.2%) commonsense statements with extra-linguistic contexts.
### Contrastive Sentences Mining
We aim to automatically mine contrastive sentence pairs from the commonsense statement corpus. Antithesis mining has not been studied in the existing literature, so we propose a pilot algorithm.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Dataset & \# Sent & \# geo & \# temp & \# geo & \# Valid \\ & & & & \& temp & Sent \\ \hline CREAK & 5,779 & 868 & 552 & 153 & 1,573 \\ StrategyQA & 4,976 & 501 & 366 & 86 & 953 \\ CommonsenseQA & 10,962 & 487 & 215 & 12 & 714 \\ ARC & 7,787 & 165 & 426 & 52 & 643 \\ OpenbookQA & 6,493 & 31 & 119 & 5 & 155 \\ \hline Total & 35,997 & 2,052 & 1,678 & 308 & 4,038 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of contexts identification results. “Sent” means the commonsense statements collected in Section 4.1. “geo”/“temp” refer to statements with _only_ geographical/temporal entities. “geo & temp” refers to statements with _both_ geographical and temporal entities. “Valid Sent” means the commonsense statements with geo or temp contexts.
Figure 2: An illustration of the contrastive sentence mining algorithm.
We observe that after removing keywords from contrastive sentences, the remaining parts are very similar since antithesis sentences have parallel syntactic structures [8]. Based on this observation, we design the antithesis mining algorithm illustrated in Figure 2 consisting of three steps:
1. **Keyword Masking.** We extract all entities and other nouns as keywords in the sentence and replace each keyword with a [UNK] token, telling the pretrained language models to neglect the meaning of these keywords.
2. **Masked Sentence Similarity Matching.** We obtain the embedding of the keyword-masked sentence from a pretrained language model and calculate the cosine similarity between all possible sentence pairs.
3. **Rule-based Filtering.** We filter out invalid sentence pairs based on a fixed threshold of masked sentence similarity, number of keywords, and entity types.
We introduce the implementation of our antithesis mining algorithm in Appendix D.2. In this way, we efficiently extracted large-scale contrastive sentence pairs from all possible pairwise combinations of the aforementioned commonsense statements with extra-linguistic contexts5 (Section 4.2). For each contrastive sentence pair, we merge the keywords from each statement and randomly shuffle them to get the input data. The output is the concatenation of two statements.
Footnote 5: One statement might be paired with multiple statements, formulating multiple contrastive sentence pairs.
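The fragment below sketches the core of steps 1-2 (keyword masking followed by masked-sentence similarity). The encoder choice and the similarity threshold are placeholders, and the full rule-based filtering of step 3 (Appendix D.2) is omitted, so this is only an approximation of the mining procedure.

```python
# Minimal sketch of antithesis mining via masked-sentence similarity.
from itertools import combinations
from sentence_transformers import SentenceTransformer, util

def mask_keywords(sentence, keywords):
    for kw in keywords:
        sentence = sentence.replace(kw, "[UNK]")  # hide keyword semantics
    return sentence

def mine_pairs(statements, keywords_per_statement, threshold=0.8):
    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder
    masked = [mask_keywords(s, k) for s, k in zip(statements, keywords_per_statement)]
    emb = model.encode(masked, convert_to_tensor=True)
    pairs = []
    for i, j in combinations(range(len(statements)), 2):
        if util.cos_sim(emb[i], emb[j]).item() >= threshold:
            pairs.append((statements[i], statements[j]))
    return pairs
```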
When splitting the data into training, validation, and test set, we explicitly require that one statement cannot appear simultaneously in any two sets. Consequently, there is no overlap of the single sentence (or sentence-level keyword combinations) among the training, validation, and test data. This requirement enforces machines to reason over new combinations of keywords during the inference stage instead of memorizing existing keywords matching results. Statements with similar syntactic structures will also be divided into the same set to reduce overlap of syntactic templates across different sets6. To ensure the evaluation soundness, we manually filter out invalid examples in the _test_ set that are not fluent antitheses or context-dependent. 13.6% of test data is removed and the final dataset has 8,268 examples in total. See additional details of manual filtering in Appendix E.
Footnote 6: Please refer to Appendix D.3 for details of our dataset splitting algorithm.
## 5 Dataset Analysis
### Quality Analysis
To measure the quality of our automatically collected data, we randomly select 100 examples (i.e. sentence pairs) from the validation set (which is not manually filtered) and annotate each example for whether it is actually 1) (fluent) antithesis and 2) context-dependent. We find that 87% of the data are fluent antitheses and 80% of the data satisfy both requirements. Considering that our dataset is constructed through a fully automatic pipeline, this quality is satisfactory and meets the needs of training and evaluation. As we have discussed in Section 3.1, test examples not satisfying contextual dependence can fool the evaluation metrics, since there are multiple valid references despite the single one provided in the test set. Thanks to the additional manual filtering at
the end of Section 4.3, the test set is now qualified for evaluation. As for the unfiltered training set, even if a contrastive sentence pair is not context-dependent, it is still valuable training data, satisfying the other requirements for the target side (Section 3.1). Reduced size of training data after potential manual filtering is also unfavorable to the learning of models. As a result, we retain all the examples in the training set. In Appendix E, we analyze the error cases in detail, including non-contrastive and non-context-dependent sentence pairs.
### Dataset Statistics
Table 2 includes the basic statistics of the SituatedGen dataset. Using the ratio of unique statement count to sentence pair count (“# Unique Sents per Sent Pair”) as a measure of content/keyword diversity, the validation and test sets are relatively diverse (0.22/0.28) compared to the training set (0.14).
Distribution of Numbers of Input Keywords.Figure 3 shows the distribution of numbers of input keywords for all examples in the dataset. Intuitively, more input keywords imply an increased number of possible combinations, making it more difficult for the models to handle. The average number of input keywords is 7.21 and the distribution is fairly symmetrical (skewness=-0.25), suggesting that the SituatedGen has a reasonable difficulty.
Distribution of Context Types.Here we define three context types for pairs of contrastive sentences: a geo pair contains only geo entities; a temp pair contains only temp entities; if both sentences contain geo and temp entities, the pair belongs to the geo & temp type. We find that 78% of all sentence pairs are geo, 21% are temp, and the remaining 1% are geo & temp.
## 6 Methods
Baseline Models.We benchmark the performance of three prominent pretrained language generation models with encoder-decoder architecture -- BART [22], T5 [40], FLAN-T5 [10] -- and a decoder-only large language model (LLM) -- InstructGPT [35] with 175B parameters. We train BART, T5, and FLAN-T5 models in a fully supervised setting with the seq2seq format and expect that the models can learn to group keywords _implicitly_. Specifically, for the input of BART, we concatenate all shuffled keywords with a comma as the separation token "\(c_{1},c_{2},...,c_{k}\)". Regarding the input format of T5/FLAN-T5, we prepend the keyword sequence with a simple task description to align with its pretraining objective: "_generate two sentences with:_\(c_{1},c_{2},...,c_{k}\)". The outputs of all models are simple concatenations of the two target sentences \(s_{1}\) and \(s_{2}\). Since the output is an unordered pair, we feed two examples "\(x\to s_{1}\)\(s_{2}\)" and "\(x\to s_{2}\)\(s_{1}\)" to the model for each original training example. As for InstructGPT, we evaluate it in a few-shot setting. We build prompts with instruction and in-context demonstrations. For each test example, we randomly select 10 training examples as in-context demonstrations. We report the model hyper-parameters and GPT prompt format in Appendix F.1.
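The snippet below illustrates the seq2seq formatting just described (comma-separated keywords, the T5/FLAN-T5 task prefix, and the two target orderings per training example). It is a sketch of the data preparation only, not the training loop, and the helper name is chosen for illustration.

```python
# Sketch of how one training example is expanded for the seq2seq models.
import random

def format_example(keywords, s1, s2, model_family="t5"):
    keywords = keywords[:]          # copy before shuffling
    random.shuffle(keywords)
    joined = ", ".join(keywords)
    if model_family == "t5":        # T5 / FLAN-T5 get a task prefix
        source = f"generate two sentences with: {joined}"
    else:                           # BART gets the bare keyword sequence
        source = joined
    # The output pair is unordered, so both concatenation orders are targets.
    return [(source, f"{s1} {s2}"), (source, f"{s2} {s1}")]

pairs = format_example(
    ["July", "China", "winter", "Australia", "summer", "July"],
    "July is summer in China.",
    "July is winter in Australia.",
)
```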
Evaluation Metrics.[25] have well established the automatic evaluation protocol of the generative commonsense reasoning task. They demonstrated a strong correlation between automatic metrics and human evaluation results. Since SituatedGen adopts a similar format of keyword-to-text generation to CommonGen, we follow the evaluation protocol of CommonGen and do not include an extra manual evaluation in our study.
Concretely, we employ several widely-used automatic NLG metrics based on n-gram overlap -- BLEU [37], ROUGE [26], METEOR [3] -- and image caption metrics that focus on the consistency of keywords and their relationships -- CIDEr [48] and SPICE [2]. In order to assess the validity of the generated outputs, we include BERTScore[56], a content-oriented and semantic metric. We also adopt COVERAGE, which is the average percentage of input keywords that are present in lemmatized outputs. Additionally, we report the accuracy of keyword grouping results7 as MATCH, which serves
as a good indicator of the commonsense plausibility of the generated texts. See Appendix F.2 for the implementation details of these evaluation metrics.
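For concreteness, the sketch below computes a simplified COVERAGE score (the share of input keywords found in the lemmatized output). The tokenizer, lemmatizer, and the MATCH computation used in the paper follow Appendix F.2 and may differ from this illustration.

```python
# Simplified COVERAGE: percentage of input keywords present in the
# lemmatized generated text (illustrative, not the official scorer).
import spacy

nlp = spacy.load("en_core_web_sm")

def coverage(keywords, generated_text):
    lemmas = " ".join(tok.lemma_.lower() for tok in nlp(generated_text))
    raw = generated_text.lower()
    hits = sum(1 for kw in keywords if kw.lower() in lemmas or kw.lower() in raw)
    return 100.0 * hits / len(keywords)

print(coverage(["July", "China", "summer"], "July is the summer season in China."))  # 100.0
```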
## 7 Results
In Table 3, we report the experimental results of different baseline models on the test set of SituatedGen. We approximate human performance with 100 randomly sampled examples from the test set which are annotated by the authors of this paper. We observe that larger models tend to have better performance than smaller ones, as larger parameters store more commonsense knowledge and provide better language generation quality. Notably, the few-shot InstructGPT surpasses other fully-supervised models in every metric, demonstrating its strong reasoning ability. Nevertheless, it still lags far behind human performance. For example, there is a difference of 13.3 points in MATCH, indicating the lack of commonsense in machine generations. The large gap of keyword-oriented metrics (CIDEr and SPICE) also suggests that models find it difficult to infer the relationship between keywords. The significant gap between models and humans demonstrates the difficulty of SituatedGen and leaves much room for improvement in future research.
\begin{table}
\begin{tabular}{l l} \hline \hline Input Keywords & 24 hours, axis, one month, Earth, axis, Moon \\ Reference & It takes one month for the Moon to rotate on its axis. Earth rotating on its axis takes \\ & 24 hours. \\ BART-base & The axis of the Moon is 24 hours. One month is one month. \\ BART-large & There are 24 hours in one month. \\ T5-base & Earth has a 24 hour axis. One month is one month. \\ T5-large & One month is one month on Earth. The Moon is 24 hours away from the axis of the \\ & Earth. \\ FLAN-T5-base & The Moon is the axis of the Earth. One month is one month. \\ FLAN-T5-large & The Moon is 24 hours away from Earth. One month is one month. \\ InstructGPT & The Earth takes 24 hours to rotate on its axis. The Moon takes one month to orbit \\ around the Earth. \\ \hline Input Keywords & Paul, Emperor, China, Qin, Russia, dynasty \\ Reference & The Qin dynasty reigned in China. Paul I of Russia reigned as the Emperor of Russia. \\ BART-base & The Emperor of China worked in China. Paul served as the first emperor of the \\ & dynasty Qin. \\ BART-large & Emperor of the Qin dynasty. Paul existed in Russia. \\ T5-base & China is a dynasty of China. Paul Qin is the Emperor of China. \\ T5-large & Paul was the Emperor of Russia. The Qin dynasty ruled China. \\ FLAN-T5-base & Paul was the emperor of China. The history of Russia includes the history of Qin. \\ FLAN-T5-large & The Emperor of Russia was Paul the Great. Qin dynasty existed in China and had \\ & history in Russia. \\ InstructGPT & Emperor Paul was part of the Russian dynasty. Qin was part of the Chinese dynasty. \\ \hline \hline \end{tabular}
\end{table}
Table 4: Case studies of machine generations. Keywords appearing in the generation results are underlined.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Model (\# parameters) & \multicolumn{2}{c}{COVERAGE} & MATCH & BLEU-4 & ROUGE-2 & METEOR & CIDEr & SPICE & BERTScore \\ \hline BART-base (140M) & 78.3 & 60.5 & 22.7 & 29.9 & 29.6 & 18.3 & 53.9 & 48.4 \\ BART-large (400M) & 73.3 & 63.1 & 23.7 & 31.6 & 29.2 & 18.5 & 55.3 & 48.1 \\ T5-base (220M) & 75.6 & 55.3 & 21.9 & 28.7 & 29.8 & 17.4 & 53.6 & 46.2 \\ T5-large (770M) & 81.3 & 67.8 & 26.6 & 33.5 & 31.9 & 21.2 & 57.8 & 51.9 \\ FLAN-T5-base (220M) & 78.0 & 58.7 & 22.3 & 29.5 & 30.6 & 18.2 & 54.7 & 47.6 \\ FLAN-T5-large (770M) & 83.1 & 70.3 & 27.4 & 34.8 & 32.6 & 22.4 & 58.8 & 53.6 \\ geo & 83.1 & 70.8 & 26.8 & 33.9 & 32.4 & 21.9 & 58.2 & 52.8 \\ temp & 83.1 & 67.0 & 31.2 & 40.4 & 34.1 & 22.7 & 62.5 & 59.1 \\ \hline InstructGPT (175B, 10-shot) & **91.8** & **79.6** & **28.4** & **36.3** & **36.1** & **23.4** & **60.9** & **56.4** \\ \hline Human & 98.1 & 92.9 & 39.9 & 46.9 & 40.4 & 39.7 & 71.4 & 65.0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Experimental results on the test set of SituatedGen. The best model performance is in **bold**. Human performance is tested on a subset of 100 random samples.
Performance across Different Context Types.Table 3 reports the performance of the FLAN-T5-large model across different context types. The results show that the matching accuracy of temp type is lower than geo, indicating that temporal-dependent test examples are more challenging. However, the amount of temp data is less than geo in the training set, which may also give rise to the performance difference. Interestingly, the generation fluency of geo type is worse than temp, suggesting that it is more difficult to use geo entities to compose sentences smoothly.
Case Study.Table 4 shows two groups of generation examples by different models. The first example belongs to temp type ("24 hours" and "one month") and the second one is geo ("Russia" and "China"). We find that models are prone to omit keywords in their outputs. For example, BART-large only covers 2 out of 6 keywords in the first example. Besides, most of the observed generated outputs are not commonsensical due to incorrect keyword grouping results, e.g., "_There are 24 hours in one month_". InstructGPT results seem to have the best generation quality and commonsense plausibility among other models, but it still demonstrates incompetence in handling the contrastive relationships between the two sentences.
## 8 Conclusion
In this paper, we introduce the challenging task SituatedGen to incorporate geographical and temporal contexts into generative commonsense reasoning. We build a corresponding testbed to evaluate the situated reasoning capabilities of state-of-the-art text generation models. The benchmark performance shows that models struggle to generate commonsensical sentences and lag far behind humans. Altogether, our data will serve as a challenging benchmark for measuring commonsense knowledge in generative language models and support research progress of constrained commonsense text generation in a more realistic situation.
| 最近の文章生成において、commonsense reasoningは注目を集めています。生成型commonsense reasoningは、機械に、キーワードのグループを与えると、commonsense plausibilityのある単一のcoherentな文章を生成するタスクです。既存のデータセットは、日常的なシナリオをターゲットにしていますが、機械が地理的なおよび時系列的背景下でどのように考えるかは、明確ではありません。私たちは、SITUATEDGENというこの困難なタスクを形式化しました。 commonsenseを備えた機械が、キーワードのグループに含まれる地理的または時間的なエンティティを含む、対比的な文章のペアを生成する必要があります。私たちは、8,268対の対比的な文章ペアを構成する対応する英語のデータセットを導入しました。これは、いくつかの既存のcommonsense reasoning benchmarksに基づいて、手動作業の最小限に抑えています。実験の結果、最新の生成型言語モデルは、commonsense plausibilityのある文章 |
2301.06580 | Construction and Analysis of a Discrete Heat Equation Using Dynamic
Consistency: The Meso-scale Limit | We present and analyze a new derivation of the meso-level behavior of a
discrete microscopic model of heat transfer. This construction is based on the
principle of dynamic consistency. Our work reproduces and corrects, when
needed, all the major previous expressions which provide modifications to the
standard heat PDE. However, unlike earlier efforts, we do not allow the
microscopic level parameters to have zero limiting values. We also give insight
into the difficulties of constructing physically valid heat equations within
the framework of the general mathematically inequivalent of difference and
differential equations. | Ronald E. Mickens, Talitha Washington | 2023-01-16T19:36:22 | http://arxiv.org/abs/2301.06580v1 | Construction and analysis of a discrete heat equation using dynamic consistency: the meso-scale limit
###### Abstract.
We present and analyze a new derivation of the meso-level behavior of a discrete microscopic model of heat transfer. This construction is based on the principle of dynamic consistency. Our work reproduces and corrects, when needed, all the major previous expressions which provide modifications to the standard heat PDE. However, unlike earlier efforts, we do not allow the microscopic level parameters to have zero limiting values. We also give insight into the difficulties of constructing physically valid heat equations within the framework of the general mathematical inequivalence of difference and differential equations.
Key words and phrases:Heat equation; Dynamic consistency; Random walk; Asymptotic analysis; Continuum limit 1991 Mathematics Subject Classification: 35K05; 39A14
## 1. Introduction
The purpose of this paper is to examine the meso-scale limit of a discrete micro-level mathematical model constructed to represent simple heat transfer. The creation of the microscopic model is based on the application of the principle of dynamic consistency (Mickens (2005, 2015, 2021)). Under the appropriate mathematical assumptions, we are able to obtain the previous results of Maxwell (1867), Cattaneo (1948), and Vernotte (1958) regarding the replacement of the standard heat equation
\[u_{t}=Du_{xx}, \tag{1}\]
by the generalization
\[\tau u_{tt}+u_{t}=Du_{xx}, \tag{2}\]
where \(D\) is the temperature diffusion constant, \(\tau\) is a time-lag parameter, and
\[u_{t}=\frac{\partial u}{\partial t},\quad u_{tt}=\frac{\partial^{2}u}{ \partial t^{2}},\quad u_{x}=\frac{\partial u}{\partial x},\quad u_{xx}=\frac{ \partial^{2}u}{\partial x^{2}}.\]
Note that \((x,t)\) are the one-dimensional space and time independent variables, and \(u=u(x,t)\) is defined over appropriate intervals of \((x,t)\) with suitable boundary conditions and initial values. The need for a generalization of Eq. (1) comes from the fact that the solutions of Eq. (1) transmit information at an infinite speed (Ali et al. (2005); Christov et al. (2005); Dreher et al. (2009); Guyer et al. (1966); Joseph et al. (1989)), a condition which violates the principle of causality (Ali et al. (2005); Christov et al. (2005); Dreher et al. (2009); Guyer et al. (1966); Joseph et al. (1989)).
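For orientation only, the snippet below shows the standard explicit finite-difference update associated with Eq. (1). It is a generic textbook discretization, not the dynamically consistent micro-level model constructed in this paper, and the grid parameters are arbitrary.

```python
# Generic explicit (FTCS) update for u_t = D u_xx; illustrative only.
import numpy as np

def step_heat(u, D, dt, dx):
    lam = D * dt / dx**2                      # stability needs lam <= 1/2
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + lam * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u_new

x = np.linspace(0.0, 1.0, 101)
u = np.exp(-100 * (x - 0.5) ** 2)             # initial temperature bump
u = step_heat(u, D=1.0, dt=2.5e-5, dx=0.01)   # lam = 0.25
```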
For convenience, we now provide a resume of our major results: | ```
私たちは、微小モデルの熱伝達に関する Meso-レベルの動作を解析する新しい導出を提示します。この構築は、動的整合性原理に基づいています。私たちの研究は、必要に応じて、標準的な熱 PDE における変更を提供する主要な過去の表現を再構成して修正しています。しかし、これまでの取り組みとは異なり、微細レベルのパラメータにはゼロの制限値を許容しません。また、差分と微分方程式の一般的な数学的に不均衡な枠組みにおいて、物理的に有効な熱方程式を構築する難しさについて詳しく説明しています。
``` |
2307.02468 | An agile radio-frequency source using internal linear sweeps of a direct
digital synthesizer | Agile rf sources are a common requirement for control systems in quantum
science and technology platforms. The direct digital synthesizer (DDS) often
fills this role by allowing programmable control of the rf signals. Due to
limitations of the DDS architecture, implementing an agile rf source requires
rapid and precisely-timed programming of discrete updates that restrict the
source's agility. Here, we describe a microcontroller-based interface that
exploits the DDS's internal linear sweep accumulator to perform both sequential
linear sweeps, and standard discrete updates, at the ~10$\mu$s scale. This
allows updates to the swept parameter as fast as every 8 ns with greatly
reduced communication and memory overhead. We demonstrate the utility of this
system by using it as the reference to an optical phase-locked-loop to
implement rapid, adjustable laser frequency sweeps in a Rydberg
Electromagnetically Induced Transparency spectroscopy measurement. | Ethan Huegler, Joshua C Hill, David H Meyer | 2023-07-05T17:43:51 | http://arxiv.org/abs/2307.02468v2 | # An agile radio-frequency source using internal linear sweeps of a direct digital synthesizer
###### Abstract
Agile rf sources are a common requirement for control systems in quantum science and technology platforms. The direct digital synthesizer (DDS) often fills this role by allowing programmable control of the rf signals. Due to limitations of the DDS architecture, implementing an agile rf source requires rapid and precisely-timed programming of discrete updates that restrict the source's agility. Here, we describe a microcontroller-based interface that exploits the DDS's internal linear sweep accumulator to perform both sequential linear sweeps, and standard discrete updates, at the \(\sim 10\,\mathrm{\SIUnitSymbolMicro s}\) scale. This allows updates to the swept parameter as fast as every 8 ns with greatly reduced communication and memory overhead. We demonstrate the utility of this system by using it as the reference to an optical phase-locked-loop to implement rapid, adjustable laser frequency sweeps in a Rydberg Electromagnetically Induced Transparency spectroscopy measurement.
## I Introduction
Quantum information science & technology requires agile radio-frequency (rf) sources to satisfy the many demands of quantum control: for direct application to quantum systems, as drivers for acousto-optic modulators (AOMs), electro-optic modulators (EOMs), or inputs to various phase-locked-loops (PLLs). These basic tools find applications in many different quantum platforms, including trapped ions,[1] neutral-atom Bose-Einstein condensates,[2] superconducting qubits,[3] solid-state defect centers[4] and atomic clocks.[5] As control systems scale up to satisfy more challenging applications, the required agile rf sources also scale in number, making cost and availability important metrics along with their performance.
Direct digital synthesizers (DDSs) are commonly used in these roles as they satisfy many of the desired characteristics.[6] For example, the Analog Devices AD9959[7] is a DDS that is widely used thanks to its four phase-synchronous outputs, where each has individually programmable amplitude, phase, and frequency.[8] Moreover, it incorporates an internal linear sweep capability that can dynamically vary an output parameter. These features allow for a highly flexible rf source that scales well with control system size.
Arbitrary and agile waveforms, including non-sinusoidal waveforms required in experimental systems, are often more difficult to implement with a DDS than with other purpose-built technologies such as arbitrary waveform generators. Beyond arbitrary outputs, the required agility (namely, changes on a timescale faster than the relevant dynamics being investigated) must remain synchronous with a larger experimental control system. A DDS can implement agile waveforms via rapid changes to the rf amplitude, phase, and/or frequency of its sinusoidal output. However, because the DDS's local memory only stores the current and next instruction, changing the output rapidly and synchronously with a larger control system requires frequent, precise, fast communications.
Current methods to solve this challenge typically use microcontrollers or field-programmable-gate-arrays (FPGAs) to rapidly program the DDS outputs point-by-point from a stored memory of instructions based on external triggers.[9; 10; 11] This method allows for arbitrarily varying amplitude, frequency, and phase waveforms. However, it has limited time resolution because the finite communication speeds involved result in each update requiring \(\gtrsim 1\,\mathrm{\SIUnitSymbolMicro s}\).
Here, we demonstrate an alternative method of controlling an AD9959 DDS that circumvents this limitation, leading to a more agile rf source. Similar to the existing methods, we employ a microcontroller to program the DDS. However, instead of programming successive static outputs individually, we can also employ the linear sweep functionality of the DDS itself. By successively programming new linear sweeps, we can generate an agile waveform that is synchronous with external triggers. Thanks to reduced communication overhead to the DDS, this technique allows for much finer time resolution (down to \(\sim 8\,\mathrm{ns}\)) when sweeping a single parameter. The technique maintains the ability to statically adjust all parameters at the DDS programming timescale (\(\sim 10\,\mathrm{\SIUnitSymbolMicro s}\)). Though this is insufficient for truly arbitrary waveform generation, it significantly broadens the range of applications to which the DDS can be applied. Furthermore, additional control can be obtained via augmentation with external voltage-controlled attenuators or phase shifters.
This work describes the hardware and firmware necessary for implementing what we have dubbed the DDS Sweeper. We first provide a brief overview of DDS operational principles and limitations. We then introduce the hardware used to implement the DDS Sweeper followed by an overview of the custom microcontroller firmware that provides an interface between a computer and the DDS. The open-source code and pre-compiled binary of the firmware are available online.[12] We demonstrate the operation of the DDS Sweeper via simultaneous measurement of the amplitude, phase, and frequency of the outputs for various test waveform patterns. Finally, we provide an example of the DDS sweeper's utility by using it as the frequency reference to an optical PLL in a Rydberg Electromagnetically Induced Transparency (EIT) spectroscopy measurement. We use the sweeper to enable rapid successive optical frequency sweeps of varying rates within a single measurement.
## II DDS operation
A DDS produces an output through the use of a phase accumulator and functions as a programmable fractional-frequency divider of an externally provided reference clock frequency.[13] A phase accumulator stores two values, the current phase and the phase increment value. On each clock cycle (derived from the external reference), the phase increment is added to the current phase value. The phase and increment values are stored in 32-bit registers, and the operation relies on the modulo-\(2^{32}\) overflow that occurs when adding 32-bit integers. The current phase value is passed through a phase to amplitude converter (typically a sine lookup table) and that amplitude is used as the input for a digital to analog converter (DAC). The filtered output of the DAC will be a sine wave at the frequency programmed into the DDS. For the detailed description of DDS operation that follows, we use the AD9959 as a concrete example. Further details beyond this summary can be found in the AD9959 datasheet.[8]
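A toy numerical model of this accumulator, useful only for building intuition about the wrap-around arithmetic, might look as follows; the bit width and system clock follow the AD9959 values quoted in this section, and the ideal sine lookup ignores DAC quantization.

```python
# Toy phase-accumulator model: 32-bit register that wraps modulo 2**32,
# followed by an ideal sine lookup. Illustrative, not firmware code.
import numpy as np

def dds_samples(ftw, n_samples, f_sys=500e6):
    n = np.arange(n_samples, dtype=np.uint64)
    phase = (ftw * n) % 2**32                 # accumulator with natural overflow
    return np.sin(2 * np.pi * phase / 2**32)  # phase-to-amplitude conversion

samples = dds_samples(ftw=858993459, n_samples=1000)  # ~100 MHz at f_sys = 500 MHz
```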
To program a frequency into the DDS, that phase increment - or frequency tuning word (\(FTW\)) for the AD9959 - must be provided. The \(FTW\) can be calculated with \(FTW=\frac{f_{out}\cdot 2^{32}}{f_{sys}}\) where \(f_{out}\) is the desired output frequency and \(f_{sys}\) is the system clock frequency of the DDS. By nature of its operation, the DDS system clock frequency should be as high as possible to prevent discretization error. It should also have low phase noise to limit the rf output phase noise. For the AD9959, the maximum system clock frequency is 500 MHz, which can either be provided directly, or via a programmable PLL multiplier from a lower frequency at the expense of higher output phase noise.
The DDS also has the ability to control the phase and amplitude. There is a 14-bit phase offset register which stores a phase offset word (\(POW\)) which is added to the phase accumulator value before the phase to amplitude converter, allowing offsets spanning 0 to 360\({}^{\circ}\). \(POW\) can be found by \(POW=\frac{\phi\cdot 2^{14}}{360}\) where \(\phi\) is the desired phase offset in degrees.
The amplitude is controlled by a 10-bit amplitude scale factor (\(ASF\)) which sets the ratio of maximum possible output current to the desired output current. It can be found with \(ASF=r_{I}\cdot 2^{10}\) where \(r_{I}\) is the desired ratio of output to maximum current.
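Putting the three formulas together, a short helper for computing the register words (comparable to what the firmware does internally from user-supplied Hertz, degree, and percent values) might look like the following; the rounding to integers is assumed for this illustration.

```python
# Worked example of the FTW, POW, and ASF formulas above (f_sys = 500 MHz).
F_SYS = 500e6

def ftw(f_out):          # 32-bit frequency tuning word
    return round(f_out * 2**32 / F_SYS)

def pow_word(phi_deg):   # 14-bit phase offset word
    return round(phi_deg * 2**14 / 360)

def asf(current_ratio):  # 10-bit amplitude scale factor
    return round(current_ratio * 2**10)

print(ftw(100e6))    # 858993459 -> 100 MHz output
print(pow_word(90))  # 4096      -> 90 degree offset
print(asf(0.5))      # 512       -> half of maximum output current
```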
The AD9959 DDS has additional functionality for performing a linear sweep of the output frequency, phase, or amplitude through the use of a sweep accumulator. The sweep accumulator is made up of a current value register and an 8-bit ramp rate counter register. When linear sweep mode is active, the current value register is added to the \(FTW\), \(POW\), or \(ASF\), depending on the parameter being swept. Fig. 1 shows the four parameters that define such a sweep: start point, end point, sweep delta \(DW\), and ramp rate \(RR\). The start and end points set the limits of the sweep while the sweep delta and ramp rate control the magnitude and duration, respectively, for each update of the accumulator. The start point, end point, and sweep delta register values are all calculated identically to the tuning word of the parameter being swept (i.e. \(FTW\), \(POW\), or \(ASF\)). The ramp rate is an 8-bit integer that is defined as \(RR=\Delta tf_{sync}\), where \(f_{sync}\) is the DDS sync clock which runs at one quarter of the system clock. The AD9959 also allows for distinct values of sweep delta and ramp rate depending on the direction of the sweep where going from start to end points is defined as rising, the opposite direction as falling. A dedicated digital input (Profile Pin) controls the sweep direction.
While in linear sweep mode, the DDS decrements the ramp rate counter register on every cycle of the sync clock. When the ramp rate counter reaches zero, the profile pin is checked. If the profile pin is logic high (low), the rising (falling) delta word \(DW\) is added to the current value register and the ramp rate counter register is reset to the rising (falling) ramp rate \(RR\). Once the output of the phase accumulator and the sweep accumulator add up to either the start or end point the output is held constant.
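The following behavioral sketch condenses that update rule (together with the accumulator-draining behavior discussed later in the Linear Sweep Control subsection) into a single per-sync-clock-tick function. Register widths, saturation details, and the exact falling-sweep semantics of the AD9959 are simplified here; this is not a register-accurate emulation.

```python
# Simplified model of the sweep accumulator, advanced one sync-clock tick
# per call. Illustrative only; the real device behavior has more corner cases.
def sweep_tick(acc, counter, profile_high, rdw, fdw, rrr, frr, span):
    counter -= 1
    if counter == 0:
        if profile_high:                       # rising: add rising delta word
            acc = min(acc + rdw, span)
            counter = rrr
        else:                                  # falling: drain the accumulator
            acc = max(acc - fdw, 0)
            counter = frr
    return acc, counter                        # output = start point + acc
```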
All of the control values - \(FTW\), \(POW\), \(ASF\), and other configuration options - are stored in registers on the DDS which can be written to via a serial interface. The DDS is equipped with an input buffer which stores all the writes it receives over the serial interface until an IO update signal is received. Upon receiving an IO update signal (IOUD) all of the registers are updated at once with their new values. Using a microcontroller to quickly write to this buffer and trigger updates allows the DDS to produce an agile waveform.
Figure 1: Definitions of the parameters that define a generalized linear sweep for a DDS: Sweep Delta, Ramp Rate, Start Point, and End Point.
Figure 2: Comparative outputs of a frequency sweep using the DDS linear sweep mode (blue) versus the closest approximation using steps of the fastest stepping mode (orange). The IO update pulses for each sweep are also shown.
As mentioned in the Introduction, there are existing solutions which implement agile waveforms via rapid static updates (i.e. only changing \(FTW\), \(POW\), and \(ASF\)).[9; 10; 11] These solutions do not utilize the DDS's linear sweep capability, largely because this mode can linearly sweep only a single parameter and is more challenging to implement than static updates. However, it is still desirable to use the native linear sweeps of the DDS in many applications. There are comparatively few examples in the literature that exploit these native linear sweeps.[14] Those that do are often portions of a larger control system and lack detailed descriptions of their use or utility as we provide here.
Figure 2 highlights the differences between a static update sweep (orange) and a native linear sweep (blue) when generating a linearly ramping frequency that then resets to its initial value. Using the native linear sweep, the DDS is only programmed twice (denoted by the corresponding IOUD trigger pulses), once for the sweep and once for the reset. The static update sweep requires programming of the DDS at each step. Since the programming time is finite, the sweep itself is discretized. By using the native linear sweep capability of the DDS, we can obtain higher quality linear sweeps with significantly lowered communication overhead between the microcontroller and the DDS itself. Furthermore, since communication rates between a host computer and the microcontroller are often limited as well, the large table of pre-programmed values necessary for the static update sweep leads to long programming times (\(\sim 1\,\mathrm{s}\)). This hinders rapid iteration of sweep parameters. The reduced overhead of native linear sweeps can therefore improve overall experiment control sequence programming time.
## III Hardware
The Sweeper is implemented using an Analog Devices 9959 (AD9959) evaluation board, and a Raspberry Pi Pico microcontroller board (based on the RP2040 microcontroller) to interface with a control system,[15] as seen in Fig. 3. The microcontroller is soldered to a custom interfacing PCB which provides the necessary routing and slots in the header pins of the DDS evaluation board. The DDS evaluation board also requires 3.3 V and 1.8 V power inputs, which the microcontroller can supply with the help of a linear regulator. The DDS outputs are programmed by sending serial commands to the microcontroller via a USB interface. The microcontroller interprets these commands and programs the appropriate registers of the DDS via a Serial Peripheral Interface (SPI) at 62.5 Mbits/s.[16]
The Raspberry Pi Pico was chosen as the microcontroller for its Direct Memory Access (DMA) channels and Programmable Input Output (PIO) cores. DMA channels are dedicated hardware for moving memory without utilizing the main core. We use the DMA to send the instruction table stored in memory to the SPI controller as quickly as possible while allowing the main core to simultaneously prepare the next instruction. The PIO cores are independent processor cores with limited functionality but direct access to the GPIO pins. The Pico's PIO cores allow the Sweeper to precisely time multiple aspects of its operation, including: programming of the registers via an SPI interface, the IO Update (IOUD) triggers, and the profile pins that control sweep directions.
In the default configuration, the microcontroller also provides its 125 MHz system clock to the DDS as a reference clock. With the default PLL multiplier of 4 this gives the DDS a system clock of 500 MHz, which is the maximum system clock supported by the AD9959. The Sweeper can also be configured to allow an external reference clock for the DDS.
## IV Firmware
The Sweeper operates in two modes: manual mode or buffered execution mode. Buffered mode has three sub-modes: single steps, linear sweeps, or a combination of both.
In manual mode, instructions sent to the Sweeper over the USB interface update the outputs of the DDS in real time.
In buffered execution mode, the Sweeper accepts a sequence of instructions which it stores in memory. At a later point the Sweeper can recall the sequence and successively program the instructions into the DDS. To keep the outputs synchronized with a larger system, the buffered execution process can be triggered from an external source for each step of the sequence, or the microcontroller can time itself based on the Pico's clock.
The microcontroller writes each instruction of a sequence to the input buffer of the DDS immediately after the previous IO update has completed. When the time for updated output arrives, the IO Update signal is sent to the DDS and the next instruction is written to the input buffer of the DDS. This minimizes the delay between triggers and output updates. When receiving external trigger signals, the Sweeper has a pipeline delay of 4 clock cycles; at 125 MHz this corresponds to 32 ns \(\pm 8\) ns, since the microcontroller buffers GPIO inputs to occur on clock cycle edges.
A buffered sequence can be programmed in one of three ways: as single steps, a sequence of linear sweeps, or a combination of both. Single stepping allows discrete changes to frequency, amplitude, and phase parameters simultaneously, replicating the mode of operation that others have implemented. The sweep and combination modes utilize the DDS linear sweep functionality, but require setting more registers of the DDS and therefore require a longer dwell period in between instructions to send all the required bytes, as seen in Tab. 1.
Figure 3: Block wiring diagram for the DDS Sweeper. A computer communicates with the Raspberry Pi Pico running the Sweeper’s firmware via a USB serial interface. The Pico then communicates directly with the AD9959 via an SPI. It also provides power and an optional reference clock for the DDS.
Similarly, the instructions for sweep and combined modes are longer, so fewer of them can be stored in memory. Instructions are only stored for the channels being utilized, and the maximum number of instructions for each mode of operation can be seen in Tab. 2.
Once the sequence is programmed, sending a start command to the Sweeper will begin sending the instructions stored in memory. Since system memory will not persist through a power cycle, there is functionality to store and recover instruction sequences from non-volatile storage on the microcontroller board.
Users send instructions over USB to the microcontroller with the desired outputs in units of Hertz, Degrees, and percentages. The microcontroller calculates the tuning words from those values and translates them into the expected bit alignment for the DDS. The tuning resolutions for frequency, phase, and amplitude are 0.116 Hz, 0.022\({}^{\circ}\), and 0.1% of maximum output current, respectively.
If it is desired for the Sweeper to time itself, an additional parameter can be sent with the number of clock cycles the instruction should take. At runtime, these wait lengths are sent to a PIO core through DMA to handle the timing, based on the Prawnblaster Pseudoclock project.[17]
### Linear Sweep Control
Performing arbitrary sweeps on the DDS as part of a sequence requires careful consideration of the internal operational details of the AD9959 DDS.
A reliable rising sweep depends upon the sweep accumulator being at zero when the sweep begins. If one sweep follows another, the sweep accumulator will not generally be at zero. The AD9959 does have an "autoclear sweep accumulator" functionality, which, when enabled, will reset the sweep accumulator to zero upon an IO Update signal. This does allow for running successive upward sweeps.
The AD9959 implementation of linear sweeps does not inherently support arbitrary falling sweeps. This is because the sweep accumulator is an unsigned register that is always interpreted as positive and therefore added to the start point value: there is no way to subtract the sweep accumulation from the start point. Since the sweep delta is not applied if the current value is greater than the end point, the combined result is that if the start point is larger than the end point, the frequency output will remain constant at the start point value.
To run a falling sweep the start point must be programmed as the lowest output desired from the sweep, and the end point programmed as the highest output desired from the sweep. Then the sweep accumulator must first be filled up by a rising sweep so that when the profile pin is set to low the falling sweep delta can be subtracted from the sweep accumulator until the sweep accumulator holds a value of zero and the DDS output is equal to the value programmed into start point.
Since an arbitrary sequence cannot guarantee that all falling sweeps will be preceded by a rising sweep, we implement a hidden rising sweep before a falling sweep. The effect of these hidden sweeps is minimized by setting the rising sweep delta to its maximum value, so that the sweep accumulator is filled up by just one instance of the rising sweep delta being applied. If the rising ramp rate is set to 1, then it will only take one cycle of the sync clock (4 cycles of the system clock) to fill up the sweep accumulator (with the default 500 MHz system clock this is 8 ns). This works with the limitation that the falling ramp rate must be set to 1 to match the rising ramp rate that filled the accumulator. This limitation was determined empirically as the AD9959 behavior in this circumstance is not well documented.
With this limitation in mind, the minimum sweep rates differ between rising and falling sweeps. The maximum frequency sweep rate is \(\pm\)62.5 GHz/s while the minimum sweep rates are 57 kHz/s and \(-\)14.5 MHz/s. For the amplitude scale, the maximum rate is \(\pm\)100%/8 ns with minimum rates of 100%/2.1 ms and \(-\)100%/8.2 us. The phase maximum rate is \(\pm\)360\({}^{\circ}\)/2.1 us with minimum rates of 360\({}^{\circ}\)/33.43 ms and \(-\)360\({}^{\circ}\)/133.3 us.
An alternative solution to running downward sweeps is to turn the autoclear sweep accumulator functionality off for downward sweeps. Somehow this allows successive downward sweeps, but it makes the range of downward sweeps dependent on preceding upward sweep. If an upward sweep only fills the sweep accumulator half way, the next downward sweep will not be able to drain more than the half of the sweep accumulator, even if it was programmed to sweep over the full range of the accumulator.
## V Example Outputs
\begin{table}
\begin{tabular}{l c c c c} & \multicolumn{4}{c}{Num. of Outputs} \\ \cline{2-5} & 1 & 2 & 3 & 4 \\ \hline \multicolumn{5}{c}{Internally Timed} \\ \hline Single Stepping & 5000 & 5000 & 5000 & 4032 \\ Sweep Mode & 5000 & 3895 & 2611 & 1964 \\ Sweep and Step & 5000 & 3148 & 2108 & 1584 \\ \hline \multicolumn{5}{c}{Externally Timed} \\ \hline Single Stepping & 16656 & 8615 & 5810 & 4383 \\ Sweep Mode & 8327 & 4234 & 2838 & 2135 \\ Sweep and Step & 6751 & 3422 & 2291 & 1722 \\ \hline \end{tabular}
\end{table}
Table 2: Maximum number of stored instructions for each buffered execution mode.
\begin{table}
\begin{tabular}{l c c c c c} & & \multicolumn{4}{c}{Num. of Outputs} \\ \cline{3-6} & & 1 & 2 & 3 & 4 \\ \hline \multirow{2}{*}{Single Stepping} & cycles & 500 & 750 & 1000 & 1250 \\ & \(\mu s\) & 4 & 6 & 8 & 10 \\ \cline{2-6} \multirow{2}{*}{Sweep Mode} & cycles & 1000 & 1500 & 2000 & 2500 \\ & \(\mu s\) & 8 & 12 & 16 & 20 \\ \cline{2-6} \multirow{2}{*}{Sweep and Step} & cycles & 1250 & 2000 & 2750 & 3500 \\ & \(\mu s\) & 12 & 18 & 24 & 30 \\ \hline \end{tabular}
\end{table}
Table 1: Minimum time between instructions for each buffered execution mode.
To confirm functionality of the sweeper, we implement independent measures of the amplitude, frequency, and phase of the output(s). Due to the short timescales of the dynamic features of the DDS Sweeper, these measurements must be able to resolve similar timescales. To measure the amplitude, we use an oscilloscope with a 1 GHz bandwidth. The relative phase between two channels of the DDS (one serving as a fixed reference) is measured using a phase frequency detector provided by a HMC439 evaluation board.[7] The outputs of the detector are recorded by the same oscilloscope. The frequency is measured using a delayed self-homodyne measurement. The output of the recombining mixer is low-pass filtered and measured directly by the same oscilloscope. Delay line length and operating frequency shown in the plots were chosen to ensure the homodyne output was centered within the linear regime. By appropriate power splitting and amplification, all three measurements could be performed simultaneously on a single output.
In Fig. 4 we show demonstrative successive sweeps of the individual parameters. Each sub-figure represents a single sweep on a distinct parameter. The sub-figures also show the update pulses that mark when the microcontroller sent a new instruction to the DDS. For all of these sequences, the DDS Sweeper uses its own internal timing to determine the start of each instruction.
Fig. 4(a) shows a measurement of a constant 100 MHz output from the DDS as the amplitude scale factor is swept. In Fig. 4(b), two outputs of the DDS are kept at a constant 100 MHz. The output of channel 0 is held at a constant phase offset while the channel 1 output is run through a sequence of phase changes spanning 0 to 2\(\pi\). Fig. 4(c) shows a measurement of the output frequency. Note that this sequence includes successive upward and downward linear sweeps with different sweep rates. These sequences were chosen to have a mixture of linear sweeps and discrete jumps of the varying parameter in order to demonstrate the flexibility of the DDS Sweeper.
Fig. 5 shows the Sweeper operating in Sweep and Step mode, where a single parameter can employ linear sweeps and the other parameter can be discretely changed at each update. Here the frequency is the parameter being swept (c) while amplitude (a) and phase offset (b) are stepped simultaneously. Part (d) shows the the update pulses from the microcontroller. This dynamic waveform only sends 9 instructions to the DDS, greatly reducing communication overhead.
In its default configuration, the DDS Sweeper derives the DDS reference clock from the Pico's on-board crystal oscillator. Because this reference is used directly to produce the outputs, its phase noise will have a strong impact on the phase noise of the outputs. Using a Berkeley Nucleonics 7340 phase noise tester,[7] we measured the phase noise of the DDS when clocked directly from the Pico at three frequencies: 100.3, 75.1, and 40.1 MHz, as shown in Fig. 6.
Figure 4: Demonstrative measurements of the DDS Sweeper capabilities showing (a) amplitude sweep, (b) phase sweep, (c) frequency sweep. Each trace also shows the digital IO update pulses marking the start of each new instruction to the AD9959.
Figure 5: Demonstrative measurements of the DDS Sweeper capabilities showing (a) amplitude steps, (b) phase steps, and (c) frequency sweep performed simultaneously on a single output channel. Also shown is the (d) digital IO update pulses marking the start of each new instruction to the AD9959.
The overall noise floor is approximately 20 dB higher than the minimum noise of the DDS (when using the on-board PLL and multiplier) and the large peak around 200 kHz from the PLL is more pronounced. This is to be expected as the Pico's crystal oscillator is not designed to be highly performant. If improved phase noise is required for a given application, the DDS Sweeper can be configured to allow for an externally-provided reference of the DDS.
## VI Example usage
Finally, we demonstrate the sweeper's utility by performing a rapid series of spectroscopy scans of varying duration. This is accomplished by using a channel from the Sweeper as the reference for an optical Phase-Locked-Loop. This method of stabilization compares the rf beatnote between two lasers. A first (reference) laser is independently stabilized to atomic spectroscopy. The second (controlled) laser receives feedback from the optical PLL to stabilize the beatnote to the frequency set by the rf reference (DDS sweeper). As the rf reference is swept, the controlled laser will be swept. Using an agile rf reference allows for agile, precise changes in the control laser's frequency.
Our demonstration measures Rydberg Electromagnetically-Induced-Transparency (EIT) spectroscopy, using the experimental apparatus and technique described in Ref. [18]. This measurement involves a 780 nm probe laser and a 480 nm coupling laser that counter-propagate through a room-temperature vapor cell filled with natural isotopic abundance rubidium. The probe laser nominally couples the \(|5S_{1/2}\rangle\) ground state with the \({}^{85}\)Rb \(|5P_{3/2}\rangle\) first excited state. Its frequency difference (detuning) relative to the non-Doppler-shifted atomic resonance at \(\delta_{p}\)=0 is controlled via the optical PLL described previously. The coupling laser couples \(|5P_{3/2}\rangle\) to the \(|56D_{5/2}\rangle\) Rydberg state and is kept resonant with this transition. When both lasers are resonant, EIT is established between the ground and Rydberg state, leading to reduced atomic absorption of the probe light. See Figure 7(a) for the level diagram. By using an optical homodyne measurement of the transmitted probe light, we measure the corresponding phase shift of the probe field. The photodiode output is recorded using a 50 \(\Omega\) terminated oscilloscope.
The goal of this demonstration is to empirically determine, in a single continuous measurement, how quickly the probe laser could be swept through resonance without introducing errors or distortion to the dispersive EIT signal. To this end, we program the Sweeper to linearly sweep in frequency between two fixed points at a variable rate, reset to the sweep start point, then dwell for 1 ms to allow the probe laser frequency to settle before the next sweep. For each sweep, the probe detuning spans from \(-30\) to 30 MHz and the sweep rate is adjusted by changing the sweep duration. The optical PLL contains a x16 multiplier of the rf reference, so the total sweep of the DDS output is 3.75 MHz.
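To make the numbers concrete, the short calculation below converts one of the sweep settings used here into approximate DDS sweep parameters. The assumption of ramp rate 1 (one delta word every 8 ns) and the rounding are made only for this illustration; the values are not taken from the firmware.

```python
# Back-of-the-envelope sweep parameters for a 60 MHz probe sweep in 270 us.
F_SYS = 500e6
F_SYNC = F_SYS / 4                 # 125 MHz sync clock
span_at_dds = 60e6 / 16            # x16 optical PLL multiplier -> 3.75 MHz
tau_sweep = 270e-6                 # sweep period from Fig. 7(d)

updates = tau_sweep * F_SYNC       # accumulator updates at ramp rate 1
delta_hz = span_at_dds / updates   # frequency increment per update
delta_word = round(delta_hz * 2**32 / F_SYS)
print(updates, delta_hz, delta_word)   # 33750.0 updates, ~111 Hz, ~954 LSBs
```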
Figure 7(c) shows a single output spectroscopic signal trace as the sweep period, \(\tau_{s}\), was increased from 100 us to 600 ms at multiples of 1, 3, and 6. The lightly shaded regions denote the 1 ms reset window between each sweep where the reverse EIT signal can be seen as the optical PLL moves the probe frequency back to the sweep start at a finite rate due to the instantaneous frequency change. From this measurement, it is clear that the critical sweep rate for which slower sweeps do not distort the signal corresponds to a period between 100 and 300 us.
A second timetrace is shown in Figure 7(d), with sweep times spanning 250 to 300 us in steps of 10 us. These sweeps show the critical sweep rate corresponds to a period of approximately 270 us or 0.22 MHz/us. From repeated measurements, we further determined that this critical rate is borderline distortionary to the signal on average, with random fluctuations in experimental parameters causing the measured critical period to fluctuate on the order of \(\pm 5\) us.
Figure 7(b) shows the sweeps with 600, 30, 1, 0.3, 0.27, and 0.25 ms periods versus probe detuning \(\delta_{p}\). The apparent wider trace widths as the sweep time is increased is an artifact of each sweep being sampled at the same rate, which makes the longer sweeps more susceptible to electronic noise. However, there is also evident a slight broadening of the dispersive feature as the sweep time is increased, attributable to systematic errors in the measurement at longer timescales (such as finite laser frequency stability or varying background electric fields). As the sweep time is decreased, the optical PLL bandwidth is reached. This results in the EIT feature dragging away from resonance and observable local ringing in the frequency, as seen in the 270 and 250 us sweep traces of Figure 7(b).
## VII Conclusion
We have described a custom firmware for the Raspberry Pi Pico microcontroller that can control an Analog Devices AD9959 four-channel DDS. By exploiting the inherent linear sweep accumulators of the DDS, our implementation (the Sweeper) achieves agile rf outputs and fast updates that are synchronous with an external control system. We also demonstrated various types of waveforms that can be produced. Moreover, by referencing an optical phase locked loop to the Sweeper, we highlighted an example application by performing spectroscopic sweeps of a Rydberg EIT signal.
The Sweeper is capable of filling an important role in any application where simple, agile rf sources are ubiquitous.
Figure 6: Phase noise of the DDS output when clocked by the Pico at 100.3, 75.1 and 40.1 MHz output frequencies. The phase noise is dominated by the phase noise of the Pico’s crystal oscillator from which the clock signal is derived.
Quantum science experiments are a noteworthy example where demands for increased scale could make it advantageous compared to contemporary alternatives, such as arbitrary waveform generators. Beyond its performance, the device consists of readily-available cost-effective components that have favorable size, weight, and power characteristics. These features make the Sweeper a candidate for replacing or supplementing more expensive rf sources used in many experiments.
There are multiple potential means of enhancing the DDS Sweeper firmware. The Sweeper currently uses the standard SPI protocol for communications between the Pico and the AD9959. This protocol could be modified to use a custom multi-channel SPI version (supported by the DDS) allowing up to a factor of four decrease in the time required to program the DDS at each update. The serial interface between the control computer and the Pico could also be improved by using a more efficient character encoding than the standard ASCII we have implemented. This could decrease the time required to send instructions to the DDS Sweeper by approximately a factor of two. Finally, the flash memory of the Pico board could be leveraged to increase the total number of instructions that can be programmed in a single sequence.
###### Acknowledgements.
We wish to acknowledge Kevin Cox for helpful discussions. EH recognizes financial support from the National Security Scholars Summer Internship Program (NSSSIP). The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
## Author Declarations
### Conflict of Interest
The authors have no conflicts to disclose.
### Author Contributions
**E. Huegler**: conceptualization (supporting); software (lead); investigation (lead); visualization (equal); writing - original draft (equal); writing - review and editing (equal) **J. C. Hill**: investigation (supporting); writing - review and editing (equal) **D. H. Meyer**: conceptualization (lead); investigation (supporting); visualization (equal), writing - original draft (equal); writing - review and editing (equal)
## Data Availability Statement
The data presented are available from the corresponding author upon reasonable request.
Agile rf sources, a common requirement of control systems for quantum science and technology platforms, are often implemented with direct digital synthesizers (DDS). A DDS fills this role because it enables flexible control of rf signals. The DDS architecture, however, places limits on the implementation of an agile rf source: digital updates must be programmed quickly and precisely, which restricts the agility of the source. Here we describe a microcontroller-based interface that exploits the internal linear sweep accumulators of the DDS. This interface executes continuous linear sweeps together with standard digital updates on a timescale of about 10 $\mu$s. In this way, sweep parameter updates are executed every 8 ns, and the communication and memory overhead are substantially
2305.14473 | Weighted maximal inequalities on hyperbolic spaces | In this work we develop a weight theory in the setting of hyperbolic spaces.
Our starting point is a variant of the well-known endpoint Fefferman-Stein
inequality for the centered Hardy-Littlewood maximal function. This inequality
generalizes, in the hyperbolic setting, the weak $(1,1)$ estimates obtained by
Str\"omberg in "Weak type L1 estimates for maximal functions on noncompact
symmetric spaces", Ann. of Math. 114 (1981), where Str\"omberg answered a
question posed by Stein and Wainger in "Problems in harmonic analysis related
to curvature", Bull. Amer. Math. Soc. 84 (1978). Our approach is based on a
combination of geometrical arguments and the techniques used in the discrete
setting of regular trees by Naor and Tao in "Random martingales and
localization of maximal inequalities", J. Funct. Anal. 259 (2010). This variant
of the Fefferman-Stein inequality paves the road to weighted estimates for the
maximal function for $p>1$. On the one hand, we show that the classical $A_p$
conditions are not the right ones in this setting. On the other hand, we
provide sharp sufficient conditions for weighted weak and strong type $(p,p)$
boundedness of the centered maximal function, when $p>1$. The sharpness is in
the sense that, given $p>1$, we can construct a weight satisfying our
sufficient condition for that $p$, and so it satisfies the weak type $(p,p)$
inequality, but the strong type $(p,p)$ inequality fails. In particular, the
weak type $(q,q)$ fails as well for every $q < p$. | Jorge Antezana, Sheldy Ombrosi | 2023-05-23T19:02:41 | http://arxiv.org/abs/2305.14473v1 | # Weighted maximal inequalities on hyperbolic spaces
###### Abstract.
In this work we develop a weight theory in the setting of hyperbolic spaces. Our starting point is a variant of the well-known endpoint Fefferman-Stein inequality for the centered Hardy-Littlewood maximal function. This inequality generalizes, in the hyperbolic setting, the weak \((1,1)\) estimates obtained by Stromberg in [17] who answered a question posed by Stein and Wainger in [16]. Our approach is based on a combination of geometrical arguments and the techniques used in the discrete setting of regular trees by Naor and Tao in [11]. This variant of the Fefferman-Stein inequality paves the road to weighted estimates for the maximal function for \(p>1\). On the one hand, we show that the classical \(A_{p}\) conditions are not the right ones in this setting. On the other hand, we provide sharp sufficient conditions for weighted weak and strong type \((p,p)\) boundedness of the centered maximal function, when \(p>1\). The sharpness is in the sense that, given \(p>1\), we can construct a weight satisfying our sufficient condition for that \(p\), and so it satisfies the weak type \((p,p)\) inequality, but the strong type \((p,p)\) inequality fails. In particular, the weak type \((q,q)\) fails as well for every \(q<p\).
2020 _Mathematics Subject Classification: 43A85_
_Keywords: Hyperbolic space, Fefferman-Stein inequality, weighted estimates._
J. Antezana was supported by grants: PICT 2019 0460 (ANPCyT), PIP112202101 00954CO (CONICET), 11X829 (UNLP), PID2020-113048GB-I00 (MCI). S. Ombrosi was supported by PID2020-113048GB-I00 (MCI).
operator is the same in both spaces, \(\mathbb{R}^{n}\) and \(\mathcal{H}^{n}\). However, this is not the case in general, and it will be revealed by analyzing weighted estimates. More precisely, to complete the answer to Stein-Wainger's question we study an end-point two-weight Fefferman-Stein inequality for \(M\) in the hyperbolic setting.
### Fefferman Stein type inequality
In the Euclidean setting, the classical Fefferman Stein inequality [4] is
\[w\left(\{x\in\mathbb{R}^{n}\,:\,Mf(x)>\lambda\}\right)\lesssim\frac{1}{\lambda }\int_{\mathbb{R}^{n}}|f(x)|\,Mw(x)dx,\]
where \(w\) is a non-negative measurable function (a weight) defined on \(\mathbb{R}^{n}\), and \(w(E)=\int_{E}w(x)dx\). This is a cornerstone in the theory of weights, and a powerful tool to consider vector-valued extensions of the maximal function \(M\). This result follows from a classical covering lemma, which is not available in the hyperbolic setting. Indeed, in this setting
\[\mu_{n}\Big{(}B_{H}(x,r)\Big{)}=\Omega_{n}\int_{0}^{r}(\sinh t)^{n-1}dt\sim_{ n}\frac{r^{n}}{1+r^{n}}e^{(n-1)r}, \tag{1.1}\]
where \(\Omega_{n}\) is the euclidean \((n-1)\)-volume of the sphere \(S^{n-1}\), and the subindex in the symbol \(\sim\) means that the constant behind this symbol depends only on the dimension \(n\). This exponential behaviour, as well as the metric properties of \(\mathcal{H}^{n}\), make the classical covering arguments fail. In consequence, it is unclear how to decompose the level set \(\{x\in\mathcal{H}^{n}\,:\,Mf(x)>\lambda\}\) in such way that the appropriate averages of \(w\) appear.
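Both regimes implicit in (1.1) can be checked directly from the integral: since \(\sinh t\approx t\) for small \(t\) and \((\sinh t)^{n-1}\approx 2^{-(n-1)}e^{(n-1)t}\) for large \(t\),

\[\Omega_{n}\int_{0}^{r}(\sinh t)^{n-1}dt\approx\Omega_{n}\frac{r^{n}}{n}\ \ (r\to 0)\qquad\text{and}\qquad\Omega_{n}\int_{0}^{r}(\sinh t)^{n-1}dt\approx\frac{\Omega_{n}}{2^{n-1}(n-1)}\,e^{(n-1)r}\ \ (r\to\infty),\]

and, up to dimensional constants, both limits are captured by the single expression \(\frac{r^{n}}{1+r^{n}}e^{(n-1)r}\).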
As in the euclidean case, from now on, given a non-negative measurable function \(w\) (a weight) defined on \(\mathcal{H}^{n}\), let \(w(E)=\int_{E}w(x)d\mu_{n}(x)\) for a measurable set \(E\subset\mathcal{H}^{n}\). On the other hand, given \(s>1\), let
\[M_{s}w=M(w^{s})^{1/s}.\]
Using this notation, our first main result is the following variant of the Fefferman-Stein inequality.
**Theorem 1.1**.: _For every weight \(w\geq 0\) we have that_
\[w\left(\{x\in\mathcal{H}^{n}\,:\,Mf(x)>\lambda\}\right)\leq C_{s,n}\frac{1}{ \lambda}\int_{\mathcal{H}^{n}}|f(x)|M_{s}w(x)d\mu_{n}(x)\]
_where the constant \(C_{s,n}\to+\infty\) when \(s\to 1\)._
This theorem is a generalization of the result of Stromberg [17], and as far as we know, it represents the first result for general weights in the hyperbolic setting. The reader may wonder if this result could hold for \(s=1\). We will show that this result is false in general if \(s=1\) (see Example 4.1 item 1 below). Moreover, our example shows that it is false, even if we put iterations of the maximal function in the right hand side.
In some sense, this is evidence of a stronger singularity of the maximal function in the hyperbolic setting. In Section 4 we will show that there are non-trivial weights satisfying the pointwise condition \(M_{s}(w)(x)\leq Cw(x)\) a.e. \(x\in\mathcal{H}^{n}\). Then, for these weights it holds that the maximal function \(M\) satisfies the weak type \((1,1)\) inequality with respect to the measure \(wd\mu_{n}\).
_About the proof of Theorem 1.1._ For each \(r>0\), let \(A_{r}\) be the averaging operator
\[A_{r}f(x)=\frac{1}{\mu_{n}(B_{H}(x,r))}\int_{B_{H}(x,r)}|f(y)|\,d\mu_{n}(y).\]
Hence \(Mf(x)=\sup_{r\geq 0}A_{r}f(x)\). If \(M^{loc}(f)\) denotes the operator obtained if the supremum is restricted to \(r\leq 2\), and \(M^{far}(f)\) denotes the operator obtained if the supremum is taken over all \(r\geq 2\), then
\[Mf(x)\leq M^{loc}f(x)+M^{far}f(x).\]
On the one hand, the operator \(M^{loc}\) behaves as in the Euclidean setting. The main difficulties appear in the estimates for \(M^{far}\). In [17], Stromberg uses a pointwise inequality obtained by Clerc and Stein in [3]. This pointwise inequality reduces the problem to getting a good estimate for a convolution operator associated with a \(k\)-bi-invariant kernel \(\tau\), which in the hyperbolic setting is \(\tau(z,w)=(1+\mu_{n}(B(0,d(z,w))))^{-1}\). A similar approach was used by Li and Lohoue in [9] to obtain sharp constants with respect to the dimension \(n\). However, Stromberg's argument strongly uses the homogeneity of the measure \(\mu_{n}\). So, it is not clear that one can apply a similar idea in the general case of an arbitrary weight \(w\). This makes it necessary to look for a more flexible approach.
Our general strategy is based on the scheme used by Naor and Tao in [11], where the weak type \((1,1)\) of the centered maximal function in the discrete setting of rooted \(k\)-ary trees is obtained. The flexibility of this approach was shown in [13] and [14], where the authors used it to get weighted estimates in the same discrete setting. It is well known that regular trees can be thought of as discrete models of the hyperbolic space. Moreover, this kind of heuristic was used by Cowling, Meda and Setti in [2], but the other way round, that is, in that work the authors used Stromberg's approach to prove weak estimates in the setting of trees. A novelty of our paper is to bring ideas from the discrete setting to the continuous hyperbolic context. Adapting this strategy to a continuous context requires overcoming certain obstacles. On the one hand, the combinatorial arguments used in the discrete setting of trees are no longer available, so they have to be replaced by geometrical arguments. In this sense, the
following estimate (Proposition 2.1)
\[\mu_{n}\Big{(}B_{H}(y,s)\cap B_{H}(x,r)\Big{)}\leq C_{n}e^{\frac{n-1}{2}(\,r+s-d _{n}(x,y)\,)}\]
is behind many estimates, as well as some examples. It will also play a key role in the inequality
\[\int_{F}A_{r}(\chi_{E})(y)w(y)d\mu_{n}(y)\leq c_{s,n}\ e^{-(n-1)\frac{r}{s^{ \prime}+1}}w(F)^{\frac{1}{s^{\prime}+1}}M_{s}w(E)^{\frac{s^{\prime}}{s^{\prime }+1}},\]
that is very important to prove Theorem 1.1. In this inequality, \(E\) and \(F\) are measurable subsets of \(\mathcal{H}^{n}\), \(s>1\), \(s^{\prime}=\frac{s}{s-1}\), and \(r\) is a positive integer. On the other hand, in our setting the measure is not atomic. This leads us to make some estimations on some convenient averages of the original function instead of the function itself (see for instance Lemma 3.3).
### Weighted estimates in the hyperbolic space for \(p>1\)
In the Euclidean case, the weak and strong boundedness of the maximal operator \(M\) in weighted \(L^{p}\) spaces is completely characterized by the \(A_{p}\) condition defined in the seminal work of Muckenhoupt [10]:
\[\sup\left(\frac{1}{|B|}\int_{B}w\,dx\right)\left(\frac{1}{|B|}\int_{B}w^{- \frac{1}{p-1}}\,dx\right)^{p-1}<\infty, \tag{1.2}\]
where the supremum is taken over all the Euclidean balls. Different types of weighted inequalities were proved for measures such that the measure of the balls grows polynomially with respect to the radius (see for instance [5], [12], [15], [18], and [19]). However, the techniques used in those works cannot be applied in our framework because of the geometric properties of \(\mathcal{H}^{n}\) and the exponential growth of the measures of balls with respect to the radius. Unweighted strong \((p,p)\) inequalities for the maximal function were proved for \(p>1\) by Clerc and Stein in [3]. Moreover, singular integral operators were also studied on symmetric spaces by Ionescu ([6, 7]).
Roughly speaking, in the hyperbolic spaces, the behaviour of the maximal function is a kind of combination of what happens in the Euclidean case and in the trees. More precisely, recall that we have defined the operators
\[M^{loc}f(x)=\sup_{0<r\leq 2}A_{r}f(x)\quad\text{and}\quad M^{far}f(x)=\sup_{2 <r}A_{r}f(x).\]
As we have already mentioned, the operator \(M^{loc}\) behaves as if it were defined in the Euclidean space. So, it is natural to expect that its boundedness could be controlled by a kind of "local \(A_{p}\) condition". We say that a weight \(w\) belongs to \(A_{p,loc}(\mathcal{H}^{n})\) if
\[\sup_{0<r(B)\leq 1}\left(\frac{1}{\mu_{n}(B)}\int_{B}w\,d\mu_{n}\right)\left(\frac{1}{\mu_{n}(B)}\int_{B}w^{-\frac{1}{p-1}}\,d\mu_{n}\right)^{p-1}<\infty.\]
The situation is very different for large values of the radius, when the hyperbolic structure comes into play. For instance, it is not difficult to show that the natural \(A_{p}\) condition is too strong for the boundedness of \(M^{far}\) in the hyperbolic setting. Indeed, in Example 4.1 we show a weight for which the maximal function is bounded in all the \(L^{p}\)-spaces, but which does not belong to any (hyperbolic) \(A_{p}\) class. This suggests following a different approach. Inspired by the condition introduced in [14] in the case of \(k\)-ary trees, we are able to define sufficient conditions to obtain weak and strong estimates for the maximal function with respect to a weight \(w\). Our main result in this direction is the following:
**Theorem 1.2**.: _Let \(p>1\) and \(w\) a weight. Suppose that_
* \(w\in A_{p,loc}(\mathcal{H}^{n})\)_._
* _There exist_ \(0<\beta<1\) _and_ \(\beta\leq\alpha<p\) _such that for every_ \(r\geq 1\) _we have_ (1.3) \[\int_{F}A_{r}(\chi_{E})(y)w(y)d\mu_{n}(y)\lesssim e^{(n-1)r(\beta-1)}w(E)^{ \frac{\alpha}{p}}w(F)^{1-\frac{\alpha}{p}},\] _for any pair of measurable subsets_ \(E,F\subseteq\mathcal{H}^{n}\)_._
_Then_
\[\|Mf\|_{L^{p,\infty}(w)}\lesssim\|f\|_{L^{p}(w)}. \tag{1.4}\]
_Furthermore, if \(\beta<\alpha\) then for each fixed \(\gamma\geq 0\) we have_
\[\sum_{j=1}^{\infty}j^{\gamma}\|A_{j}f\|_{L^{p}(w)}\lesssim\|f\|_{L^{p}(w)}. \tag{1.5}\]
_And therefore_
\[\|Mf\|_{L^{p}(w)} \lesssim\|f\|_{L^{p}(w)},\] \[\|Mf\|_{L^{p^{\prime}}(\sigma)} \lesssim\|f\|_{L^{p^{\prime}}(\sigma)},\]
_where \(\sigma=w^{1-p^{\prime}}\) and \(p^{\prime}=\frac{p}{p-1}\)._
**Remark 1.3**.: We observe that the estimate (1.5) in the previous theorem is stronger than the boundedness of the maximal function \(M^{far}(f)\). In particular, it implies that if an operator \(T\) satisfies the pointwise estimate
\[|Tf(x)|\lesssim M^{loc}(|f|)(x)+\sum_{j\geq 1}j^{\gamma}A_{j}(|f|)(x),\]
for some \(\gamma\geq 0\), then the conditions on the weight \(w\) requested in Theorem 1.2 are sufficient for the boundedness of \(T\) in the space \(L^{p}(w)\) with \(p>1\). In particular, this generalizes, in the hyperbolic setting, the unweighted estimates obtained by Clerc and Stein in [3, Thm. 2] for the maximal function.
**Remark 1.4**.: It is not clear whether or not the condition (1.3) for \(\alpha=\beta\) is a necessary condition for the weak type \((p,p)\) boundedness of \(M\) with respect to \(w\). However, the condition is sharp in the following sense: if \(\beta=\alpha\) we can construct a weight for which the weak type \((p,p)\) holds, but the strong type \((p,p)\) fails. Consequently, the weak type \((q,q)\) fails as well for every \(q<p\) (see Example 4.1 (2)). In particular, this shows that, unlike the classical case, in the hyperbolic context the weak \((p,p)\) inequality with respect to \(w\) of the maximal operator is not equivalent to the strong estimate for \(p>1\).
The condition (1.3) may not be easy to check. For this reason, we consider the following result, which provides a more tractable condition. To simplify the statement, given a positive integer \(j\), let
\[\mathcal{C}_{j}=B(0,j)\setminus B(0,j-1).\]
Observe that the sets considered in the condition in (1.3) may have non-empty intersection with several different levels \(\mathcal{C}_{j}\). The condition in the following proposition studies the behavior of the weight at each level.
**Proposition 1.5**.: _Let \(1<p<\infty\), and let \(w\) be a weight such that there exists a real number \(\delta<1\), so that for every \(j,l,r\geq 1\) integers with the restriction \(|l-j|\leq r\), we have that_
\[w(\mathcal{C}_{l}\cap B(x,r))\lesssim e^{(n-1)\frac{r+l-j}{2}(p-\delta)}e^{(n- 1)r\delta}w(x),\quad\text{for a.e. }x\in\mathcal{C}_{j}. \tag{1.6}\]
_Then, the condition (1.3) in Theorem 1.2 holds with \(\beta=\alpha=\frac{p}{p-\delta+1}\)._
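Note that this pair is admissible in Theorem 1.2: since \(\delta<1<p\),

\[0<\frac{p}{p-\delta+1}<1\iff\delta<1\qquad\text{and}\qquad\frac{p}{p-\delta+1}<p\iff\delta<p,\]

so \(\beta=\alpha=\frac{p}{p-\delta+1}\) indeed satisfies \(0<\beta<1\) and \(\beta\leq\alpha<p\).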
Combining Theorem 1.2, Remark 1.3 and Proposition 1.5 we obtain the following corollary.
**Corollary 1.6**.: _Let \(1\leq p<\infty\), and \(w\in A_{p,loc}(\mathcal{H}^{n})\) such that there exists a real number \(\delta<1\) such that for every \(j,l,r\geq 1\) integers with the restriction \(|l-j|\leq r\), we have that_
\[w(\mathcal{C}_{l}\cap B(x,r))\lesssim e^{(n-1)\frac{r+l-j}{2}(p-\delta)}e^{(n -1)r\delta}w(x),\quad\text{for a.e. }x\in\mathcal{C}_{j}.\]
_Then_
\[\|Mf\|_{L^{p,\infty}(w)}\lesssim\|f\|_{L^{p}(w)}.\]
_Furthermore, if \(p<q\) we have_
\[\|Tf\|_{L^{q}(w)}\lesssim\|f\|_{L^{q}(w)},\]
_for every operator \(T\) satisfying the pointwise estimate_
\[|Tf(x)|\lesssim M^{loc}(|f|)(x)+\sum_{j\geq 1}j^{\gamma}A_{j}(|f|)(x),\]
_for some \(\gamma\geq 0\)._
### Organization of the paper
This paper is organized as follows. In Section 2 we prove an estimate on the measure of the intersection of two hyperbolic balls. Section 3 is devoted to the proof of the main results of this paper. The proof of Theorem 1.1 is contained in Subsection 3.1, while the proof of Theorem 1.2 is contained in Subsection 3.2. Section 3 concludes with the proof of Proposition 1.5. Section 4 contains examples that clarify several points previously mentioned. Finally, the paper concludes with an appendix on the ball model of the hyperbolic space.
## 2. Geometric results
### The hyperbolic space
Although the precise realisation of hyperbolic space is not important for our purposes, for the sake of concreteness, throughout this article we will consider the ball model. Recall that \(\mu_{n}\) denotes the volume measure, and by \(d_{n}\) we will denote the hyperbolic distance. A brief review of some basic facts about this model and its isometries is left to Appendix A.
### Two results on the intersection of balls in the hyperbolic space
This subsection is devoted to proving the following two geometric results, which will be very important in the sequel.
**Proposition 2.1**.: _Let \(B_{H}(y,s)\) and \(B_{H}(x,r)\) be two balls in \(\mathcal{H}_{n}\). Then_
\[\mu_{n}\Big{(}B_{H}(y,s)\cap B_{H}(x,r)\Big{)}\leq C_{n}e^{\frac{n-1}{2}(\,r+ s-d_{n}(x,y)\,)},\]
_where \(C_{n}\) is a constant that only depends on the dimension._
Proof.: We can assume that \(B_{H}(y,s)\cap B_{H}(x,r)\neq\varnothing\). On the other hand, since the estimate is trivial if \(r\) and \(s\) are less than a fixed constant, we can also assume that \(r,s>2\). Without loss of generality, we can assume that \(y=0\) and \(x=(d,0,\dots,0)\) with \(d=d_{n}(x,y)\). Note that we can also assume that \(d>0\), otherwise the estimate is trivial. The geodesic passing through the centers is the segment
\[L=\{(t,0,\dots,0):\ t\in(-1,1)\}.\]
Since the balls are symmetric with respect to this geodesic line, the intersection is also symmetric with respect to this line. Let \(O_{L}(n-1)\) be the subgroup of the orthogonal group \(O(n)\) defined by
\[O_{L}(n-1)=\{A\in O(n):\ A\text{ leaves invariant the geodesic line }L\},\]
then the intersection is invariant by the action of \(O_{L}(n-1)\). Moreover, the subgroup \(O_{L}(n-1)\) acts transitively in the intersection of the boundaries \(\partial B_{H}(0,s)\cap\partial B_{H}(x,r)\)
which turns out to be an \((n-2)\)-sphere. Let \(S\) denote this intersection of boundaries, and consider the point \(m\in L\) that satisfies
\[d_{n}(0,m)=\frac{s+d-r}{2}\quad\Longleftrightarrow\quad d_{n}(m,x)=\frac{r+d-s} {2}.\]
Since \(L\) is a symmetry axis for \(S\), the points in \(S\) are at the same distance to the point \(m\). Let \(\rho\) denote this distance. The volume of the ball of radius \(\rho\) can be estimated using the hyperbolic law of cosines. Take \(q\in S\), and consider the two dimensional hyperbolic (also linear) plane \(P\) containing \(q\) and \(L\). Let us restrict our attention to this hyperbolic plane (see Figure 1).
Since \(\angle(0,m,q)+\angle(q,m,x)=\pi\), one of them is greater or equal to \(\frac{\pi}{2}\). Suppose that the angle \(\theta=\angle(0,m,q)\) is greater than \(\frac{\pi}{2}\), and consider the geodesic triangle whose vertices are \(0\), \(m\) and \(q\) (see Figure 2).
Since \(\cos(\theta)\) is non-positive, we have that
\[\cosh(s) =\cosh\left(\frac{s+d-r}{2}\right)\cosh(\rho)-\sinh\left(\frac{s+d -r}{2}\right)\sinh(\rho)\cos(\theta)\] \[\geq\cosh\left(\frac{s+d-r}{2}\right)\cosh(\rho).\]
Figure 1. Intersection of the balls with the two dimensional plane \(P\).
Figure 2. Geodesic triangle.
Therefore, we get the following estimate
\[\frac{e^{\rho}}{2}\leq\cosh(\rho)\leq\frac{\cosh(s)}{\cosh\left(\frac{s+d-r}{2}\right)}\leq 2e^{\frac{s+r-d}{2}}.\]
By equation (1.1), we get that
\[\operatorname{Vol}\left(B_{H}(m,\rho)\right)=\Omega_{n}\int_{0}^{\rho}(\sinh t)^{n-1}dt\leq K_{n}e^{(n-1)\rho}\leq 4^{n-1}K_{n}e^{(n-1)\left(\frac{s+r-d}{2}\right)}. \tag{2.1}\]
Now, it is enough to prove that \(B_{H}(0,s)\cap B_{H}(x,r)\subseteq B_{H}(m,\rho)\). Since the intersection is an open connected set, it is enough to prove that the boundary \(\partial B_{H}(m,\rho)\) does not meet the intersection. So, take \(p\in\partial B_{H}(m,\rho)\). By a continuity argument, we can assume that \(p\notin L\). Then, as before, consider the plane \(P\) generated by \(p\) and the geodesic \(L\). The geodesic \(L\) divides this plane into two parts. Let \(q\) be the unique point in \(P\cap S\) in the same half-plane as \(p\), and suppose that \(\theta_{p}=\angle(p,m,x)\) is greater than or equal to \(\theta_{q}=\angle(q,m,x)\) (see Figure 3).
If \(t=d_{n}(x,p)\), since the cosine is decreasing in \((0,\pi)\) we get that
\[\cosh(t) =\cosh\left(\frac{r+d-s}{2}\right)\cosh(\rho)-\sinh\left(\frac{r+ d-s}{2}\right)\sinh(\rho)\cos(\theta_{p})\] \[\geq\cosh\left(\frac{r+d-s}{2}\right)\cosh(\rho)-\sinh\left(\frac{ r+d-s}{2}\right)\sinh(\rho)\cos(\theta_{q})\] \[=\cosh(r).\]
In consequence, \(t\geq r\) and therefore the point \(p\notin B_{H}(x,r)\). If \(\angle(p,m,x)\) is smaller than \(\angle(q,m,x)\), it holds that \(\angle(p,m,0)\) is greater than \(\angle(q,m,0)\). Hence, the same argument, replacing the vertex \(x\) by the vertex \(0\), shows that \(p\notin B_{H}(0,s)\). This concludes the proof.
The following is a corollary of the proof of the previous proposition.
Figure 3. Comparison of triangles.
**Corollary 2.2**.: _Let \(B_{H}(0,s)\) and \(B_{H}(x,r)\) be two balls in \(\mathcal{H}_{n}\) such that their intersection has positive measure. If \(\rho_{0}=\frac{1}{2}(\,r+s-d_{n}(0,x)\,)\), then_
\[B_{H}(m,\rho_{0})\subseteq B_{H}(0,s)\cap B_{H}(x,r)\subseteq B_{H}(m,\rho_{0}+ 1),\]
_where \(m=\alpha x\), and \(\alpha=\tanh\Big{(}\frac{s+d-r}{2}\Big{)}\)._
## 3. Proof of Main results
First of all, we will prove the following arithmetical lemma, which is a slight generalization of a result contained in [14].
**Lemma 3.1**.: _Let \(1\leq p<\infty\), \(-p<\delta<1\), and \(\kappa>1\). Let \(\{c_{j}\}_{j=0}^{\infty}\) and \(\{d_{l}\}_{l=0}^{\infty}\) be sequences of non-negative real numbers satisfying_
\[\sum_{j=0}^{\infty}\kappa^{(p-\delta)j}c_{j}=A\quad\text{and}\quad\sum_{l=0}^ {\infty}\kappa^{l}d_{l}=B.\]
_Then, for every integer \(r\geq 1\) we have that_
\[\sum_{j,l\in\mathbb{N}\cup\{0\}}\min\Big{\{}\kappa^{\delta r}\kappa^{\frac{(l +j+r)(p-\delta)}{2}}c_{j},\kappa^{\frac{l+j+r}{2}}d_{l}\Big{\}}\leq c_{p, \delta,\kappa}\,\kappa^{\frac{p}{p-\delta+1}r}A^{\frac{1}{p-\delta+1}}B^{1- \frac{1}{p-\delta+1}}. \tag{3.1}\]
Proof.: To prove this inequality, let \(\rho\) be a real parameter to be chosen later, and argue as follows
\[\sum_{j,l\in\mathbb{N}\cup\{0\}}\min\Big{\{}\kappa^{\delta r}\kappa^{\frac{(l+j+r)(p-\delta)}{2}}c_{j},\kappa^{\frac{l+j+r}{2}}d_{l}\Big{\}}\] \[\leq\kappa^{\frac{p+\delta}{2}r}\sum_{\begin{subarray}{c}l,j\in\mathbb{N}\cup\{0\}\\ l<j+\rho\end{subarray}}\kappa^{\frac{(l+j)(p-\delta)}{2}}c_{j}+\kappa^{\frac{r}{2}}\sum_{\begin{subarray}{c}l,j\in\mathbb{N}\cup\{0\}\\ l\geq j+\rho\end{subarray}}\kappa^{\frac{l+j}{2}}d_{l}\] \[\lesssim\kappa^{\frac{p+\delta}{2}r}\sum_{j=0}^{\infty}\kappa^{\frac{(j+\rho+j)(p-\delta)}{2}}c_{j}+\kappa^{\frac{r}{2}}\sum_{l=0}^{\infty}\kappa^{l-\frac{\rho}{2}}d_{l}\] \[=\kappa^{\frac{p+\delta}{2}r}\kappa^{\frac{\rho(p-\delta)}{2}}\sum_{j=0}^{\infty}\kappa^{j(p-\delta)}c_{j}+\kappa^{\frac{r}{2}}\kappa^{-\frac{\rho}{2}}\sum_{l=0}^{\infty}\kappa^{l}d_{l}\] \[=\kappa^{\frac{p+\delta}{2}r}\kappa^{\frac{\rho(p-\delta)}{2}}A+\kappa^{\frac{r}{2}}\kappa^{-\frac{\rho}{2}}B.\]
Choosing \(\rho=\frac{2\log_{\kappa}\big{(}\frac{B}{A}\big{)}}{p-\delta+1}-\frac{(p+ \delta-1)r}{p-\delta+1}\), it follows that
\[\kappa^{\frac{p+\delta}{2}r}\kappa^{\frac{\rho(p-\delta)}{2}}A+\kappa^{\frac {r}{2}}\kappa^{-\frac{\rho}{2}}B\leq c_{p,\delta}\kappa^{\frac{p}{p-\delta+1}r }A^{\frac{1}{p-\delta+1}}B^{1-\frac{1}{p-\delta+1}},\]
which concludes the proof.
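The value of \(\rho\) above is precisely the one that balances the two terms in the last line: equating \(\kappa^{\frac{p+\delta}{2}r}\kappa^{\frac{\rho(p-\delta)}{2}}A=\kappa^{\frac{r}{2}}\kappa^{-\frac{\rho}{2}}B\) and solving for \(\rho\) gives the stated expression. Substituting it back, the exponent of the first term becomes

\[\frac{p+\delta}{2}r+\frac{p-\delta}{2}\rho=\frac{p}{p-\delta+1}\,r+\frac{p-\delta}{p-\delta+1}\log_{\kappa}\Big{(}\frac{B}{A}\Big{)},\]

so each of the two (equal) terms is \(\kappa^{\frac{p}{p-\delta+1}r}A^{\frac{1}{p-\delta+1}}B^{1-\frac{1}{p-\delta+1}}\), which is the right-hand side of (3.1) up to the constant \(c_{p,\delta,\kappa}\).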
### Proof of Theorem 1.1
The first step consists in proving that Proposition 2.1 leads to the following result. This is a key point in order to adapt the scheme used in the discrete cases of [11] or [13]. Recall that, given \(r\geq 0\), we denote by \(A_{r}\) the averaging operator
\[A_{r}f(x)=\frac{1}{\mu_{n}(B_{H}(x,r))}\int_{B_{H}(x,r)}|f(y)|\,d\mu_{n}(y).\]
**Lemma 3.2**.: _Let \(E,F\) be measurable subsets of \(\mathcal{H}^{n}\), let \(s>1\), and let \(r\) be a positive integer. Then_
\[\int_{F}A_{r}(\chi_{E})(y)w(y)d\mu_{n}(y)\leq c_{s,n}e^{-(n-1)\frac{r}{s^{ \prime}+1}}w(F)^{\frac{1}{s^{\prime}+1}}M_{s}w(E)^{\frac{s^{\prime}}{s^{\prime }+1}},\]
_where \(s^{\prime}=\frac{s}{s-1}\) and \(c_{s,n}\) is a constant depending on \(s\) and the dimension \(n\)._
Proof.: We divide the hyperbolic space \(\mathcal{H}^{n}\) into level sets as follows
\[\mathcal{H}^{n}=\bigcup_{j=1}^{\infty}\mathcal{C}_{j},\]
where \(\mathcal{C}_{j}=\{x\in\mathcal{H}^{n}:j-1\leq d_{H}(0,x)<j\}\). Let \(E_{j}=E\cap\mathcal{C}_{j}\) and \(F_{\ell}=F\cap\mathcal{C}_{\ell}\). Hence, we can write
\[I:=\int_{F}A_{r}(\chi_{E})(y)w(y)d\mu_{n}(y)=\sum_{\ell,j\geq 0} \int_{F_{\ell}}A_{r}(\chi_{E_{j}})(y)w(y)d\mu_{n}(y). \tag{3.2}\]
Now, we will estimate the integrals
\[I_{j,\ell}:=\int_{F_{\ell}}A_{r}(\chi_{E_{j}})(y)w(y)d\mu_{n}(y)\]
in two different ways. On the one hand, given \(x\in E_{j}\), let
\[\Omega^{x}_{j,\ell}=\{y\in F_{\ell}:\ d(x,y)\leq r\}.\]
Then, by Proposition 2.1
\[\mu_{n}(\Omega^{x}_{j,\ell})\leq C_{n}e^{\frac{n-1}{2}(\ell+r-j)}.\]
Using this estimate, we obtain that
\[I_{j,\ell} =e^{-(n-1)r}\int_{F_{\ell}}\int_{B(y,r)}\chi_{E_{j}}(x)\,d\mu_{n} (x)w(y)d\mu_{n}(y)\] \[=e^{-(n-1)r}\int_{E_{j}}\int_{\Omega^{x}_{j,\ell}}w(y)d\mu_{n}(y) \,d\mu_{n}(x)\] \[=e^{-(n-1)r}\int_{E_{j}}\left(\int_{\Omega^{x}_{j,\ell}}d\mu_{n} \right)^{\frac{1}{s^{\prime}}}\left(\int_{B_{H}(x,r)}w^{s}(y)\,d\mu_{n}(y) \right)^{\frac{1}{s}}\,d\mu_{n}(x)\] \[\leq C_{n}e^{-(n-1)r}e^{\frac{n-1}{2s^{\prime}}(\ell+r-j)}\,e^{ \frac{(n-1)r}{s}}M_{s}(w)(E_{j}).\]
On the other hand, if \(y\in F_{\ell}\), let \(\Omega_{j,\ell}^{y}=\{x\in E_{j}:\ d(x,y)\leq r\}\). Then, by Proposition 2.1
\[I_{j,\ell} =e^{-(n-1)r}\int_{F_{\ell}}\int_{\Omega_{j,\ell}^{y}}d\mu_{n}(x)\,w (y)d\mu_{n}(y)\] \[\leq C_{n}e^{-(n-1)r}e^{\frac{n-1}{2}(j+r-\ell)}\,w(F_{\ell}).\]
In consequence
\[I_{j,\ell}\leq C_{n}e^{-(n-1)r}\min\Big{\{}e^{\frac{n-1}{2s^{\prime}}(\ell+r-j )}\,e^{\frac{(n-1)r}{s}}M_{s}(w)(E_{j}),e^{\frac{n-1}{2}(j+r-\ell)}\,w(F_{\ell })\Big{\}},\]
and
\[I\leq C_{n}e^{-(n-1)r}\sum_{|\ell-j|\leq r+2}\min\Big{\{}e^{\frac{n-1}{2s^{ \prime}}(\ell+r-j)}\,e^{\frac{(n-1)r}{s}}M_{s}(w)(E_{j}),e^{\frac{n-1}{2}(j+r -\ell)}\,w(F_{\ell})\Big{\}}.\]
Now, define \(c_{j}=\frac{M_{s}w(E_{j})}{e^{(n-1)\frac{j}{s^{\prime}}}}\) and \(d_{l}=\frac{w(F_{l})}{e^{(n-1)l}}\). We have that
\[\sum_{j=0}^{\infty}e^{(n-1)\frac{j}{s^{\prime}}}c_{j}=M_{s}w(E)\quad\text{and}\quad\sum_{l=0}^{\infty}e^{(n-1)l}d_{l}=w(F), \tag{3.3}\]
and
\[\min\Bigl{\{}e^{\frac{n-1}{2s^{\prime}}(\ell+r-j)}\,e^{\frac{(n-1 )r}{s}}M_{s}(w)(E_{j}),e^{\frac{n-1}{2}(j+r-\ell)}\,w(F_{\ell})\Bigr{\}}\] \[\qquad=\min\Big{\{}e^{\frac{(n-1)r}{s}}e^{(n-1)\frac{(l+j+r)}{2s^ {\prime}}}c_{j},e^{(n-1)\frac{l+j+r}{2}}d_{l}\Big{\}}\]
Then we have that
\[I\lesssim e^{-(n-1)r}\sum_{l,j\in\mathbb{N}\cup\{0\}}\min\left\{e^{\frac{(n-1 )r}{s}}e^{(n-1)\frac{(l+j+r)}{2s^{\prime}}}c_{j},e^{(n-1)\frac{l+j+r}{2}}d_{l }\right\}. \tag{3.4}\]
Now, if we choose \(\delta=\frac{1}{s}\) and \(p=1\) (then \(p-\delta=\frac{1}{s^{\prime}}\)) we have that
\[\min\left\{e^{\frac{(n-1)r}{s}}e^{(n-1)\frac{(l+j+r)}{2s^{\prime}}}c_{j},e^{( n-1)\frac{l+j+r}{2}}d_{l}\right\}\]
is equal to
\[\min\left\{e^{(n-1)\delta r}e^{(n-1)\frac{(l+j+r)(p-\delta)}{2}}c_{j},e^{(n-1) \frac{l+j+r}{2}}d_{l}\right\}.\]
Therefore, if \(\kappa=e^{n-1}\) and we take into account (3.3), applying Lemma 3.1 in (3.4) we get
\[I\lesssim e^{-(n-1)\frac{r}{s^{\prime}+1}}w(F)^{\frac{1}{s^{\prime}+1}}M_{s}w( E)^{\frac{s^{\prime}}{s^{\prime}+1}}.\]
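Spelling out the last step: with \(p=1\), \(\delta=\frac{1}{s}\) and \(\kappa=e^{n-1}\) we have \(p-\delta+1=1+\frac{1}{s^{\prime}}=\frac{s^{\prime}+1}{s^{\prime}}\), hence \(\frac{1}{p-\delta+1}=\frac{p}{p-\delta+1}=\frac{s^{\prime}}{s^{\prime}+1}\). Therefore Lemma 3.1, applied with \(A=M_{s}w(E)\) and \(B=w(F)\), bounds the double sum in (3.4) by \(c_{s,n}\,e^{(n-1)\frac{s^{\prime}}{s^{\prime}+1}r}M_{s}w(E)^{\frac{s^{\prime}}{s^{\prime}+1}}w(F)^{\frac{1}{s^{\prime}+1}}\), and multiplying by the prefactor \(e^{-(n-1)r}\) leaves exactly the factor \(e^{-(n-1)\frac{r}{s^{\prime}+1}}\) in the claimed estimate.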
We can use Lemma 3.2 to obtain a distributional estimate on \(A_{r}\).
**Lemma 3.3**.: _Let \(r\geq 1\) and \(\lambda>0\). Then_
\[w\left(\{A_{r}(A_{1}f)\geq\lambda\}\right)\lesssim c_{s}\sum_{k=0}^{r}\left(\frac{e^{(n-1)k}}{e^{(n-1)r}}\right)^{\frac{1}{2s^{\prime}}}e^{(n-1)k}M_{s}w\left(\left\{|A_{2}f|\geq\eta e^{(n-1)k}\lambda\right\}\right),\]
_where \(c_{s}\) depends only on \(s\) and \(c_{s}\to\infty\) when \(s\to 1\)._
Proof of Lemma 3.3.: Let \(f_{1}=A_{1}f\). We bound
\[f_{1}\leq\frac{1}{e}+\sum_{k=0}^{r}e^{(n-1)k}\chi_{E_{k}}+f_{1}\chi_{\{f_{1} \geq\frac{1}{2}e^{(n-1)r}\}}, \tag{3.5}\]
where \(E_{k}\) is the sublevel set
\[E_{k}=\left\{e^{(n-1)(k-1)}\leq f_{1}<e^{(n-1)k}\right\}. \tag{3.6}\]
Hence
\[A_{r}f_{1}\leq\frac{1}{e}+\sum_{k=0}^{r}e^{(n-1)k}A_{r}\left(\chi_{E_{k}} \right)+A_{r}\left(f_{1}\chi_{\{f_{1}\geq\frac{1}{2}e^{(n-1)r}\}}\right). \tag{3.7}\]
Given any \(\lambda>0\)
\[w\left(\left\{A_{r}\left(f_{1}\chi_{\{f_{1}\geq e^{(n-1)r}\}} \right)>\lambda\right\}\right) \leq w\left(\left\{A_{r}\left(f_{1}\chi_{\{f_{1}\geq e^{(n-1)r}\} }\right)\neq 0\right\}\right)\] \[\leq w\left(\left\{x:B_{H}(r,x)\cap\{f_{1}\geq e^{(n-1)r}\}\neq \varnothing\right\}\right).\]
Take \(x\) such that \(B_{H}(x,r)\cap\{f_{1}\geq e^{(n-1)r}\}\neq\varnothing\), and let \(y\) be an element of this intersection. It is not difficult to see that
\[B_{H}(y,1)\subseteq B_{H}(x,r+1)\cap\big{\{}f_{2}\geq ce^{(n-1)r}\big{\}},\]
where \(f_{2}=A_{2}f\) and \(c=\frac{\mu_{n}(B(0,1))}{\mu_{n}(B(0,2))}\). Therefore
\[w\left(\left\{x:B_{H}(r,x)\cap\{f_{1}\geq e^{(n-1)r}\}\neq \varnothing\right\}\right) \leq w\Big{(}\Big{\{}A_{r+1}\left(\chi_{\{f_{2}\geq ce^{(n-1)r}\} }\right)>\frac{1}{c_{1}e^{(n-1)r}}\Big{\}}\Big{)}\] \[\leq c_{1}e^{(n-1)r}M(w)\left(\chi_{\{f_{2}\geq ce^{(n-1)r}\}} \right).\]
On the other hand, let \(\beta\in(0,1)\) be a parameter that will be chosen later. Note that if
\[\sum_{k=0}^{r}e^{(n-1)k}A_{r}\left(\chi_{E_{k}}\right)\geq\frac{1}{e},\]
then we necessarily have some integer \(0\leq k\leq r\) for which
\[A_{r}\left(\chi_{E_{k}}\right)\geq\frac{e^{(n-1)\beta}-1}{e^{(n-1)(k+2)}}\left( \frac{e^{(n-1)k}}{e^{(n-1)r}}\right)^{\beta}.\]
Indeed, otherwise we have that
\[\frac{1}{e} \leq\sum_{k=0}^{r}e^{(n-1)k}A_{r}\left(\chi_{E_{k}}\right)<\frac{e^ {(n-1)\beta}-1}{e^{(n-1)(\beta r+2)}}\sum_{k=0}^{r}e^{(n-1)\beta k}\] \[=\frac{e^{(n-1)\beta}-1}{e^{(n-1)(\beta r+2)}}\ \frac{e^{(n-1)\beta(r+1)}-1}{e^{(n-1)\beta}-1}<\frac{1}{e},\]
which is a contradiction. Thus
\[w\left(A_{r}f_{1}\geq 1\right)\leq\sum_{k=0}^{r}w(F_{k})+c_{1}e^{(n-1)r}M(w) \left(\chi_{\{f_{2}\geq ce^{(n-1)r}\}}\right),\]
where
\[F_{k}=\left\{A_{r}\left(\chi_{E_{k}}\right)\geq\frac{e^{(n-1)\beta}-1}{e^{(n-1 )(k+2)}}\left(\frac{e^{(n-1)k}}{e^{(n-1)r}}\right)^{\beta}\right\}.\]
Note that \(F_{k}\) has finite measure, and
\[w(F_{k})\frac{e^{(n-1)\beta}-1}{e^{(n-1)(k+2)}}\left(\frac{e^{(n-1)k}}{e^{(n-1 )r}}\right)^{\beta}\leq\int_{F_{k}}A_{r}(\chi_{E_{k}})wd\mu_{n}(x).\]
On the other hand, by Lemma 3.2,
\[\int_{F_{k}}A_{r}(\chi_{E_{k}})wd\mu_{n}(x)\leq c_{s}e^{-(n-1)\frac{r}{s^{ \prime}+1}}w(F_{k})^{\frac{1}{s^{\prime}+1}}M_{s}w(E_{k})^{\frac{s^{\prime}}{ s^{\prime}+1}}.\]
Hence
\[w(F_{k})\frac{e^{(n-1)\beta}-1}{e^{(n-1)(k+2)}}\left(\frac{e^{(n-1)k}}{e^{(n-1)r}}\right)^{\beta}\leq c_{s}e^{-(n-1)\frac{r}{s^{\prime}+1}}w(F_{k})^{\frac{1}{s^{\prime}+1}}M_{s}w(E_{k})^{\frac{s^{\prime}}{s^{\prime}+1}}.\]
So, choosing \(\beta=\frac{1}{2(s^{\prime}+1)}\) we have that
\[w(F_{k}) \leq c_{s}e^{-(n-1)\frac{r}{2s^{\prime}}}e^{\frac{(n-1)k}{2s^{\prime}}}e^{(n-1)k}M_{s}w(E_{k})\] \[\leq c_{s}\left(\frac{e^{(n-1)k}}{e^{(n-1)r}}\right)^{\frac{1}{2s^{\prime}}}e^{(n-1)k}M_{s}w\left(\left\{f_{1}\geq e^{(n-1)(k-1)}\right\}\right).\]
Therefore
\[w(\{A_{r}f_{1}\geq 1\}) \leq c_{s}\sum_{k=0}^{r}c_{s}\left(\frac{e^{(n-1)k}}{e^{(n-1)r}} \right)^{\frac{1}{2s^{\prime}}}e^{(n-1)k}M_{s}w\left(\left\{f_{1}\geq e^{(n-1 )(k-1)}\right\}\right) \tag{3.8}\] \[+c_{1}e^{(n-1)r}M(w)\left(\chi_{\{f_{2}\geq ce^{(n-1)r}\}}\right).\]
So, there exists \(\eta>0\) depending only on the dimension such that
\[w(\{A_{r}f_{1}\geq 1\})\leq\tilde{c}_{s}\sum_{k=0}^{r}\left(\frac{e^{(n-1)k}}{e^{ (n-1)r}}\right)^{\frac{1}{2s^{\prime}}}e^{(n-1)k}M_{s}w\left(\left\{f_{2}\geq \eta e^{(n-1)(k-1)}\right\}\right).\]
Indeed, note that in the right-hand side of (3.8), the second term is dominated by the last term of the sum. This yields the desired conclusion.
Combining the ingredients above we are in position to settle Theorem 1.1.
Proof of Theorem 1.1.: By the discussion in the introduction we only need to argue for \(M^{far}(f)(x)\). Then, Lemma 3.3 implies that
\[w\Big{(}M^{far}f\geq\lambda\Big{)} \leq w\left(M^{far}f_{1}\geq\lambda\right)\] \[\leq\sum_{r=1}^{\infty}w\left(A_{r}f_{1}\geq\lambda\right)\] \[\leq\tilde{c}_{s}\sum_{r=0}^{\infty}\sum_{k=0}^{r}\left(\frac{e^{(n-1)k}}{e^{(n-1)r}}\right)^{\frac{1}{2s^{\prime}}}e^{(n-1)k}M_{s}w\left(\left\{f_{2}\geq e^{(n-1)(k-1)}\eta\lambda\right\}\right)\] \[=\tilde{c}_{s}\int_{\mathcal{H}_{n}}\sum_{r=0}^{\infty}\sum_{k=0}^{r}\left(\frac{e^{(n-1)k}}{e^{(n-1)r}}\right)^{\frac{1}{2s^{\prime}}}e^{(n-1)k}\chi_{\{f_{2}\geq e^{(n-1)(k-1)}\eta\lambda\}}M_{s}w(x)\,d\mu_{n}(x)\] \[=\tilde{c}_{s}\int_{\mathcal{H}_{n}}\sum_{k=0}^{\infty}\sum_{r=k}^{\infty}\left(\frac{e^{(n-1)k}}{e^{(n-1)r}}\right)^{\frac{1}{2s^{\prime}}}e^{(n-1)k}\chi_{\{f_{2}\geq e^{(n-1)(k-1)}\eta\lambda\}}M_{s}w(x)\,d\mu_{n}(x)\] \[\leq c^{\prime}_{s}\int_{\mathcal{H}_{n}}\sum_{k=0}^{\infty}e^{(n-1)k}\chi_{\{f_{2}\geq e^{(n-1)(k-1)}\eta\lambda\}}M_{s}w(x)\,d\mu_{n}(x)\] \[\leq\frac{\hat{c}_{s}}{\eta\lambda}\int_{\mathcal{H}_{n}}f_{2}(x)M_{s}w(x)\,d\mu_{n}(x)\] \[=\frac{\hat{c}_{s}}{\eta\lambda}\int_{\mathcal{H}_{n}}f(x)A_{2}(M_{s}w)(x)\,d\mu_{n}(x).\]
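Two elementary summations are used above without comment. In passing from the double sum over \(k\leq r\) to the single sum over \(k\), the sum over \(r\) is the convergent geometric series

\[\sum_{r=k}^{\infty}\left(\frac{e^{(n-1)k}}{e^{(n-1)r}}\right)^{\frac{1}{2s^{\prime}}}=\sum_{m=0}^{\infty}e^{-\frac{(n-1)m}{2s^{\prime}}}=\frac{1}{1-e^{-\frac{n-1}{2s^{\prime}}}},\]

whose value depends only on \(s\) and \(n\) and is absorbed into the constant. Similarly, for each fixed \(x\) the sum \(\sum_{k\geq 0}e^{(n-1)k}\chi_{\{f_{2}\geq e^{(n-1)(k-1)}\eta\lambda\}}(x)\) is a finite geometric sum whose largest term is at most \(e^{n-1}f_{2}(x)/(\eta\lambda)\), which gives the passage to the penultimate line.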
Now, if \(w\) is identically \(1\), we have \(A_{2}(M_{s}w)(x)=1\) and we are done. In particular, this recovers Stromberg's weak type \((1,1)\) estimate. If \(w\) is not constant, we claim that
\[A_{2}\left((M_{s}w)\right)(x)\lesssim_{s}M_{s}w(x).\]
Indeed,
\[\frac{1}{\mu_{n}(B(x,2))}\int_{B(x,2)}M_{s}w(y)d\mu_{n}(y) \leq\frac{1}{\mu_{n}(B(x,2))}\int_{B(x,2)}M(w^{s}\chi_{B(x,4)}(y) )^{\frac{1}{s}}d\mu_{n}(y)\] \[+\frac{1}{\mu_{n}(B(x,2))}\int_{B(x,2)}M(w^{s}\chi_{(B(x,4))^{c}} (y))^{\frac{1}{s}}d\mu_{n}(y).\]
The second term in the last line can be controlled by \(cM_{s}(w)(x)\) because
\[M(w^{s}\chi_{(B(x,4))^{c}}(y))^{\frac{1}{s}}\sim M(w^{s}\chi_{(B(x,4))^{c}}(x ))^{\frac{1}{s}},\]
for every \(y\in B(x,2)\). Using Kolmogorov's inequality and the weak type \((1,1)\) of \(M\), the first term can be estimated by \(c_{\beta}(A_{4}(w^{s})(x))^{\frac{1}{s}}\) and the claim follows. This completes the proof in the general case.
### Proof of Theorem 1.2
The proof of Theorem 1.2 follows the same ideas as the proof of Theorem 1.1 in [14]. First, the hypothesis \(w\in A_{p,loc}(\mathcal{H}^{n})\) implies the estimates for \(M^{loc}\) by standard arguments as in the classical setting. On the other hand, the arguments used to prove that Lemma 3.2 implies Lemma 3.3 can be used to prove that the hypothesis in Theorem 1.2 implies that
\[w\left(\{A_{r}(A_{1}f)\geq\lambda\}\right)\lesssim c_{s}\sum_{k=0}^{r}\left( \frac{e^{(n-1)k}}{e^{(n-1)r}}\right)^{\frac{1-\beta}{2}\frac{p}{\alpha}}e^{(n- 1)\beta\frac{p}{\alpha}k}w\left(\left\{|A_{2}f|\geq\eta e^{(n-1)k}\lambda \right\}\right). \tag{3.9}\]
This inequality shows that the case \(\beta<\alpha\) produces a better estimate than the case \(\beta=\alpha\). First of all, assume that we are in the worst case \(\beta=\alpha\). Arguing as in the proof of Theorem 1.1 we get
\[w\left(\left\{M^{far}f(x)\geq\lambda\right\}\right)\lesssim\frac{c}{\lambda^{p}}\int_{\mathcal{H}^{n}}|A_{2}(f)(x)|^{p}w(x)d\mu_{n}(x).\]
Since \(|A_{2}(f)(x)|\leq M^{loc}f(x)\) and \(w\in A_{p,loc}(\mathcal{H}^{n})\), paying a constant we can eliminate \(A_{2}\) in the right hand side of the previous estimate, and the proof is complete in this case. If we assume that \(\beta<\alpha\), then by (3.9) we have that
\[\|A_{r}f\|_{L^{p}(w)}^{p} =p\int_{0}^{\infty}\lambda^{p-1}w\left(A_{r}f\geq\lambda\right)d\lambda\] \[\lesssim\sum_{k=0}^{r}\left(\frac{e^{(n-1)k}}{e^{(n-1)r}}\right)^{\frac{1-\beta}{2}\frac{p}{\alpha}}e^{(n-1)\beta\frac{p}{\alpha}k}\int_{0}^{\infty}\lambda^{p-1}w\left(\left\{|A_{2}f|\geq\eta e^{(n-1)k}\lambda\right\}\right)d\lambda\] \[\lesssim\sum_{k=0}^{r}\left(\frac{e^{(n-1)k}}{e^{(n-1)r}}\right)^{\frac{1-\beta}{2}\frac{p}{\alpha}}e^{(n-1)\beta\frac{p}{\alpha}k}e^{-(n-1)kp}\|A_{2}f\|_{L^{p}(w)}^{p}\] \[\lesssim e^{(n-1)rp(\frac{\beta}{\alpha}-1)}\|A_{2}f\|_{L^{p}(w)}^{p}.\]
Since \(w\in A_{p,loc}(\mathcal{H}^{n})\) we can eliminate \(A_{2}\) in the last norm, and taking into account that \(\frac{\beta}{\alpha}-1<0\), we have that
\[\sum_{r=1}^{\infty}r^{\gamma}\|A_{r}f\|_{L^{p}(w)}\lesssim\sum_{r=1}^{\infty} r^{\gamma}e^{(n-1)rp(\frac{\beta}{\alpha}-1)}\|f\|_{L^{p}(w)}\sim_{\gamma, \alpha,\beta,p}\|f\|_{L^{p}(w)}.\]
This leads to (1.5). From (1.5) and the fact that \(\sum_{j=1}^{\infty}A_{j}(f)\) is self-adjoint (\(\gamma=0\)) we obtain the boundedness of \(M^{far}\) in the spaces \(L^{p}(w)\) and \(L^{p^{\prime}}(\sigma)\). Moreover, since \(w\in A_{p,loc}(\mathcal{H}^{n})\) and therefore \(\sigma\) is in \(A_{p^{\prime},loc}(\mathcal{H}^{n})\) we have the same inequalities for \(M^{loc}\), and as a consequence we obtain
\[\|Mf\|_{L^{p}(w)} \lesssim\|f\|_{L^{p}(w)}\] \[\|Mf\|_{L^{p^{\prime}}(\sigma)} \lesssim\|f\|_{L^{p^{\prime}}(\sigma)}\]
This ends the proof of the Theorem.
### Proof of Proposition 1.5
The proof follows ideas similar to those of Lemma 3.2.
Proof of Proposition 1.5.: Given subsets \(E,F\subseteq\mathcal{H}^{n}\), we need to prove that
\[\int_{F}A_{r}(\chi_{E})(y)w(y)d\mu_{n}(y)\lesssim e^{(n-1)r\left(\frac{p}{p- \delta+1}-1\right)}w(E)^{\frac{1}{p-\delta+1}}w(F)^{1-\frac{1}{p-\delta+1}}. \tag{3.10}\]
Using the same notation as in Lemma 3.2, we have
\[I_{j,\ell}:=\int_{F_{\ell}}A_{r}(\chi_{E_{j}})(y)w(y)d\mu_{n}(y).\]
Given \(x\in E_{j}\), let \(\Omega_{j,\ell}^{x}=\{y\in F_{\ell}:\ d(x,y)\leq r\}\). Then, by condition (1.6)
\[w(\Omega_{j,\ell}^{x})\leq C_{n}e^{(n-1)\frac{r+l-j}{2}(p-\delta)}e^{(n-1)r \delta}w(x).\]
Therefore,
\[I_{j,\ell} =e^{-(n-1)r}\int_{F_{\ell}}\int_{B(y,r)}\chi_{E_{j}}(x)\,d\mu(x)w (y)d\mu_{n}(y)\] \[=e^{-(n-1)r}\int_{E_{j}}\int_{\Omega_{j,\ell}^{x}}w(y)d\mu_{n}(y) \,d\mu_{n}(x)\] \[\lesssim e^{-(n-1)r}e^{(n-1)\frac{r+l-j}{2}(p-\delta)}e^{(n-1)r \delta}w(E_{j}).\]
On the other hand, if \(y\in F_{\ell}\), let \(\Omega_{j,\ell}^{y}=\{x\in E_{j}:\ d(x,y)\leq r\}\). Then, by Proposition 2.1
\[I_{j,\ell} =e^{-(n-1)r}\int_{F_{\ell}}\int_{\Omega_{j,\ell}^{y}}d\mu_{n}(x) \,w(y)d\mu_{n}(y)\] \[\leq C_{n}e^{-(n-1)r}e^{\frac{n-1}{2}(j+r-\ell)}\,w(F_{\ell}).\]
So,
\[I_{j,\ell}\leq C_{n}e^{-(n-1)r}\min\Big{\{}e^{(n-1)\frac{r+l-j}{2}(p-\delta)}e ^{(n-1)r\delta}w(E_{j}),e^{\frac{n-1}{2}(j+r-\ell)}\,w(F_{\ell})\Big{\}}.\]
From now on, we can follow the same steps as in the proof of Lemma 3.2, and using Lemma 3.1 we obtain (3.10).
## 4. Examples
In this last section we present several examples that clarify points previously mentioned. We omit details, since the examples follow from continuous variants of Theorem 1.3 in [14].
Let \(-\infty<\gamma\leq 1\), we denote
\[w_{\gamma}(x)=\frac{1}{\big{(}\,1+\mu_{n}(\,B(0,d_{H}(0,x)\,)\,)\,\big{)}^{ \gamma}}.\]
**Examples 4.1**.:
1. If \(0\leq\gamma\leq 1\), then \[M(w_{\gamma})(x)\lesssim w_{\gamma}(x).\] In particular, if \(\gamma<1\), taking \(s>1\) such that \(\gamma s\leq 1\) we have that \[M_{s}(w_{\gamma})(x)\lesssim w_{\gamma}(x).\] Therefore there are non-trivial weights satisfying \(M_{s}(w)\lesssim w\). On the other hand, \(Mw_{1}(x)\lesssim w_{1}(x)\). However, the weak type \((1,1)\) of \(M\) with respect to \(w_{1}\) fails. In fact, taking \(f_{k}(x)=\chi_{\mathcal{C}_{k}}(x)\) for \(k\) big, it is not difficult to show that \(w_{1}(\{x:M(f_{k})(x)>1/2\})\geq k\) while the \(L^{1}(w_{1})\)-norm of \(f_{k}\) is uniformly bounded. In particular, this example shows that in Theorem 1.1 it is not possible to take \(s=1\). In fact, it is not possible to replace \(M_{s}\) there by any iteration \(M^{m}\) of \(M\) (where \(M^{m}(f)=M(M^{m-1}f)\)), for any fixed natural number \(m\).
2. Let \(p>1\). Then \(w_{1-p}(x)\) satisfies the hypotheses of Corollary 1.6 and therefore \[\|Mf\|_{L^{p,\infty}(w_{1-p})}\lesssim\|f\|_{L^{p}(w_{1-p})}\] holds. Nevertheless, \(\|Mf\|_{L^{p}(w_{1-p})}\lesssim\|f\|_{L^{p}(w_{1-p})}\) does not hold. This can be seen by considering the function \(f=\chi_{B(0,1)}\), and taking into account that \(w_{1-p}\simeq(M\chi_{B(0,1)})^{1-p}\).
3. Fix \(\gamma\in(0,1)\). We have seen in item 1 that the maximal function satisfies a weak type \((1,1)\) inequality for this weight. In particular, for every \(q>1\), \[\|Mf\|_{L^{q}(w_{\gamma})}\lesssim\|f\|_{L^{q}(w_{\gamma})}.\] However, it is not difficult to see that, for any fixed \(p>1\), it holds that \[\sup_{r>0}\frac{1}{\mu_{n}(B(0,r))}\int_{B(0,r)}w_{\gamma}\left(\frac{1}{\mu_{n}(B(0,r))}\int_{B(0,r)}w_{\gamma}^{-\frac{1}{p-1}}\right)^{p-1}=\infty.\] This example shows that the boundedness of \(M\) does not imply the natural \(A_{p}\) condition for any \(p>1\) in this setting. In the Euclidean setting, in the context of a general measure \(\mu\), an example along these lines was also obtained by Lerner in [8].
## Appendix A The ball model of the hyperbolic space
Let \(\mathcal{B}_{n}=\{x\in\mathbb{R}^{n}:\ \|x\|<1\}\), where \(\|\cdot\|\) denotes the euclidean norm in \(\mathbb{R}^{n}\). In this ball we will consider the following Riemannian structure
\[ds_{x}^{2}(v)=\frac{2\|v\|^{2}}{(1-\|x\|^{2})^{2}}.\]
The hyperbolic distance in this model can be computed by
\[d_{n}(x,y)=\operatorname{arctanh}\left(\frac{\|x-y\|}{(1-2\left\langle\,x,y \,\right\rangle+\|x\|^{2}\|y\|^{2})^{\frac{1}{2}}}\right).\]
The group of isometries \(\mathcal{I}(\mathcal{B}_{n})\) in this representation coincides with the group of conformal diffeomorphisms from \(\mathcal{B}_{n}\) onto itself. For \(n=2\), we can identify \(\mathbb{R}^{2}\) with \(\mathbb{C}\), and this group is the one generated by:
* Rotations: \(z\mapsto e^{it}z\), \(t\in\mathbb{R}\).
* Mobius maps: \(z\mapsto\frac{z-w}{1-\bar{w}z}\).
* Conjugation: \(z\mapsto\overline{z}\).
For dimension \(n>2\), recall that, by Liouville's theorem, every conformal map between two domains of \(\mathbb{R}^{n}\) has the form
\[x\mapsto\lambda A\circ\iota_{x_{0},\alpha}(x)+b\]
where \(\lambda>0\), \(b\in\mathbb{R}^{n}\), \(A\) belongs to the orthogonal group \(O(n)\), and for \(x_{0}\in\mathbb{R}^{n}\), \(\alpha\in\mathbb{R}\)
\[\iota_{x_{0},\alpha}(x)=\alpha\frac{x-x_{0}}{\|x-x_{0}\|^{2}}+x_{0}.\]
Note that, when \(\alpha>0\), the maps \(\iota_{x_{0},\alpha}\) correspond to a reflection with respect to the sphere
\[S^{n-1}(x_{0},\alpha)=\{x\in\mathbb{R}^{n}:\ \|x-x_{0}\|^{2}=\alpha\}.\]
If \(\alpha<0\), it is a composition of the inversion with respect to the sphere \(S^{n-1}(x_{0},-\alpha)\) and the symmetry centered at \(x_{0}\). Using this result, we get that the group \(\mathcal{I}(\mathcal{B}_{n})\) consists of the maps of the form
\[A\circ\theta\]
where \(A\) belongs to the orthogonal group \(O(n)\) and \(\theta\) is either the identity or an inversion with respect to a sphere that intersects \(\partial\mathcal{B}_{n}\) orthogonally. Recall that we say that two spheres \(S_{1}\) and \(S_{2}\) intersect orthogonally if for every \(p\in S_{1}\cap S_{2}\)
\[(T_{p}S_{1})^{\perp}\perp(T_{p}S_{2})^{\perp}.\]
**Remark A.1**.: This representation is also true for \(n=2\). Indeed, on the one hand, the rotations as well as the conjugation belong to \(O(2)\). On the other hand, given \(\alpha\in\mathbb{C}\)
such that \(|\alpha|<1\), the circle of center \(\alpha^{-1}\) and squared radius \(|\alpha|^{-2}-1\) is orthogonal to \(\partial\mathcal{B}_{2}\), and if \(\iota\) denotes the inversion with respect to this circle then
\[\iota(z)=\frac{\overline{z}-w}{1-\bar{w}\overline{z}}.\]
In this model, the \(r\)-dimensional hyperbolic subspaces that contain the origin are precisely the intersections of the \(r\)-dimensional linear subspaces of \(\mathbb{R}^{n}\) with \(\mathcal{B}_{n}\). The other ones are images of these by isometries. So, they are \(r\)-dimensional spheres orthogonal to \(\partial\mathcal{B}_{n}\). The orthogonality in this case, as before, is defined in the natural way in terms of the orthogonal complements of the corresponding tangent spaces.
In this work we develop a weight theory in the setting of hyperbolic spaces. The starting point is a variant of the well-known endpoint Fefferman-Stein inequality for the centered Hardy-Littlewood maximal function. In the hyperbolic setting, this inequality generalizes the weak $(1,1)$ estimates obtained by Str\"omberg in "Weak type L1 estimates for maximal functions on noncompact symmetric spaces", Ann. of Math. 114 (1981), where Str\"omberg answered a question posed by Stein and Wainger in "Problems in harmonic analysis related to curvature", Bull. Amer. Math. Soc. 84 (1978). The approach is based on the techniques used by Naor and Tao for regular trees in the discrete setting in "Random martingales and localization of maximal inequalities", J. Funct. Anal. 259 (2010), combined with geometric
2304.03226 | Reverse-time analysis uncovers universality classes in directional
biological dynamics | Mesoscopic bio-systems typically evolve towards functionally important target
states, such as cell-cycle checkpoints or decision boundaries for the release
of specific behaviors. For the data-driven inference of the underlying
directional out-of-equilibrium dynamics, we here develop a theory of target
state aligned (TSA) ensembles. Target state alignment allows to analyze
directional dynamics in reverse time, starting from the final conditions of the
forward process. Knowledge about the initial conditions of the forward process
is not required for the analysis. Our theory reveals whether and when such a
system can be represented by a single, effective stochastic equation of motion.
We show how, in these effective dynamics, genuine biological forces can be
separated from spurious forces, which invariably arise from target state
alignment. We apply our inference scheme to the example of cytokinetic ring
constriction, and derive the universal low-noise and short-term behavior of TSA
ensembles. Our theory establishes a transparent mathematical foundation for the
analysis and inference of directed biological dynamics by target state
alignment. | Nicolas Lenner, Stephan Eule, Jörg Großhans, Fred Wolf | 2023-04-06T17:00:08 | http://arxiv.org/abs/2304.03226v1 | # Reverse-time analysis uncovers universality classes in directional biological dynamics
###### Abstract
Mesoscopic bio-systems typically evolve towards functionally important target states, such as cell-cycle checkpoints or decision boundaries for the release of specific behaviors. For the data-driven inference of the underlying directional out-of-equilibrium dynamics, we here develop a theory of target state aligned (TSA) ensembles. Target state alignment allows to analyze directional dynamics in reverse time, starting from the final conditions of the forward process. Knowledge about the initial conditions of the forward process is not required for the analysis. Our theory reveals whether and when such a system can be represented by a single, effective stochastic equation of motion. We show how, in these effective dynamics, genuine biological forces can be separated from spurious forces, which invariably arise from target state alignment. We apply our inference scheme to the example of cytokinetic ring constriction, and derive the universal low-noise and short-term behavior of TSA ensembles. Our theory establishes a transparent mathematical foundation for the analysis and inference of directed biological dynamics by target state alignment.
+
Footnote †: Fred.Wolf@ds.mpg.de
## I Introduction
The dynamics of biological systems are often directed towards functionally important target states. Prominent examples of target states (TS) are cell-cycle checkpoints [1; 2; 3; 4; 5; 6; 7], branch-points in cell fate determination [8; 9; 10; 11; 12; 13], or discrete behavioral decisions from the continuous accumulation of sensory evidence [14; 15; 16; 17; 18; 19; 20] (Fig. 1). Such TS-directed dynamics are crucial for the functioning of biological systems on various scales and considerable effort has been dedicated to clarifying their phenomenology and mechanistic underpinnings [11; 3; 18]. For many instances, this research revealed that TS are not approached deterministically [14; 7; 13]. Rather, due to internal and external noise sources, many biological systems generate a diverse set of trajectories all homing in on the same TS. This richness of behavior reflects their capability to buffer noise and achieve TS-arrival robustly, and is best captured by a statistical ensemble approach. Biological systems with TS-directed dynamics are in general out-of-equilibrium and non-stationary and thus require large sample sizes for accurate characterization [21; 22; 23; 24; 25; 26; 27; 28]. Fortunately, technological progress in recording from cells, systems and entire organisms [29; 30] and in multi-channel high-speed data acquisition [31; 32; 33] has opened exciting prospects to efficiently generate such extensive datasets.
Ultimately, the large-scale datasets that can now be acquired in many biological systems should enable understanding the mechanisms of TS-convergence through the data-driven inference of stochastic dynamical models. However, beyond the scale of individual biomolecules [34; 35; 36; 37; 38; 39; 40; 41], the power and limitations of model inference from out-of-equilibrium non-stationary ensembles are not well understood. Key to the analysis of such ensembles is the seemingly innocent step of trajectory temporal alignment. In principle, one might choose to either align individual trajectories at process onset or at the time of TS-arrival. In practice, however, these two reference times often cannot be defined with equal certainty. Firstly, stochastic behavior makes it hard in general to exactly determine the onset time of a specific behavior for each individual sample. Secondly, complex biological systems typically assume the TS-directed dynamics only during a particular functional stage and the underlying mechanisms gradually fade in as this stage is entered. Thirdly, and in particular during fade-in and onset, systems may exhibit substantial sample-to-sample heterogeneity. By contrast, near TS-arrival one expects the relevant dynamical behavior to be expressed most purely and most uniformly. In addition, TSs, such as daughter cell separation in cytokinesis, are often unique and can be localized in time quite precisely. For all of these reasons, analyzing TS-directed dynamics not in forward time, measured from process onset, but in reverse time, measured from TS-arrival, may offer many advantages in terms of temporal precision, ensemble purity and thus quality of inference. So far, however, TS-alignment (TSA) has not been employed frequently and - perhaps for this reason - its theoretical foundations have not been systematically analyzed. Below we first expose a couple of surprising intricacies mathematically associated with the very procedure of TSA. We then develop a theory for analyzing directed biological dynamics in reverse time and demonstrate the use of this approach in the inference of dynamical models for cytokinetic ring constriction.
## II Terminal pseudo forces and the mixed nature of TSA ensembles
The intricacies arising from target state alignment become apparent already for the simplest case, i.e. a random search process for a target site [42; 43; 44; 45] (Fig. 2a). The TSA ensemble resulting from undirected random motion terminated at a target site is depicted in Fig. 2. By construction all sample paths concentrate near the ensemble mean close to the target state. If the TSA ensemble had an effective stochastic differential equation (SDE) with conventional low-noise behavior, a low-noise approximation could be used close to the target state such that the ensemble mean would be the zero noise solution and thus directly reveal the drift term. This however must obviously be wrong as the mean, which grows \(\propto\tau^{\frac{1}{2}}\), would indicate a diverging force \(\propto\frac{1}{L}\) although the forward dynamics are force free.
We analysed how such spurious pseudo forces arise for processes satisfying a Langevin equation of the form
\[d\widehat{L}(t)=f(\widehat{L})\,dt+\sqrt{D}\,dW_{t}\,. \tag{1}\]
Here \(f(\widehat{L})\) denotes a deterministic drift term and \(\sqrt{D}\) the strength of the fluctuations \(dW_{t}\), with \(dW_{t}\) the Wiener process increment with zero mean \(\langle dW_{t}\rangle=0\) and delta covariance \(\langle\,dW_{t}\,dW_{t}^{\prime}\rangle=\delta(t-t^{\prime})\). Each observation \(i\) consists of a trajectory \(\widehat{L}_{i}(t)\) with wall clock time \(t\) and lifetime \(T_{i}\).
The reverse-time TSA ensemble is described by a time-dependent distribution \(R(L,\tau)\) with \(L_{i}(\tau)=\widehat{L}_{i}(T_{i}-\tau)\) as a function of the reverse time \(\tau=T_{i}-t\). To construct \(R(L,\tau)\) two aspects must be taken into account: i.) the underlying dynamics evolve in reverse time; ii.) the lifetime of the trajectories is itself a random variable. Over time, fewer and fewer trajectories remain in the ensemble until eventually all trajectories have reached their lifetime. \(R(L,\tau)\) is thus not normalized and decays with \(\tau\).
In the absence of noise (\(D=0\)), time-reversal of Eq. (1) is trivial. All trajectories starting at \(\widehat{L}_{0}\) end at \(\widehat{L}_{f}\) at time \(t=T\). The time-reversed dynamical equation is \(dL(\tau)=-f(L)d\tau\).
In a stochastic system, however, changing the sign in front of the time derivative does not yield the correct time-reversed equation. For instance, for a stationary Ornstein-Uhlenbeck process with \(f(\widehat{L})=-\widehat{L}\), inverting the sign of the drift term \(-L\to L\) results in exponentially diverging trajectories - a completely different behavior than in forward time.
Assuming that all sample paths are of the same lifetime \(T\), the correct time-reversal of the SDE Eq. (1) is [46]
\[dL(\tau)=\left(-f(L)+f^{G}(L)\right)d\tau+\sqrt{D}\,\,dW_{\tau}\,, \tag{2}\]
with
\[f^{G}(L)=D\frac{\partial}{\partial L}\log\left(P^{\rm fw}(L,T-\tau)\right) \tag{3}\]
a guiding force that depends on the solution \(P^{\rm fw}(\widehat{L},t)\) of the forward Fokker-Planck equation (FPE)
\[\frac{\partial}{\partial t}P(\widehat{L},t)=-\frac{\partial}{\partial\widehat {L}}f(\widehat{L})P(\widehat{L},t)+\frac{D}{2}\frac{\partial^{2}}{\partial \widehat{L}^{2}}P(\widehat{L},t) \tag{4}\]
with absorbing boundary conditions imposed at the target state \(L_{\rm ts}\). The guiding force \(f^{G}(L)\) ensures that the forward- and reverse-time dynamics are identical. Hence stating the time-reversed SDE in principle requires knowledge of the solution of the forward process and its initial distribution. Although time-reversed dynamics might appear to be a simple initial value problem, they are not.
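As a consistency check of Eqs. (2) and (3) (leaving target state alignment and the absorbing boundary aside for the moment), consider the stationary Ornstein-Uhlenbeck process mentioned above, \(f(\widehat{L})=-\widehat{L}\). Its stationary density is \(P^{\rm fw}(L)\propto e^{-L^{2}/D}\), so the guiding force is
\[f^{G}(L)=D\frac{\partial}{\partial L}\log P^{\rm fw}(L)=-2L\,,\]
and Eq. (2) yields the reverse-time drift \(-f(L)+f^{G}(L)=L-2L=-L\). The time-reversed dynamics thus coincide with the forward dynamics, as expected for a stationary reversible process, whereas a naive sign flip of the drift alone would not reproduce this.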
In a natural setting the lifetime \(T_{i}\) is not fixed but is itself a random variable. To assemble the ensemble in reverse time, we partition the trajectories into sub-ensembles \(R(L,\tau|T;L_{f})\) of fixed lifetime \(T_{i}\). In the example depicted in Fig. 3 the initial condition of each of the time-reversed sub-ensembles is a delta-function \(R(L,\tau=0|T_{i};L_{f})=\delta(L-L_{\rm ts})\). Because the guiding force Eq. (3) depends on the full forward distribution \(P^{\rm fw}(L,T_{i}-\tau)\) up to time \(T_{i}\) according to Eq. (2) and Eq. (3), each of these sub-ensembles experiences a different force.

Figure 1: **Biological processes with target states.****(a)**: Cytokinetic ring constriction towards cell separation. **(b):** Neuronal evidence accumulation during decision making. **(c):** Activation of a gene regulatory network by concentration threshold crossing. For each process, representative trajectories are shown. The target states (orange line) of cell separation **(a)**, decision- **(b)** or concentration-threshold **(c)**, and the completion time (orange triangle) of each individual trajectory are marked.
To assemble a complete TSA ensemble, sub-ensembles satisfying Eq. (2) must be superimposed (Fig. 3). Three aspects are noteworthy. First, each sub-ensemble of lifetime \(T\) only contributes to the complete ensemble up to this time. Second, the relative weight of trajectories of this lifetime is given by the hitting time distribution of the forward process \(\rho(T|L_{f})\), which in turn depends on the forward initial condition. Third, the relative weight of each sub-ensemble depends on the initial conditions of the forward process \(P^{\rm in}(L_{f})\). Hence, \(R(L,\tau)\) for the aligned time-reversed ensemble is
\[R(L,\tau)=\int_{L_{\rm ts}}^{\infty}dL_{f}P^{\rm in}(L_{f})\int_{\tau}^{\infty} dT\ R(L,\tau|T;L_{f})\rho(T|L_{f})\ . \tag{5}\]
The lower integration limit of the inner integral accounts for the fact that only sub-ensembles that have at least a length of \(\tau\) contribute to the full ensemble. The outer integral accounts for the distribution of initial values of the forward process (see SI II).
Let us briefly consider the implications of the above result. First, for fixed lifetime sub-ensembles, we can state the general form of the guiding force close to the target state. With \(P^{\rm fw}(L,T-\tau)\) vanishing close to the absorbing state \(L_{\rm ts}\), the generic form of the density near \(L_{\rm ts}\) is to leading order \(P^{\rm fw}(L)\propto(L-L_{\rm ts})^{\delta}\) with \(\delta>0\). The guiding force close to the target state then evaluates to \(f^{G}(L)\propto\frac{1}{L}\), explaining the behavior of the mean of the TSA random walk (Fig. 2). Second, the construction of the TSA ensemble \(R(L,\tau)\) from sub-ensembles of varying lifetimes \(T_{i}\) (and thus varying guiding forces) suggests that in general there is no unique SDE which describes the full reverse-time dynamics. Different force laws seem to be active at the same state \((L,\tau)\). At first sight both of these observations should raise substantial caution. With reverse time dynamics generically "contaminated" by a diverging guiding force and TSA ensembles in principle a mixture of different effective dynamics, can TSA really provide a strategy to infer directed biophysical dynamics? To assess the power and limitations of TSA, what is needed is a general understanding of the form of the guiding forces of the complete ensemble, and of the conditions under which the TSA ensemble follows a unique SDE, as intuition would suggest.
## III Reverse-time FPE and SDE for the TSA ensemble
Starting from Eq. (5), we show in the SI II that the dynamics underlying the evolution of \(R(L,\tau)\) can be cast into the form of a generalized FPE
\[\frac{\partial}{\partial\tau}R(L,\tau)= -\frac{\partial}{\partial L}\left(\left(f(L)+f^{\mathcal{F}}(L) \right)R(L,\tau)\right)\] \[+\frac{D}{2}\frac{\partial^{2}}{\partial L^{2}}R(L,\tau)-P^{\rm in }(L)\rho(\tau|L) \tag{6}\]
with a time-dependent sink term \(-P^{\rm in}(L)\rho(\tau|L)\). Here \(P^{\rm in}(L)\) denotes the distribution of initial states for the
Figure 3: **Construction of the full aligned time reversed ensemble from sub-ensembles of different lifetimes.** (Left): The full forward ensemble is split into sub-ensembles with different completion times \(T_{i}\). We show three exemplary cases in red (\(T_{1}\)), green (\(T_{2}\)) and blue (\(T_{3}\)). To guide the eye, one sample path per sub-ensemble is highlighted. (Right): After target state alignment and time reversal all sub-ensembles together form the new ensemble \(R(L,\tau)\).
Figure 2: **Target state alignment creates pseudo forces for 1d random target search.****(a):** Random walk like sliding of transcription factor confined between inaccessible DNA and promoter binding site (orange). **(b):** (Left): Sample path realizations of a random walk with one reflecting and one absorbing boundary (orange). (Middle): Target state (promoter binding site) aligned sample paths. (Right): Aligned and time-reversed ensemble of sample paths. The mean, growing with \(\propto\tau^{\frac{1}{2}}\) (red line), indicates the presence of a non-linear alignment force.
forward dynamics. This term ensures that the distribution of sample path lifetimes in the reverse-time ensemble is the same as in the forward dynamics. Unlike for the sub-ensemble dynamics Eq. (2), \(f(L)\) is not sign inverted compared to the forward dynamics. The "free energy force"
\[f^{\mathcal{F}}(L)=D\frac{\partial}{\partial L}\log\left(\int_{L_{\rm ss}}^{L} dL^{\prime}\;e^{-\frac{2\Phi(L^{\prime})}{D}}H(L^{\prime})\right) \tag{7}\]
captures the combined effect of the total entropy production of all sample paths and the time-reversion of the dynamics up to a position \(L\). The potential \(\Phi(L)=\int^{L}f(L^{\prime})dL^{\prime}\) is defined with respect to the sign inverted drift term. \(H(L)=1-\int_{L_{\rm ss}}^{L}P^{\rm in}(L^{\prime})dL^{\prime}\) is a sigmoidal function which continuously changes from one to zero. Above the bulk of \(P^{\rm in}(L)\) the free energy force \(f^{\mathcal{F}}(L)\) therefore vanishes, and forward and TSA dynamics are indistinguishable. Below however the free energy force not only reverses the forward dynamics but also adds additional terms, for instance the guiding force term \(\propto\frac{1}{L}\).
From the theory of reaction-diffusion systems [47] we know that a Fokker-Planck equation with sink proportional to the density can be cast into a SDE with killing measure \(k(L,\tau)\) proportional to the rate of degradation [48; 49; 50]. Analogously for the reverse-time FPE for TSA ensembles the corresponding SDE reads
\[dL(\tau)=\left(f(L)+f^{\mathcal{F}}(L)\right)d\tau+\sqrt{D}\ dW_{\tau}\,, \tag{8}\]
equipped with a killing measure \(k(L,\tau)d\tau=\frac{\rho(\tau|L)P^{\rm in}(L)}{R(L,\tau)}d\tau\). For finite TSA ensembles originally comprised of \(n_{\rm ens}\) sample paths, this implies that the killing measure terminates \(n_{\rm ens}\rho(\tau|L)d\tau\) of the remaining sample paths at each timestep \(d\tau\) as determined by the weight \(\frac{P^{\rm in}(L)}{R(L,\tau)}\). The explicit lifetime dependency of the sub-ensemble based construction Eq. (5) can thus be moved to a time- and ensemble-dependent boundary condition in the form of a killing measure.
## IV The TSA ensemble close to the target state
Expressing the dynamics of \(R(L,\tau)\) in terms of a single SDE allows one to separate the contribution of the forward initial conditions from the pure TSA dynamics close to the target state. This can be seen by inspecting \(f^{\mathcal{F}}(L)\) defined in Eq. (7). For \(L\) sufficiently below the bulk of the forward initial distribution \(P^{\rm in}(L)\) the term \(H(L)\), which describes the influence of the forward initial conditions on the reverse time TSA dynamics, evaluates to approximately one. If most of the sample paths have not yet reached the sink at \(P^{\rm in}(L)\), this in turn implies that almost no sample path has terminated in reverse time. The killing measure \(k(L,\tau)\) then evaluates to zero and \(H(L)\approx 1\). With \(k(L,\tau)=0\) and \(H(L)=1\) the remaining unique SDE describes TSA dynamics temporally and spatially close to target states and irrespective of initial conditions. Fig. 4 depicts one example in which the ensemble mean and variance in this approximation agree excellently with the exact solution. In the SI V we discuss further examples that illustrate the accuracy of this approximation.
### Separating genuine and alignment induced forces
Next we examined, using this approximation, the general mapping between forward forces and aligned reverse-time dynamics close to the target state. Using \(H(L)=1\) we calculated a mapping for power-law forces
\[f(\widetilde{L})=-\gamma\widetilde{L}^{\alpha}\, \tag{9}\]
Figure 4: **TSA FPE and SDE exactly describe the target state aligned ensemble.****(a):** Schematic depiction of the dependency of the TSA dynamics Eq. (6) and Eq. (8) on the forward initial condition \(H(L)\) and the killing measure \(k(L,\tau)\). In the green region, i.e. above the bulk of the initial distribution measured by the sigmoidal \(H(L)\), we find \(f^{\mathcal{F}}(L)\approx 0\). Below \(f^{\mathcal{F}}(L)\) contributes in full strength. In \(\tau\) direction, an increase in the red gradient indicates, that more and more trajectories are killed. The residual white \(L-\tau\) plane close to the target state defines the region where \(k(L,\tau)\approx 0\) and \(H(L)=1\) holds, and the approximation of well separated initial and final states holds. Circles mark the killing of example trajectories. **(b):** Comparison of the forward (blue), sub-ensemble based (red), exact (black) and approximate (green) reverse-time dynamics for \(f(L)=-\gamma/L\). Shown are the mean (Top) and variance (Bottom) of all four cases. 95% bootstrap confidence intervals are shown for the cases involving sampling. To exclude numerical inaccuracies due to rarely visited tails of the distribution of completion times \(\rho(T|L_{f})\), we directly sampled \(T_{i}\) from the numerically obtained hitting time distribution of the forward process. Results were obtained using each 1000 sample path realizations with parameter settings \(\gamma=1\), \(D=0.2\), \(\widetilde{L}_{\rm init}=2\).
with \(\gamma>0\), \(\alpha\in\mathbb{R}\), target state at \(\widehat{L}_{\rm ts}=0\) and \(\widehat{L}_{\rm in}\to\infty\). The free energy force is given by
\[f^{\mathcal{F}}(L)=\frac{(D\alpha+D)\left(-\frac{2\gamma}{D\alpha+D}\right)^{ \frac{1}{\alpha+1}}e^{\frac{2\gamma L^{\alpha+1}}{D\alpha+D}}}{\Theta(\alpha+1 )\Gamma\left(\frac{1}{\alpha+1}\right)-\Gamma\left(\frac{1}{\alpha+1},-\frac{ 2L^{\alpha+1}\gamma}{\alpha D+D}\right)}. \tag{10}\]
Eq. (10) is valid for \(\alpha\neq-1\), with the connecting form \(f^{\mathcal{F}}(L)=\frac{2\gamma+D}{L}\) at \(\alpha=-1\). Here \(\Gamma(L)\) is the gamma-function, \(\Gamma\left(n,L\right)\) the upper incomplete gamma-function and \(\Theta(n)\) the Heaviside-function. The derivation is presented in the SI V.
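In practice, Eq. (10) can be awkward to evaluate for arbitrary \(\alpha\) because of the incomplete gamma functions; a simple alternative is to evaluate Eq. (7) directly by numerical quadrature, with \(H=1\) and the power-law drift inserted. The sketch below does this in a numerically stable form and checks the two limiting expressions quoted in the text; the parameter values mirror \(\gamma=1\), \(D=0.2\) used in the figures.

```python
import numpy as np
from scipy.integrate import quad

def free_energy_force(L, alpha, gamma=1.0, D=0.2, L_ts=0.0):
    """Eq. (7) with H(L') = 1 for the drift f(L) = -gamma * L**alpha, rewritten as
    f_F(L) = D / int_{L_ts}^{L} exp( 2*(Phi(L) - Phi(x)) / D ) dx  so that the
    integrand stays <= 1 (Phi(L) = int^L f dL' is non-increasing for gamma >= 0)."""
    def Phi(x):
        return -gamma * np.log(x) if alpha == -1 else -gamma * x**(alpha + 1) / (alpha + 1)
    Z, _ = quad(lambda x: np.exp(2.0 * (Phi(L) - Phi(x)) / D), L_ts, L, limit=200)
    return D / Z

# limiting forms quoted in the text, evaluated at L = 0.3:
print(free_energy_force(0.3, alpha=0.0, gamma=0.0))   # random walk:      D/L             = 0.667
print(free_energy_force(0.3, alpha=-1.0))             # alpha = -1 form: (2*gamma + D)/L  = 7.33
```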
### Noise induced vs. force induced target state arrival
For power-law forces, the behavior near arrival falls into two generic classes. For \(\alpha\geq 0\) the forward force vanishes at the target state and termination is typically _noise induced_ (Fig. 5a). Aligned, in reverse time and close to the target state \(\widehat{L}_{\rm ts}=0\), such an ensemble is indistinguishable from the dynamics of a random walk (\(\alpha=0,\gamma=0\)) with \(f^{\mathcal{F}}(L)=\frac{D}{L}\) (SI IV). We tested this behavior in Fig. 5b for three different noise strengths \(D\). The analytic expression for the mean
\[\overline{L}(\tau)=\sqrt{\frac{8D}{\pi}\tau}\, \tag{11}\]
of the aligned reverse-time random walk almost perfectly matches the mean of the aligned reverse-time data for the case \(\alpha=1\) and close to \(\widehat{L}_{\rm ts}=0\). It starts deviating for \(Lf(L)>D\) where the approximation is expected to break down (SI V). Note that Eq. (11) also explains the result found for the random target search (Fig. 2).
For \(\alpha<0\) the termination of the forward dynamics is _force induced_ as \(f(\widehat{L})\) diverges at the absorbing boundary. The corresponding reverse time dynamics exhibit the genuine (sign inverted) force law dependence plus corrections proportional to the noise strength \(D\). Fig. 5c,d shows exemplary sample paths for \(\alpha=-1\). It demonstrates the gradual deviation of the mean from the deterministic solution with increasing noise strength \(D\).
Further insight can be obtained by examining the small \(L\) weak noise regime of the free energy force Eq. (10). In this regime the reverse time SDE for \(\alpha<0\) simplifies to
\[dL(\tau)=\left(\gamma L^{\alpha}-\frac{\alpha D}{L}\right)d\tau+\mathcal{O} \left(\frac{D^{2}}{L^{2+\alpha}}\right)+\sqrt{D}\ dW_{\tau}\, \tag{12}\]
see SI VI. This shows that for all \(\alpha<0\) the leading order correction to the time reversed genuine force is random-walk like (\(\sim D/L\)), modulated in its strength by the power-law exponent \(\alpha\). Despite its simplicity Eq. (12) can not be solved analytically for general \(\alpha<0\). Adopting the low noise expansion for moments of ordinary SDEs [51], we expand Eq. (12) around its deterministic solution (\(D\to 0\)) for small \(D\) and obtain
\[\overline{L}(\tau)=\left((1-\alpha)\gamma\tau\right)^{\frac{1}{1-\alpha}}+D \frac{(7\alpha-3)((1-\alpha)\gamma\tau)^{\frac{\alpha}{\alpha-1}}}{4(3\alpha- 1)\gamma} \tag{13}\]
\[\sigma_{L}^{2}(\tau)=D\frac{1-\alpha}{1-3\alpha}\tau \tag{14}\]
\[\mathrm{corr}_{L}(\tau,\tau^{\prime})=\left(\frac{\min[\tau,\tau^{\prime}]}{ \max[\tau,\tau^{\prime}]}\right)^{\frac{3}{2}+\frac{1}{\alpha-1}} \tag{15}\]
for mean, variance and two-time correlation function up to order \(D\) (SI VI). In Fig. 5e,f we compare this approximation to simulations and find excellent agreement.
Motivated by their high accuracy we propose Eqs. (13)-(15) as the basis for fast inference schemes: Given sufficiently good statistics, \(\alpha\) can be read off directly from a 2d correlation plot using Eq. (15) (Fig. 5g,h). Once \(\alpha\) is identified, \(D\) can be extracted from the variance Eq. (14). The force strength \(\gamma\) follows directly from an \(\alpha\)- and \(D\)-constrained fit of the mean Eq. (13). Notably, this is quite distinct from a normal low noise approximation, in which the mean would be the zero noise solution and thus only depend on \(\gamma\).
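The read-off described here can be scripted directly. The sketch below is a minimal moment-based estimator built on Eqs. (13)-(15) (it is not the path-ensemble likelihood scheme used later in the paper): \(\alpha\) from a single two-time correlation, \(D\) from the variance slope, and \(\gamma\) from an \(\alpha\)- and \(D\)-constrained fit of the mean. The choice of which time pair to use and the requirement that all paths share a common reverse-time grid are our own simplifications.

```python
import numpy as np
from scipy.optimize import curve_fit

def infer_tsa_moments(X, dt):
    """X: array (n_paths, n_tau) of reverse-time positions on a common grid tau_k = k*dt
    (shorter paths dropped); valid in the small-noise, force-induced regime alpha < 0."""
    n_tau = X.shape[1]
    tau = dt * np.arange(n_tau)
    # 1) alpha from one two-time correlation, Eq. (15): corr = (tau_i/tau_j)^(3/2 + 1/(alpha-1))
    i, j = n_tau // 4, n_tau - 1
    expo = np.log(np.corrcoef(X[:, i], X[:, j])[0, 1]) / np.log(tau[i] / tau[j])
    alpha = 1.0 + 1.0 / (expo - 1.5)
    # 2) D from the variance slope, Eq. (14): var(tau) = D * (1 - alpha)/(1 - 3*alpha) * tau
    slope = np.polyfit(tau[1:], X[:, 1:].var(axis=0), 1)[0]
    D = slope * (1.0 - 3.0 * alpha) / (1.0 - alpha)
    # 3) gamma from an alpha- and D-constrained fit of the mean, Eq. (13)
    def mean_model(t, gamma):
        det = ((1.0 - alpha) * gamma * t) ** (1.0 / (1.0 - alpha))
        corr = D * (7.0 * alpha - 3.0) * ((1.0 - alpha) * gamma * t) ** (alpha / (alpha - 1.0)) \
               / (4.0 * (3.0 * alpha - 1.0) * gamma)
        return det + corr
    popt, _ = curve_fit(mean_model, tau[1:], X[:, 1:].mean(axis=0),
                        p0=[1.0], bounds=(1e-6, np.inf))
    return alpha, D, popt[0]
```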
## V Inferring actomyosin turnover from the terminal dynamics of cytokinesis
To demonstrate the inference of biological dynamics from TSA ensembles we set up a biophysical model of cytokinetic ring constriction which has all ingredients that one expects to complicate the correct identification of the directional dynamics of cytokinesis. The force driving cytokinesis is mediated by a contractile ring of myosin motors and actin filaments. The contractile behavior of this ring can exhibit very different types of concentration-tension relationships depending on the regime of molecular turnover. As a single myosin minifilament needs to bind two actin fibers to exert a force, the tension can be assumed to be \(\propto c_{\rm myo}c_{\rm act}^{2}\). Three qualitatively different types of dynamics are conceivable. First, the effective constriction force is proportional to the myosin concentration along the cable of length \(L\). The shorter the cable gets, the more myosin per ring perimeter accumulates. The constriction force increases \(\propto 1/L\) (\(\alpha=-1\)), assuming a low myosin turnover rate.
Second, with strong myosin turnover the effective constriction force reflects the actin fiber concentration and thus is \(\propto 1/L^{2}\) (\(\alpha=-2\)). The third mechanism is favored for high turnover rates of both force generating molecules. Independent of \(L\), force molecules are under this scenario present at a constant concentration. The constriction force is constant (\(\alpha=0\)).
To test our inference scheme and to generate artificial cytokinetic sample paths, we use a model by Zumdieck et al. [52], which covers all three scenarios as limiting
cases. The details of the model and the analytic limits are covered in the SI VII. In our simulations the full process of constriction starts from an equilibrium perimeter which is initially stable. The dynamics then transition to the final regime which leads to ring constriction and separation. Internal fluctuations of the ring perimeter are modeled as white noise. After target state alignment we obtain a TSA ensemble with contributions from both biological and guiding forces. Its target and equilibrium state are well separated and the dynamics should assume a pure force law shortly before cell separation.
For the quantitative inference of the most likely constriction scenario, we evaluated the path-ensemble likelihood for the TSA ensemble forces Eq. (7) with \(H(L)=1\) and for each of the three scenarios with \(\alpha=0,-1,-2\) (for details see SI VII,VIII). The correct case \(\alpha=-1\) was clearly singled out by the highest maximum likelihood value (Fig. 6). The other scenarios can also be tested and unambiguously identified (see SI VII).
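A transparent, if simplistic, way to reproduce this kind of model comparison is an Euler-Maruyama path likelihood for the reverse-time SDE, Eq. (8), restricted to the near-target window where \(H\approx 1\) and the killing term can be ignored. The sketch below reuses `free_energy_force` and the `paths` ensemble from the earlier sketches, tabulates \(f^{\mathcal{F}}\) on a grid for speed, and maximizes the likelihood over \((\gamma, D)\) for each candidate exponent; the window length, grid and optimizer settings are our own choices, and the scheme is written for clarity rather than performance.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, paths, dt, alpha, L_ts=0.0):
    """Euler-Maruyama path likelihood under the reverse-time SDE, Eq. (8), with
    H(L) = 1 and the killing term ignored (near-target window); theta = (gamma, D)."""
    gamma, D = theta
    if gamma <= 0.0 or D <= 0.0:
        return np.inf
    L_max = max(p.max() for p in paths)
    grid = np.linspace(L_ts + 1e-3, L_max, 200)               # tabulate f_F once per call
    fF_grid = np.array([free_energy_force(x, alpha, gamma, D, L_ts) for x in grid])
    nll = 0.0
    for p in paths:
        L, dL = p[1:-1], np.diff(p)[1:]                       # skip tau = 0 (L = L_ts there)
        drift = -gamma * L**alpha + np.interp(L, grid, fF_grid)
        nll += 0.5 * np.sum((dL - drift * dt) ** 2 / (D * dt) + np.log(2.0 * np.pi * D * dt))
    return nll

# model selection over the three turnover scenarios (paths, dt from the first sketch)
dt = 1e-3
short = [p[:400] for p in paths if len(p) > 400]              # near-target window only
for alpha in (0.0, -1.0, -2.0):
    fit = minimize(neg_log_likelihood, x0=(1.0, 0.2), args=(short, dt, alpha),
                   method="Nelder-Mead")
    print(f"alpha = {alpha:+.0f}:  logL = {-fit.fun:10.1f},  (gamma, D) = {fit.x}")
```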
Without considering the peculiarities of TSA ensembles several mistakes in interpreting the sample data might have occurred. First, a scenario of constant force could erroneously be classified as one driven by a divergent force component which we now understand as the guiding force correction of the form \(D/L\). Second, the strength of the force for the myosin dominated case could be overestimated as both genuine and guiding force are proportional to \(1/L\). Third, for the actin dominated case an erroneous crossover from \(1/L\) to \(1/L^{2}\) might be inferred. In contrast, our TSA method obtains the genuine force law for all three cases.
## VI Discussion
In this study, we examined the stochastic dynamics problem associated with inferring directed biological processes by target state alignment. We show whether and when such dynamics can be represented by a single SDE and how spurious forces, which inevitably arise due to target state alignment, can be separated from genuine biological forces. The universal low-noise and short-term behavior of TSA ensembles is derived. The biophysical applicability is demonstrated for a model of cytokinetic ring constriction, an example of directional dynamics containing all potential confounders of correct inference.
Previously, target state alignment has been used as a means of data analysis in various fields. It has been employed to access entropic force differences of DNA in confined spaces [53], or to determine the dynamics leading to a behavioral decision, e.g. represented by the initiation of saccadic eye movements [14]. The inevitable occurrence of spurious forces in TSA ensembles, however, has so far not been noted or analyzed.
Figure 5: **TSA ensembles are noise and parameter sensitive, and qualitatively differ for diffusion or drift dominated target state approaches.****(a):** For \(\alpha>0\) only noise induced irreversible transitions occur in finite time. **(b):** Close to their target state reverse time statistics of noise induced transitions (\(\alpha>0\)) are indistinguishable from random fluctuations and thus \(\alpha\)-independent. The mean grows \(\sim\tau^{\frac{1}{2}}\) (random-walk-like, Eq. (11), dotted) independent of the noise strength \(D\) as shown in both the log-log inset and main plot. **(c):** The hitting time of force driven irreversible transitions (\(\alpha<0\)) scatters around the deterministic solution (black line). **(d):** The mean of the aligned reverse time ensemble (dashed line) for \(\alpha<0\) deviates positively from the deterministic solution with increasing noise strength \(D\). Together **(c)** and **(d)** motivate a small noise moments expansion. **(e),(f):** The small noise expansions of mean Eq. (13) and variance Eq. (14) (lines) agree with the numeric evaluation of the reverse-time TSA dynamics with \(f^{\mathcal{F}}(L)\) Eq. (10) (circles). **(g),(h):** The small noise two-time correlation Eq. (15) solely depends on the power-law exponent \(\alpha\). Linear equi-correlation lines (dashed) allow for an easy distinction. For all plots we chose \(\gamma=1\) and \(D=0.2\) if not stated differently.
Examining the structure of TSA ensembles, we find that they fall into two distinct classes. For power-law exponents \(\alpha>0\), forces vanish at the boundary and directed dynamics become indistinguishable from the dynamics of a target state aligned random search process near termination. For power-law exponents \(\alpha<0\), target state directed dynamics are force induced and reverse-time statistics can be treated as corrections to the zero noise solution. Expanding on these results, we present small noise expressions for reverse-time mean, variance and two-time correlation function valid for force induced target state convergence. The generic analysis of noise and force induced transitions can serve as guidance when searching for the driving mechanism. For direct high precision inference of the true biological forces, we propose a path ensemble maximum likelihood scheme. Its applicability for the distinction of different scenarios of cytokinetic ring constriction also highlights possible force mis-assignments when ignoring the peculiarities of TSA ensembles.
The TSA approach introduces a new perspective to the design of experiments. It makes the search for suitable initial conditions superfluous and facilitates the investigation of directed dynamics under conditions that are as natural as possible. Nevertheless, the identification of transition times, when dynamics change e.g. from an equilibrium state into target state directed dynamics such as during cytokinesis, remains a biologically highly relevant question. The here sketched path ensemble framework provides a possible route to this inference problem. A further generalization of the formalism at hand might include different types of fluctuating environments, the inclusion of external variables and an extension of the framework to more than one degree of freedom.
Concluding, the here presented theory provides the mathematical foundations for the inference of TSA dynamics. It offers an intuitive understanding of the characteristics of TSA ensembles and provides a new tool for the study of directed biological dynamics.
###### Acknowledgements.
We thank Matthias Haring, Erik Schultheis and the Wolf group for stimulating discussions and proofreading of the manuscript. This work was supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) through FOR 1756, SPP 1782, SFB 1528, SFB 889, SFB 1286, SPP 2205, DFG 436260547 in relation to NeuroNex (National Science Foundation 2015276) & under Germany's Excellence Strategy - EXC 2067/1- 390729940; by the Leibniz Association (project K265/2019); and by the Niedersachsisches Vorab of the VolkswagenStiftung through the Gottingen Campus Institute for Dynamics of Biological Networks.
Figure 6: **Accurate inference of actomyosin turnover from dynamics in the noise extended Zumdieck model [52] of cytokinetic ring constriction.****(a):** Schematic representation of three different cytokinesis scenarios assuming constriction-forces proportional to molecule concentrations along the ring: Constant shrinkage due to high myosin turnover (\(\alpha=0\), orange), \(1/L\)-shrinkage with stalled myosin turnover (\(\alpha=-1\), green) and \(1/L^{2}\)-shrinkage for actin-pair driven constriction with stalled turnover (\(\alpha=-2\), red). **(b):** Using path-ensemble maximum likelihood inference (SI VIII) the dynamical law leading to constriction can unambiguously be inferred from realizations of the Zumdieck model (grey lines, blue circles). We visually confirm the inferred underlying force law (\(\alpha=-1\)) by numerically evaluating mean and variance for the reverse-time TSA dynamics for each of the three inferred maximum likelihood parameter sets (\(\gamma^{\text{ML}},D^{\text{ML}}\)). **(c):** The two-time correlation of the Zumdieck model and the correlations of the inferred dynamics for \(\alpha=-1\) are in perfect agreement. To facilitate the comparison we show one equi-correlation line for both (Zumdieck: blue circles, \(\alpha=-1\): green dashes).
2307.15114 | The Morphology of Exciting Dark Matter and the Galactic 511 keV Signal | We study the morphology of the 511 keV signal that could be produced by
exciting dark matter (XDM) in the Milky Way. In this model, collisions between
dark matter particles excite the dark matter to a state that can then decay
back to the ground state, releasing an electron-positron pair. These electrons
and positrons would then annihilate, producing 511 keV photons that could
explain the 511 keV signal seen by INTEGRAL at the Galactic Center. We compare
the resulting flux with the most recent INTEGRAL data, performing the first
full statistical analysis of the exciting dark matter model. We focus on
exciting dark matter in the mass and cross section ranges 100 GeV $\lesssim
m_{\chi} \lesssim$ 3 TeV and $10^{-19}$ cm$^3$ s$^{-1} \lesssim \langle \sigma
v \rangle \lesssim 10^{-16}$ cm$^3$ s$^{-1}$. We show that exciting dark matter
can provide a significantly better fit than the simpler case of annihilating
dark matter, with $\Delta\chi^2 > 16$ for all but one of the density profiles
we consider. | Christopher V. Cappiello, Michael Jafs, Aaron C. Vincent | 2023-07-27T18:00:02 | http://arxiv.org/abs/2307.15114v2 | # The Morphology of Exciting Dark Matter and the Galactic 511 keV Signal
###### Abstract
We study the morphology of the 511 keV signal that could be produced by exciting dark matter (XDM) in the Milky Way. In this model, collisions between dark matter particles excite the dark matter to a state that can then decay back to the ground state, releasing an electron-positron pair. These electrons and positrons would then annihilate, producing 511 keV photons that could explain the 511 keV signal seen by INTEGRAL at the Galactic Center. We compare the resulting flux with the most recent INTEGRAL data, performing the first full statistical analysis of the exciting dark matter model. We focus on exciting dark matter in the mass and cross section ranges 100 GeV \(\lesssim m_{\chi}\lesssim 3\) TeV and \(10^{-19}\) cm\({}^{3}\) s\({}^{-1}\lesssim\langle\sigma v\rangle\lesssim 10^{-16}\) cm\({}^{3}\) s\({}^{-1}\). We show that exciting dark matter can provide a significantly better fit than the simpler case of annihilating dark matter, with \(\Delta\chi^{2}>16\) for all but one of the density profiles we consider.
## I Introduction
More than 50 years after it was first observed [1], the 511 keV gamma-ray line at the Galactic Center remains unexplained [2; 3; 4]. Despite the difficulties in characterizing a flux of \(\sim\) MeV gamma rays, successive analyses of data from CGRO/OSSE [5] and then INTEGRAL/SPI [6; 7; 8; 9] have led to a consistent picture: a significant 511 keV signal from the galactic disk, combined with a large, spherically symmetric "bulge" signal centered on the galactic center and extending roughly 10 degrees above and below the galactic plane. The bulge and disk both contribute about \(10^{-3}\) ph cm\({}^{-2}\) s\({}^{-1}\), and the spectra are consistent with a positronium formation rate around unity, pointing at low-energy (\(\lesssim 10\) MeV) injection of positrons into the interstellar medium (ISM). Much of the disk emission can be attributed to \(\beta^{+}\) decays of \({}^{26}\)Al, \({}^{44}\)Ti and \({}^{56}\)Ni, and indeed, the concurrent 1809 keV gamma signal from \({}^{26}\)Al \(\rightarrow\)\({}^{26}\)Mg\({}^{*}\)\(+\)\(\beta^{+}\), \({}^{26}\)Mg\({}^{*}\)\(\rightarrow\)\({}^{26}\)Mg \(+\gamma\) provides an additional handle on radioactive isotope contribution. However, the bulge contribution is uncorrelated with other observations and cannot be accounted for with ordinary astrophysical processes [2]. This fact, and the presence of this significant spheroidal component have motivated numerous attempts to identify the excess as a signal of dark matter [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30] or other physics beyond the Standard Model, such as cosmic strings [31; 32].
To successfully explain the galactic center signal, a model must correctly predict: 1) the total flux, 2) the line spectrum, and 3) the observed morphology. Annihilating dark matter of mass \(m_{\chi}\) requires an annihilation cross section \(\langle\sigma v\rangle/m_{\chi}^{2}\sim 10^{-25}\) cm\({}^{3}\) s\({}^{-1}\) GeV\({}^{-2}\)[26]. Obtaining the correct spectrum requires \(m_{\chi}\lesssim 10\) MeV, otherwise final-state radiation and in-flight annihilation risk overproducing photons above 511 keV [33; 34; 35]. Combining these constraints significantly restricts the parameter space, leading to cross sections \(\langle\sigma v\rangle\sim 10^{-31}\) cm\({}^{3}\) s\({}^{-1}\), which would either severely overproduce dark matter in the early Universe, or require additional annihilation to light products, in conflict with big bang nucleosynthesis (BBN) or recombination observations [36]. Finally, the morphology of the signal predicted from dark matter annihilation can end up being _too_ cuspy, based on the more common models of the Milky Way halo [26].
A class of dark matter models that addresses these drawbacks is exciting dark matter (XDM), in which collisions between dark matter particles excite the dark matter into a state which can then decay to a lower-energy state via emission of an electron-positron pair [16]. Such a model has the advantage of producing relatively low-energy electron-positron pairs even if the dark matter mass is in the GeV-TeV range characteristic of a WIMP. The velocity threshold required to produce an excitation also suppresses \(e^{+}\) production very close to the galactic centre where the velocity dispersion is lower, leading to a less cuspy predicted signal with respect to DM annihilation. Much work has focused on whether such a model could produce the correct 511-keV flux [19; 20; 21; 23; 25], and while some analyses of the morphology were included, these were performed before state-of-the art kinematic analyses of the Milky Way dynamics were available, did not use data based on the latest SPI measurements, and most did not attempt to quantify a goodness of fit between model predictions and available data.
In this work, we thus compare the morphology of the predicted XDM signal with the signal produced by more traditional annihilating dark matter. By testing these predictions against the most recent INTEGRAL data, we show that exciting dark matter provides a significantly better fit, for most realistic choices of the dark matter density profile.
Dark Matter Models
### Exciting Dark Matter
In the XDM scenario, the DM density is primarily composed of a stable state \(\chi_{0}\), which can be excited to a higher-energy state \(\chi_{1}\) and subsequently decay, producing an electron-positron pair. The excitation is induced by a collision between two DM particles, in contrast with the case of inelastic DM, which usually refers to collisions between DM and Standard Model particles in a detector (see e.g. [37]). To produce an electron-positron pair, the mass splitting must be at least twice the electron mass:
\[\delta m\equiv m_{1}-m_{0}\geq 2m_{e}\,. \tag{1}\]
In this work, we assume that \(\delta m=2m_{e}\).
This excitation is only possible if the center-of-mass energy is at least \(2m_{0}+\delta m\), meaning that the relative velocity between the two DM particles must be above a threshold:
\[v_{th}=\sqrt{4\delta m/m_{0}}\,. \tag{2}\]
This threshold introduces a velocity dependence to the cross section for excitation to occur [16]:
\[\sigma v_{rel}=\begin{cases}\sigma_{mr}\sqrt{v_{rel}^{2}-v_{th}^{2}}&v_{rel}>v _{th}\\ 0&v_{rel}\leq v_{th}\,,\end{cases} \tag{3}\]
where \(\sigma_{mr}\) is the cross section in the moderately-relativistic limit. As we will see below, the dependence on relative velocity translates into a dependence on galactocentric radius, suppressing the excitation rate near the Galactic Center.
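The kinematics in Eqs. (2)-(3) are easy to make concrete. The short sketch below works with velocities in units of c for the threshold and converts to km/s only for the printout; the 500 GeV benchmark is an illustrative value within the mass range considered here. It shows that the threshold relative velocity is several hundred km/s, larger than typical halo speeds, which is what suppresses the excitation rate where the dispersion is low.

```python
import numpy as np

M_E_GEV = 0.511e-3                      # electron mass [GeV]
C_KMS = 2.998e5                         # speed of light [km/s]

def v_threshold(m0_gev, delta_m_gev=2.0 * M_E_GEV):
    """Eq. (2): minimum relative velocity (in units of c) for excitation."""
    return np.sqrt(4.0 * delta_m_gev / m0_gev)

def sigma_v_rel(v_rel, v_th, sigma_mr=1.0):
    """Eq. (3): sigma * v_rel with the kinematic threshold."""
    return np.where(v_rel > v_th,
                    sigma_mr * np.sqrt(np.clip(v_rel**2 - v_th**2, 0.0, None)),
                    0.0)

# e.g. m_0 = 500 GeV: v_th ~ 2.9e-3 c ~ 860 km/s, so only the high-velocity tail
# of DM pairs in the halo can excite the heavier state
print(v_threshold(500.0) * C_KMS, "km/s")
```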
### Annihilating Dark Matter
Simpler than the above scenario is the case where a DM pair annihilates to an electron-positron pair. In this case, there is no threshold velocity, and no need for multiple DM states with a specific mass splitting. However, if the DM is very heavy, then an additional gamma-ray component above 511 keV would be expected from in-flight annihilation and final-state radiation, and the absence of such a signal limits the annihilating DM mass to \(m\lesssim\) 3-10 MeV [34], which puts this scenario in tension with cosmological bounds on the minimum mass of thermal DM [38; 39; 40; 41; 42; 36] (although see Ref. [43]). We take annihilating DM as an alternative to XDM, and compare how well the morphology of both signals fits observations.
## III Signal Rate Calculation
### Dark Matter Signal
Assuming that the electron-positron production and annihilation are in equilibrium, that each collision yields one \(e^{+}e^{-}\) pair, and that the electrons and positrons do not travel far before annihilating, the flux per steradian of 511 keV gamma rays for a given galactic longitude \(l\) and latitude \(b\) is given by the line of sight integral [16]
\[\Phi_{511}(b,l)=\frac{1-0.75f_{p}}{4\pi}\int_{0}^{\infty}\mathrm{d}x\langle \sigma v\rangle\left(\frac{\rho(r)}{m_{\chi}}\right)^{2}\,, \tag{4}\]
where \(f_{p}=0.967\) is the positronium fraction. Because of the form of Eq. (8), and the radial dependence of the dark matter halo dispersion velocity, the thermally-averaged cross section \(\langle\sigma v\rangle\) is a function of the galactocentric radius \(r(x,l,b)\).
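A direct numerical transcription of Eq. (4) is sketched below. The density profile and \(\langle\sigma v\rangle(r)\) are passed in as callables (the profiles are sketched in the next snippet; for annihilating DM \(\langle\sigma v\rangle\) is simply a constant). The Sun-Galactic-Center distance of 8.2 kpc, the line-of-sight truncation and the example direction are our own assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.integrate import quad

R_SUN_KPC = 8.2              # assumed solar galactocentric distance
KPC_CM = 3.086e21            # cm per kpc
F_P = 0.967                  # positronium fraction

def r_galacto(x_kpc, l_deg, b_deg):
    """Galactocentric radius at distance x along the line of sight (l, b)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    return np.sqrt(x_kpc**2 + R_SUN_KPC**2 - 2.0 * x_kpc * R_SUN_KPC * np.cos(l) * np.cos(b))

def flux_511(l_deg, b_deg, rho_gev_cm3, sigmav_cm3_s, m_chi_gev, x_max_kpc=50.0):
    """Eq. (4): 511 keV intensity [ph cm^-2 s^-1 sr^-1] towards (l, b)."""
    def integrand(x):
        r = r_galacto(x, l_deg, b_deg)
        return sigmav_cm3_s(r) * (rho_gev_cm3(r) / m_chi_gev) ** 2   # [cm^-3 s^-1]
    val, _ = quad(integrand, 0.0, x_max_kpc, points=[R_SUN_KPC], limit=200)
    return (1.0 - 0.75 * F_P) / (4.0 * np.pi) * val * KPC_CM         # dx: kpc -> cm

# e.g. annihilating DM (constant <sigma v>), just off the Galactic Center:
# flux_511(-0.25, -0.25, rho_gnfw, lambda r: sigmav_const, m_chi_gev)
```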
We consider two types of profile for the DM density distribution: a generalized Navarro-Frenk-White (gNFW) profile [44; 45] and an Einasto profile [46], given respectively by
\[\rho_{NFW}(r)=\frac{2^{3-\gamma}\rho_{s}}{(r/r_{s})^{\gamma}(1+r/r_{s})^{3-\gamma}}\,, \tag{5}\]
\[\rho_{Ein}(r)=\rho_{s}\exp\left[-\frac{2}{\alpha}((r/r_{s})^{\alpha}-1)\right]\,. \tag{6}\]
For both profiles, we use the best-fit values of the local density, slope, and scale radius obtained by Ref. [47] from rotation curve data of the Milky Way. We note that these data only extend down to radii of about 5 kpc, making it difficult to constrain the DM density at the Galactic Center, where the 511-keV signal that we compute should be strongest. For this reason, we consider four different models of the DM density profile: one Einasto and one gNFW profile for each of the two baryonic models used in Ref [47], denoted B1 and B2. The values of these parameters are shown in Table 1.
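A hedged sketch of the two profile families is given below, normalized so that the density at the solar radius equals the \(\rho_{DM}\) column of Table 1 (our reading of that column as the local density; the normalization trick makes the overall prefactor in Eqs. (5)-(6) drop out). The gNFW2/Einasto2 parameters are used as defaults, the Einasto shape is the standard form, and R_Sun = 8.2 kpc is assumed.

```python
import numpy as np

R_SUN_KPC = 8.2

def rho_gnfw(r, rho_local=0.39, r_s=8.1, gamma=1.3, r0=R_SUN_KPC):
    """Generalized NFW, Eq. (5) (gNFW2 parameters of Table 1), in GeV/cm^3."""
    shape = lambda x: 1.0 / ((x / r_s) ** gamma * (1.0 + x / r_s) ** (3.0 - gamma))
    return rho_local * shape(r) / shape(r0)

def rho_einasto(r, rho_local=0.38, r_s=9.2, alpha=0.18, r0=R_SUN_KPC):
    """Einasto profile, Eq. (6) (standard form; Einasto2 parameters of Table 1)."""
    shape = lambda x: np.exp(-(2.0 / alpha) * ((x / r_s) ** alpha - 1.0))
    return rho_local * shape(r) / shape(r0)

# density contrast between r = 1 kpc and the solar neighbourhood for the two shapes
print(rho_gnfw(1.0) / rho_gnfw(R_SUN_KPC), rho_einasto(1.0) / rho_einasto(R_SUN_KPC))
```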
To compute the radial dependence of the thermally averaged cross section, we require the relative velocity distribution of DM as a function of its position in the galaxy. We model the velocity distribution at any point in the galaxy as a Maxwellian, as in the Standard Halo Model [48]:
\[f(\vec{v})=\frac{1}{N_{esc}(2\pi\sigma_{v}^{2})^{3/2}}\,e^{-\frac{v^{2}}{2\sigma_{v}^{2}}}\,\Theta(v_{esc}-v)\,, \tag{7}\]
where \(N_{esc}\) is a normalization constant accounting for the cutoff at \(v_{esc}\). Neglecting the escape velocity cutoff, the distribution of relative velocities between two particles takes the same form, with the replacement \(\sigma_{v}\to\sqrt{2}\,\sigma_{v}\)
\begin{table}
\begin{tabular}{l c c c} \hline Model & \(\rho_{DM}\) [GeV/cm\({}^{3}\)] & \(r_{s}\) [kpc] & \(\alpha,\gamma\) \\ \hline \hline gNFW1 & 0.30 & 9 & 1.2 \\ Einasto1 & 0.30 & 11 & 0.11 \\ gNFW2 & 0.39 & 8.1 & 1.3 \\ Einasto2 & 0.38 & 9.2 & 0.18 \\ \hline \hline \end{tabular}
\end{table}
Table 1: List of DM density profile models, with parameters from the B1 and B2 models of Ref. [47].
(see, e.g., Ref. [49]). When we include the cutoff, as well as the velocity dependence of the cross section, we get a more complicated form for the thermally averaged cross section:
\[\langle\sigma v\rangle=\langle\sigma v\rangle_{mr}2\pi\left(\frac{1 }{N_{esc}}\frac{1}{(2\pi\sigma_{v}^{2})^{3/2}}\right)^{2}\\ \times\int_{v_{th}}^{2v_{esc}}\mathrm{d}v_{rel}\sqrt{v_{rel}^{2}- v_{th}^{2}}e^{-\frac{2v_{esc}^{2}+v_{rel}^{2}}{2\sigma_{v}^{2}}}\sigma_{v}^{3} \left(2\sigma_{v}(e^{\frac{v_{rel}^{2}}{2\sigma_{v}^{2}}}-e^{\frac{v_{esc}v_{ rel}}{2\sigma_{v}^{2}}})+\sqrt{\pi}v_{rel}e^{\frac{4v_{esc}^{2}+v_{rel}^{2}}{4 \sigma_{v}^{2}}}\mathrm{Erf}\left(\frac{2v_{esc}-v_{rel}}{2\sigma_{v}} \right)\right)\,. \tag{8}\]
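The closed form above can be cross-checked by brute force: draw pairs of velocities from the truncated Maxwellian of Eq. (7), form the relative velocity, and average Eq. (3), reusing `sigma_v_rel` from the earlier snippet. The dispersion and escape speed are passed in as numbers here; they are computed as functions of radius further below. All numerical values in the example loop are purely illustrative.

```python
import numpy as np

def sigmav_kinematic_mc(sigma_v, v_esc, v_th, n=200_000, rng=None):
    """Monte Carlo thermal average of sqrt(v_rel^2 - v_th^2) * Theta(v_rel - v_th)
    for two truncated Maxwellians; multiplying by sigma_mr gives <sigma v>, Eq. (8)."""
    rng = np.random.default_rng(1) if rng is None else rng
    def draw():
        v = rng.normal(scale=sigma_v, size=(n, 3))
        return v[np.linalg.norm(v, axis=1) < v_esc]       # |v| < v_esc cutoff
    v1, v2 = draw(), draw()
    m = min(len(v1), len(v2))
    v_rel = np.linalg.norm(v1[:m] - v2[:m], axis=1)
    return np.mean(sigma_v_rel(v_rel, v_th))

# the same threshold suppresses excitation more strongly where the dispersion is low
for disp in (250.0, 150.0, 80.0):                         # km/s, illustrative values
    print(disp, sigmav_kinematic_mc(disp, v_esc=600.0, v_th=860.0))
```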
In order to use Eq. (8), we need \(\sigma_{v}\) and \(v_{esc}\) as a function of position in the galaxy. Assuming spherical symmetry, \(\sigma_{v}\) can be computed using the Jeans Equation [49]:
\[\sigma_{v}^{2}(r)=\frac{1}{\rho(r)}\int_{\infty}^{r}\rho(r)\frac{\mathrm{d} \phi}{dr}dr\,, \tag{9}\]
where \(\rho(r)\) is the DM density and \(\phi(r)\) is the gravitational potential of the galaxy (DM and baryonic components). The gravitational potential due to the DM is just
\[\phi_{DM}(r)=\int_{\infty}^{r}\frac{GM_{enc}}{r^{\prime 2}}dr^{\prime}=4\pi G \int_{\infty}^{r}\frac{\mathrm{d}r^{\prime}}{r^{\prime 2}}\int_{0}^{r^{ \prime}}\mathrm{d}r^{\prime\prime}\rho(r^{\prime\prime})r^{\prime\prime 2}\,. \tag{10}\]
To make use of Eq. (9), we employ spherically symmetrized potentials for the baryonic components, as in Refs. [50; 51; 52]. We include the bulge and disk potentials used in Ref. [52],
\[\phi_{Bulge}(r)=-\frac{GM_{b}}{r+c_{0}}\,, \tag{11}\]
\[\phi_{Disk}(r)=-\frac{GM_{d}}{r}(1-e^{-r/b_{d}})\,, \tag{12}\]
where \(M_{b}=1.5\times 10^{10}\) M\({}_{\odot}\) is the mass of the bulge, \(c_{0}=0.6\) kpc is the bulge scale radius, \(M_{d}=7\times 10^{10}\) M\({}_{\odot}\) is the mass of the disk, and \(b_{d}=4\) kpc is the disk scale radius. Our final model for the gravitational potential is then
\[\phi(r)=\phi_{DM}(r)+\phi_{Bulge}(r)+\phi_{Disk}(r)\,. \tag{13}\]
Once we have the potential, the escape velocity is also easy to compute [52]:
\[v_{esc}(r)=\sqrt{-2\phi(r)}\,. \tag{14}\]
With all these pieces, we can now use Eq. (4) to compute the 511 keV flux due to both exciting and annihilating DM.
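The pieces of Eqs. (9)-(14) can be assembled as sketched below (reusing `rho_gnfw` from the profile snippet). The unit conventions, the outer truncation radius, and writing the Jeans integral outward from r (the sign convention that yields a positive dispersion) are our own numerical choices; the baryonic potentials and their parameters follow Eqs. (11)-(12).

```python
import numpy as np
from scipy.integrate import quad

G = 4.30091e-6                  # kpc (km/s)^2 / M_sun
GEV_CM3_TO_MSUN_KPC3 = 2.63e7   # converts a density in GeV/cm^3 to M_sun/kpc^3
M_B, C0 = 1.5e10, 0.6           # bulge mass [M_sun] and scale radius [kpc], Eq. (11)
M_D, B_D = 7.0e10, 4.0          # disk mass [M_sun] and scale radius [kpc], Eq. (12)
R_MAX = 500.0                   # outer truncation radius [kpc] (our numerical choice)

def m_dm(r, rho):
    """Dark matter mass enclosed within r [M_sun]; rho(r) in GeV/cm^3."""
    val, _ = quad(lambda x: 4.0 * np.pi * x**2 * rho(x) * GEV_CM3_TO_MSUN_KPC3, 0.0, r)
    return val

def phi_total(r, rho):
    """Eqs. (10)-(13): total spherically symmetrized potential in (km/s)^2."""
    phi_dm, _ = quad(lambda x: -G * m_dm(x, rho) / x**2, r, R_MAX, limit=200)
    return phi_dm - G * M_B / (r + C0) - G * M_D / r * (1.0 - np.exp(-r / B_D))

def dphi_dr(r, rho):
    """Radial derivative of the total potential (analytic for the baryons)."""
    disk = G * M_D * ((1.0 - np.exp(-r / B_D)) / r**2 - np.exp(-r / B_D) / (B_D * r))
    return G * m_dm(r, rho) / r**2 + G * M_B / (r + C0)**2 + disk

def v_esc(r, rho):
    return np.sqrt(-2.0 * phi_total(r, rho))                       # Eq. (14)

def sigma_v(r, rho):
    """1D velocity dispersion from the isotropic Jeans equation, Eq. (9)."""
    val, _ = quad(lambda x: rho(x) * dphi_dr(x, rho), r, R_MAX, limit=200)
    return np.sqrt(val / rho(r))

print(v_esc(8.2, rho_gnfw), sigma_v(8.2, rho_gnfw))                # km/s at the Sun
```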
### Disk Flux
On top of a possible 511 keV signal from DM, we know that there should exist 511 keV emission from the galactic disk, due to radioactive isotopes such as \({}^{26}\)Al. We model the disk emission using the Robin young disk model [53]:
\[\dot{n}(r)=\dot{n}_{0}\left(e^{-\frac{a^{2}}{R_{0}^{2}}}-e^{-\frac{a^{2}}{R_ {1}^{2}}}\right)\,, \tag{15}\]
where \(R_{0}\) = 5 kpc, \(R_{1}\) = 3 kpc, and
\[a=\sqrt{x^{2}+y^{2}+z^{2}/\epsilon^{2}}\,. \tag{16}\]
Here \(\epsilon\) parametrizes the disk scale height, and in our fit is allowed to vary between 0.01 and 0.03.
\({}^{26}\)Al \(\beta^{+}\) decay leads to an excited state of \({}^{26}\)Mg, which yields a deexcitation gamma line at 1809 keV. Based on 1809 keV measurements, the corresponding total 511 keV emission due to \({}^{26}\)Al is (7.33 \(\pm\) 0.89) \(\times 10^{-4}\) ph cm\({}^{-2}\) s\({}^{-1}\)[26; 54]. The disk emission could be larger than this, due to other elements (which do not come with such a convenient tracer line), but it should not be smaller. We use this value as an additional constraint on the value of \(\dot{n}_{0}\), as described below.
## IV Data Fitting
We compare the computed 511 keV flux from the disk and DM to the data from SPI aboard the INTEGRAL satellite used in Refs. [55; 9] (data provided by the authors). The data are based on 11 years of exposure, with some gaps due to solar activity, calibration or annealing. Recent analyses have made extensive use of the \(\sim\)3-year data presented in Ref. [6]. We stress that more exposure is particularly important in teasing out the morphology of such a signal because of the shallow intensity gradients involved and the large isotropic (instrumental) background 511 keV event rate.
Here we describe the dataset used in terms of galactic longitude \(l\) and latitude \(b\). At longitudes near the galactic center, we consider three latitudinal profiles, for the longitude ranges \(-15.25^{\circ}<l<-6.25^{\circ}\), \(-5.25^{\circ}<l<3.75^{\circ}\), and \(4.75^{\circ}<l<13.75^{\circ}\). Each profile consists of seven 3-degree bins in the range \(-10.75^{\circ}<b<10.25^{\circ}\). Data points for the central profile are shown in Fig. 1. For longitudes further from the galactic center, we no longer have latitude profiles. Instead, we have a longitudinal profile integrated over all latitudes
\(b<10.25^{\circ}\), in 3-degree longitude bins covering the range \(-31.75^{\circ}<l<34.25^{\circ}\). To avoid overlap with the latitudinal profiles, we use the longitudinal profile only in the ranges \(-31.75^{\circ}<l<-16.75^{\circ}\) and \(16.25^{\circ}<l<34.25^{\circ}\). This is shown in Fig. 2, where we have coloured the data points that do not contribute to the likelihood analysis in periwinkle blue.
For each of the DM profiles described in Table 1, we fit the DM plus disk flux to the data described above. We define a modified \(\chi^{2}\), which penalizes the fit if it requires a disk flux smaller than the disk flux inferred from 1809 keV data:
\[\chi^{2}=\sum_{i}\frac{(y_{i,fit}-y_{i})^{2}}{\sigma_{i}^{2}}+\frac{(D_{fit}-D_{Al})^{2}}{\sigma_{Al}^{2}}\Theta(D_{Al}-D_{fit})\,, \tag{17}\]
where \(D_{fit}\) is the fit value of the all-sky disk flux, \(D_{Al}=7.33\times 10^{-4}\) ph cm\({}^{-2}\) s\({}^{-1}\) is the inferred flux from 1809 keV data and \(\sigma_{Al}=8.9\times 10^{-5}\) ph cm\({}^{-2}\) s\({}^{-1}\) is the uncertainty on the aluminum contribution. The step function ensures that this term only penalizes the fit for a disk contribution that is too small, as it could be larger due to contributions from other \(\beta^{+}\)-decaying isotopes. \(y_{i}\) and \(y_{i,fit}\) are the observed and fit values of the flux in a given bin, and \(\sigma_{i}\) the uncertainty on the flux in that bin.
We find the minimum \(\chi^{2}\) by varying the disk height parameter within the range \(0.01\leq\epsilon\leq 0.03\), and also varying the normalization of the disk flux. Below, we present our results in the plane of \(m_{\chi}\) and \(\langle\sigma_{mr}v\rangle\).
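A stripped-down version of this fit is sketched below: Eq. (17) with its one-sided \({}^{26}\)Al penalty, and a simple grid search over the disk normalization for fixed templates. In the actual analysis the disk scale height \(\epsilon\) and the DM mass and cross section are varied as well; the template names, grid and ranges here are illustrative only.

```python
import numpy as np

D_AL, SIG_AL = 7.33e-4, 8.9e-5      # 26Al-inferred all-sky disk flux and its uncertainty

def chi2_modified(model_bins, data_bins, data_err, disk_total):
    """Eq. (17): bin-by-bin chi^2 plus a penalty only if the fitted disk flux
    falls below the 26Al-inferred value."""
    chi2 = np.sum((model_bins - data_bins) ** 2 / data_err ** 2)
    if disk_total < D_AL:
        chi2 += (disk_total - D_AL) ** 2 / SIG_AL ** 2
    return chi2

def best_disk_norm(dm_bins, disk_bins_unit, disk_total_unit, data_bins, data_err):
    """Grid search over the disk normalization for fixed DM and disk templates."""
    amps = np.linspace(0.0, 5.0, 501)
    chis = [chi2_modified(dm_bins + a * disk_bins_unit, data_bins, data_err,
                          a * disk_total_unit) for a in amps]
    k = int(np.argmin(chis))
    return amps[k], chis[k]
```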
## V Results
Table 2 shows the best fit \(\chi^{2}\), and the corresponding values of \(m_{\chi}\) and \(\langle\sigma_{mr}v\rangle\), for the four DM density profiles we consider, each for both annihilating DM and XDM. For three of the four profiles (both gNFW profiles and one Einasto profile), XDM provides a better fit than annihilating DM by \(\Delta\chi^{2}>16\). Assuming the test statistic is distributed as a chi-squared with one degree of freedom, this would correspond to a \(4\sigma\) improvement. The difference for the gNFW profiles is not surprising given the results of Ref. [26], which found that annihilating DM with a gNFW profile could not provide a good fit to the observed longitudinal profile. Ref. [26] also found that an Einasto profile (assuming again annihilating DM) could fit the observed excess much better than a gNFW profile could. While the Einasto profile of model B2 fits the data nearly as well with annihilating DM as with XDM, this is not true for the cuspier Einasto profile of model B1. For the B1 Einasto profile, the annihilating DM case is actually a worse fit than the annihilating B1 gNFW profile, and is significantly worse than any profile with XDM.
Figures 1 and 2 illustrate the difference between annihilation and XDM fits for the B2 gNFW model from Ref. [47]. We find that the case of annihilating DM is too cuspy to fit the data well, with a sharp peak at zero longitude. However, because for XDM the rate of electron-positron pair production is kinematically suppressed in the Galactic Center, the profile is less sharply peaked, and fits the observed data noticeably better.
In Fig. 3, we show contours corresponding to 1- and 2-\(\sigma\) best fit regions for the four different density profiles we consider. The filled (dark green) contours are for the
Figure 2: Same as Fig. 1, but showing the longitudinal profile. The lower normalization in the central bin compared to Fig. 1 is due to the longitudinal bins covering a larger solid angle, and thus the central bin being less dominated by the peak at the Galactic Center. As above, the slightly asymmetric shape is due only to the data being centered at -0.25 degrees.
Figure 1: The central latitudinal profile of 511-keV emission, with data provided by the authors of Ref. [55]. Blue and orange are the best fits using the gNFW2 profile, for exciting and annihilating DM, respectively. In both plots \(m_{\chi}=481\) GeV. The fits appear asymmetric only because the data are not centered quite on the Galactic Center, but rather at -0.25 degrees [9].
Einasto2 model, the one which gives the overall lowest \(\chi^{2}\). To estimate how the uncertainties in profile parameters (namely slope and scale radius) affect the position of these best fit contours, we also considered 12 other NFW profiles with parameters drawn from within the 2-\(\sigma\) contour shown in Ref. [47] for their B2 baryonic model. We included only profiles that fit the 511-keV data with \(\chi^{2}<70\). The gray region in Fig. 3 encompasses all of the corresponding 2-\(\sigma\) contours.
## VI Conclusions
We have studied the morphology of the 511-keV signal observed by SPI aboard the Integral satellite, performing the first statistical analysis to compare the fits of annihilating and exciting dark matter to this data. Previous work, which assumed annihilating dark matter, found that a generalized NFW profile is far too cuspy to fit the observed 511-keV data, and that an Einasto profile fit the data much better. On the other hand, we find that the ability of annihilating dark matter to fit the excess depends strongly on the parameters of the assumed density profile, and that an Einasto profile does not always give a better fit. Furthermore, we find that the fit for exciting dark matter is much less sensitive to the choice of profile than for annihilating dark matter, and in many cases is significantly better than the fit with annihilating dark matter. If one assumes that the Milky Way dark matter halo follows a generalized NFW profile, our results thus improve the viability of a dark matter explanation for the 511-keV excess. Our results also generally favor exciting dark matter over annihilating dark matter, which was already in tension with cosmological constraints on light particles.
Observations of other sources could help confirm this scenario. While the velocity dispersions in dwarf galaxies are likely too low to produce a signal from XDM, other galaxies and galaxy clusters offer a potential avenue for discovery. These remain out of reach of the sensitivity of INTEGRAL, but future experiments such as e-ASTROGAM [56] and COSI [57] may shed light on this scenario. Certain realisations of XDM may also be visible as inelastic dark matter in underground experiments. Although halo dark matter particles with \(\sim\)MeV mass splittings are out of kinematic reach, cosmic-ray boosted dark matter could be visible in heavy direct detection targets [58; 59; 60].
###### Acknowledgements.
We thank Thomas Siegert for providing us with the INTEGRAL/SPI data used in this work. This work is supported by the Natural Sciences and Engineering Research Council of Canada, the Arthur B. McDonald Canadian Astroparticle Physics Research Institute, the Canada Foundation for Innovation and the Province of Ontario via an Early Researcher Award. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science, and Economic Development, and by the Province of Ontario.
\begin{table}
\begin{tabular}{c c c c} \hline Model & \(\chi^{2}\) & \(m_{\chi}\) [GeV] & \(\langle\sigma_{mr}v\rangle\) [cm\({}^{3}\) s\({}^{-1}\)] \\ \hline \hline gNFW1, Ann & 79.9 & – & \(1.6\times 10^{-25}\left(m_{\chi}/\text{GeV}\right)^{2}\) \\ gNFW1, XDM & 62.4 & 637 & \(1.9\times 10^{-18}\) \\ \hline Einasto1, Ann & 87.7 & – & \(9.7\times 10^{-20}\left(m_{\chi}/\text{GeV}\right)^{2}\) \\ Einasto1, XDM & 65.6 & 486 & \(1.8\times 10^{-18}\) \\ \hline gNFW2, Ann & 99.8 & – & \(3.8\times 10^{-20}\left(m_{\chi}/\text{GeV}\right)^{2}\) \\ gNFW2, XDM & 63.7 & 481 & \(7.1\times 10^{-19}\) \\ \hline Einasto2, Ann & 62.3 & – & \(2.0\times 10^{-25}\left(m_{\chi}/\text{GeV}\right)^{2}\) \\ Einasto2, XDM & 61.2 & 547 & \(1.5\times 10^{-18}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: \(\chi^{2}\) and best fit DM properties for the different DM profiles considered, for annihilating and exciting DM. For the annihilation case, \(m_{\chi}\) and \(\langle\sigma_{mr}v\rangle\) are degenerate, so we give the best fit value of \(\langle\sigma_{mr}v\rangle\) in terms of \(m_{\chi}\) in the last column.
Figure 3: 1 and 2\(\sigma\) contours for XDM properties with the four density profiles we consider. For the Einasto2 model, which produces the best overall fit, we show filled contours. For the other models, we show unfilled curves. The gray region bounds the best fit region for these plus 12 other density profiles, showing the range of XDM models that could yield the observed 511 keV signal (see text). | |
2306.02741 | ZIGNeRF: Zero-shot 3D Scene Representation with Invertible Generative
Neural Radiance Fields | Generative Neural Radiance Fields (NeRFs) have demonstrated remarkable
proficiency in synthesizing multi-view images by learning the distribution of a
set of unposed images. Despite the aptitude of existing generative NeRFs in
generating 3D-consistent high-quality random samples within data distribution,
the creation of a 3D representation of a singular input image remains a
formidable challenge. In this manuscript, we introduce ZIGNeRF, an innovative
model that executes zero-shot Generative Adversarial Network (GAN) inversion
for the generation of multi-view images from a single out-of-domain image. The
model is underpinned by a novel inverter that maps out-of-domain images into
the latent code of the generator manifold. Notably, ZIGNeRF is capable of
disentangling the object from the background and executing 3D operations such
as 360-degree rotation or depth and horizontal translation. The efficacy of our
model is validated using multiple real-image datasets: Cats, AFHQ, CelebA,
CelebA-HQ, and CompCars. | Kanghyeok Ko, Minhyeok Lee | 2023-06-05T09:41:51 | http://arxiv.org/abs/2306.02741v1 | # ZIGNeRF: Zero-shot 3D Scene Representation
###### Abstract
Generative Neural Radiance Fields (NeRFs) have demonstrated remarkable proficiency in synthesizing multi-view images by learning the distribution of a set of unposed images. Despite the aptitude of existing generative NeRFs in generating 3D-consistent high-quality random samples within data distribution, the creation of a 3D representation of a singular input image remains a formidable challenge. In this manuscript, we introduce ZIGNeRF, an innovative model that executes zero-shot Generative Adversarial Network (GAN) inversion for the generation of multi-view images from a single out-of-domain image. The model is underpinned by a novel inverter that maps out-of-domain images into the latent code of the generator manifold. Notably, ZIGNeRF is capable of disentangling the object from the background and executing 3D operations such as 360-degree rotation or depth and horizontal translation. The efficacy of our model is validated using multiple real-image datasets: Cats, AFHQ, CelebA, CelebA-HQ, and CompCars.
## 1 Introduction
The remarkable success of generative adversarial networks (GANs) [8] has spurred significant advances in high-quality, realistic image generation. Particularly, following the emergence of StyleGAN [17], numerous 2D-based generative adversarial network models have benefited from a deeper understanding of latent spaces [16, 18]. Consequently, various computer vision tasks, such as conditional image generation and style transfer [13, 20], have shown substantial progress. However, 2D-based image generation models are constrained in their ability to generate novel view images due to their limited understanding of the underlying 3D geometry of real-world scenes.
To overcome this challenge, several studies have adopted the neural radiance field (NeRF) [24] approach, which encodes a scene into a multi-layer perceptron (MLP) to provide 3D rendering. Although conventional NeRF [24] has successfully facilitated the development of 3D-aware models and reduced computational costs in novel view synthesis tasks, it remains impractical to train a model overfitted to a single scene with multi-view images [24, 47]. Consequently, various studies have extended NeRF by integrating it with generative models, i.e., generative NeRF. Generative NeRF [1, 2, 6, 9, 27, 28, 34] models can be trained on unposed real-world images, whereas conventional NeRF necessitates multiple images of a single scene [37, 39, 44]. Moreover, generative NeRF has been employed for obtaining conditional samples through techniques such as class label information [14] or text encoding [7, 29, 30, 41].
Despite the convenience and intuitiveness of these approaches, they possess limitations in image editing and generating 3D representations of specific inputs, such as out-of-domain images or real-world images. To enable more practical applications, generative NeRF models have also
incorporated GAN inversion techniques [31, 32, 38, 50] for the 3D representation of particular input images, including out-of-distribution or real-world images. However, previous studies have faced a constraint that necessitates fine-tuning on pre-trained models for specific images [19, 21, 42, 46]. This requirement hinders the application of these models to numerous real samples simultaneously and renders the process time-inefficient, as it demands extensive fine-tuning.
In this study, we propose a novel zero-shot methodology for the generation of multi-view images, derived from input images unseen during the training process. This approach leverages a 3D-aware GAN inversion technique. Notably, our model proffers 3D-consistent renderings of unposed real images during inference, eliminating the need for supplementary fine-tuning.
Our architectural design bifurcates into two distinct components: the 3D-generation module and the 3D-aware GAN inversion module. The former is founded on the principles of GIRAFFE [27], which successfully amalgamates the compositional attributes of 3D real-world scenes into a generative framework. To enhance the precision of 3D real-world reconstruction and improve image quality, we introduce modifications to the GIRAFFE module, specifically in the decoder and neural renderer. The 3D-aware GAN inverter, on the other hand, is trained with images synthesized from the generator. This strategic approach enables the inverter to accurately map the input image onto the generator's manifold, regardless of the objects' pose. Example results of our model are displayed in Fig. 1.
Figure 1: **Demonstration of the 3D reconstruction results employing our proposed method, ZIGNeRF.** This illustration depicts the successful zero-shot 3D GAN inversion across various real-world image datasets [5, 15, 45].
We subject our model to rigorous evaluation, utilizing five diverse datasets: Cats, CelebA, CelebA-HQ, AFHQ, and CompCars. Additionally, we demonstrate the model's robustness by inputting FFHQ images into a model trained on CelebA-HQ. The primary contributions of this work are as follows:
* We present ZIGNeRF, a pioneering approach that delivers a 3D-consistent representation of real-world images via zero-shot estimation of latent codes. To our knowledge, this is the first instance of such an approach in the field.
* ZIGNeRF exhibits robust 3D feature extraction capabilities and remarkable controllability with respect to input images. Our model can perform 3D operations, such as a full 360-degree rotation of real-world car images, a feat not fully achieved by many existing generative NeRF models.
## 2 Related Work
### Neural Radiance Field (NeRF)
NeRF is an influential method for synthesizing photorealistic 3D scenes from 2D images. It represents a 3D scene as a continuous function using a multi-layer perceptron (MLP) that maps spatial coordinates to RGB and density values, and then generates novel view images through conventional volume rendering techniques. Consequently, NeRF significantly reduces computational costs compared to existing voxel-based 3D scene representation models [11; 26; 35; 37; 49]. However, the training method of NeRF, which overfits a single model to a single scene, considerably restricts its applicability and necessitates multiple structured training images, including camera viewpoints [3; 37].
### Generative NeRF
Generative NeRFs optimize networks to learn the mapping from latent code to 3D scene representation, given a set of unposed 2D image collections rather than using multi-view supervised images with ground truth camera poses. Early attempts, such as GRAF [34] and pi-GAN [2], demonstrated promising results and established the foundation for further research in the generative NeRF domain. Recent works on generative NeRF have concentrated on generating high-resolution 3D-consistent images. The recently proposed StyleNeRF [9] successfully generates high-resolution images by integrating NeRF into a style-based generator, while EG3D [1] exhibits impressive results with a hybrid architecture that improves computational efficiency and image quality.
However, real-life applications frequently necessitate conditional samples that exhibit the desired attribute rather than random samples in data distribution. We adopt GAN inversion as a conditional method, as opposed to class-based or text encoding conditional methods, which are prevalent in 2D generative models [4]. The aforementioned conditional generation techniques, such as class-based or text encoding methods, possess limitations. Firstly, the training dataset must include conditional information, such as labels or text corresponding to each sample. Secondly, they cannot provide 3D representation of real-world images as conditional input. We address these limitations in existing conditional generative NeRF models by introducing GAN Inversion into generative NeRF for conditional generation.
### 3D aware GAN inversion
With the remarkable progress of GANs, numerous studies have endeavoured to understand and explore their latent space to manipulate the latent code meaningfully. GAN inversion represents the inverse process of the generator in GANs. Its primary objective is to obtain the latent code by mapping a given image to the generator's latent space. Ideally, the latent code optimized with GAN inversion can accurately reconstruct an image generated from the pre-trained generator. The output sample can be manipulated by exploring meaningful directions in the latent space [36]. Moreover, real-world images can be manipulated in the latent space using GAN inversion.
Several studies have investigated 3D GAN inversion with generative NeRF to generate multi-view images of input samples and edit the samples in 3D manifolds. Most previous works fine-tuned the pre-trained generator due to the utilization of optimization-based GAN inversion methods. However,
additional steps for fine-tuning the generator for GAN inversion impose limitations in terms of adaptability and computational costs.
In this paper, we propose a novel inverter for 3D-aware zero-shot GAN inversion. The proposed inverter can map out-of-domain images into the latent space of the generator. Our model can generate 3D representations of real-world images without requiring additional training steps. The proposed 3D-aware zero-shot GAN inversion maximizes applicability since the trained model can be directly applied to out-of-domain images.
## 3 Method
This work seeks to generate multi-view images from an out-of-domain image by combining generative NeRF with GAN inversion. The proposed method, graphically delineated in Fig. 2, encompasses two distinct phases: the 3D-generation segment and the 3D-aware inverter. The first phase involves training the 3D-generation component, an architecture based on GIRAFFE, augmented by enhancements in the neural renderer and the discriminator modules to fortify and expedite the training process. In the second phase, the 3D-aware inverter is trained with the pre-trained generator. The novel inverter is designed to transform out-of-domain images into latent codes within the generator's latent space. Consequently, the generator can produce multi-view images of the out-of-domain image using the latent code derived from the inverter. Throughout the training of the inverter, we utilize the images generated from the generator, imbued with 3D information, as the training dataset. At test time, the inverter executes zero-shot inversion on real-world images, obviating the need for additional fine-tuning for unseen images. The proposed method thereby holds great promise for generating 3D-consistent multi-view images from real-world input images.
### 3D Generation
**Compositional Generative Neural Feature Field.** Our 3D-generator represents a scene with a compositional generative neural feature field, a continuous function inherited from GIRAFFE. This is essentially a combination of feature fields, each representing an object in a single scene, with the background also considered an object. In the 3D-generator, a 3D location, \(\mathbf{x}\in\ \mathbb{R}^{3}\), a viewing direction, \(\mathbf{d}\in\ \mathbb{S}^{2}\), and a latent code, \(\mathbf{z}\sim\mathcal{N}(0,1)\), are mapped to a volume density \(\sigma\in\ \mathbb{R}^{+}\) and a high-dimensional feature field \(\mathbf{f}\in\ \mathbb{R}^{M_{f}}\), rather than an RGB colour \(\mathbf{c}\in\ \mathbb{R}^{3}\).
Figure 2: **The comprehensive architecture of ZIGNeRF.** The 3D generative component is trained to produce photorealistic images consistent with 3D structures by mapping the latent code and camera pose to a synthetic image. Subsequently, the inverter is trained in conjunction with the pre-trained generator and discriminator.
Affine transformation is applied to objects in the scene so that each object can be controlled in terms of poses, which include scale, translation, and rotation:
\[T=\{\mathbf{s},\mathbf{t},\mathbf{R}\}\,, \tag{1}\]
where \(\mathbf{s}\), \(\mathbf{t}\in\ \mathbb{R}^{3}\) indicate scale and translation parameters, respectively, and \(\mathbf{R}\in\) SO(3) determines rotation. The affine transformation enables object-level control by generating the bounding box corresponding to T of a single object:
\[\tau(\mathbf{x})=\mathbf{R}\cdot\mathbf{s}\mathbf{I}\cdot\mathbf{x}+\mathbf{t}, \tag{2}\]
where \(\mathbf{I}\) is the \(3\times 3\) identity matrix. Compositional generative neural feature field is parameterized with an MLP as follows:
\[C((\sigma_{i},\mathbf{f}_{i})_{i=1}^{N}) =C(f_{oi}(\gamma(\tau^{-1}(\mathbf{x})),\gamma(\tau^{-1}(\mathbf{ d})),\mathbf{z}_{i})_{i=1}^{N}), \tag{3}\] \[z =[\mathbf{z_{s}^{1}},\mathbf{z_{a}^{1}},...,\mathbf{z_{s}^{N}}, \mathbf{z_{a}^{N}}], \tag{4}\]
where \(\gamma\left(\cdot\right)\) is the positional encoding function [24], which is applied separately to \(\mathbf{x}\) and \(\mathbf{d}\), and \(C\left(\cdot\right)\) is the compositional operator that composites the feature fields of the N-1 objects and the background. We then volume render the composited volume density and feature field rather than directly outputting the final image. The 2D feature map, which is fed into the neural renderer to produce the final synthesized output, is obtained by the volume rendering function \(\pi_{v}\):
\[\pi_{v}(C(\sigma,\mathbf{f}))=\mathbf{F}. \tag{5}\]
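To make the composition and rendering steps of Eqs. (2)-(5) concrete, the sketch below spells them out in plain NumPy. It follows the density-weighted composition used in GIRAFFE; the sampling scheme, array shapes, and helper names are illustrative assumptions rather than the actual ZIGNeRF implementation.

```python
import numpy as np

def to_object_frame(x, R, s, t):
    """Inverse of the affine transformation in Eq. (2): map a scene point into
    the canonical frame of one object (a scalar scale is assumed for simplicity)."""
    return (R.T @ (x - t)) / s

def composite(sigmas, feats):
    """Density-weighted composition of N per-object fields (Eq. 3).
    sigmas: (N, S) densities for S samples along one ray; feats: (N, S, Mf)."""
    sigma = sigmas.sum(axis=0)
    w = sigmas / np.maximum(sigma, 1e-8)
    return sigma, (w[..., None] * feats).sum(axis=0)

def volume_render(sigma, f, deltas):
    """Classical volume rendering of the composited feature ray (Eq. 5)."""
    alpha = 1.0 - np.exp(-sigma * deltas)                     # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans                                   # contribution of each sample
    return (weights[:, None] * f).sum(axis=0)                 # one Mf-dimensional pixel feature

# toy example: two objects (foreground + background), 64 samples along one ray, Mf = 128
S, Mf = 64, 128
sigmas, feats = np.random.rand(2, S), np.random.rand(2, S, Mf)
sigma, f = composite(sigmas, feats)
pixel_feature = volume_render(sigma, f, deltas=np.full(S, 0.05))
```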
Neural renderer with residual networks. Our model outputs the final synthetic image by applying neural rendering to the feature map produced by volume rendering. We observe that the original neural renderer of GIRAFFE does not preserve features well. Furthermore, the learning rates of the decoder and the neural renderer are not synchronized; hence the training of the generator is unstable.
We improve the simple and unstable neural renderer of GIRAFFE. Our neural renderer replaces 3x3 convolution layer blocks with residual blocks [10] and employs the ReLU activation rather than leaky ReLU activation [43] for faster and more effective rendering. To stabilize the neural rendering, we adopt spectral normalization [25] as weight normalization. We experimentally verify that the modified neural renderer improves the stability of the training and the quality of the outputs. Our neural renderer, which maps the feature map F to the final image \(\hat{I}\in\ \mathbb{R}^{H\times W\times 3}\), is parameterized as:
\[\pi_{\theta}(\mathbf{F})=\mathbf{\hat{I}}. \tag{6}\]
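A minimal PyTorch sketch of the kind of building block described above: a 3x3 convolutional residual block with ReLU activations and spectral normalization, stacked into a small renderer. Channel counts and the upsampling arrangement are illustrative assumptions, not the exact ZIGNeRF architecture.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class ResBlock(nn.Module):
    """3x3 conv residual block with ReLU and spectral normalization."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = spectral_norm(nn.Conv2d(in_ch, out_ch, 3, padding=1))
        self.conv2 = spectral_norm(nn.Conv2d(out_ch, out_ch, 3, padding=1))
        # 1x1 projection so the skip connection matches the channel count
        self.skip = spectral_norm(nn.Conv2d(in_ch, out_ch, 1)) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.act(self.conv1(x))
        h = self.conv2(h)
        return self.act(h + self.skip(x))

class NeuralRenderer(nn.Module):
    """Maps an Mf-channel feature map F to an RGB image (Eq. 6)."""
    def __init__(self, feat_ch=128, img_ch=3, n_blocks=3):
        super().__init__()
        blocks, ch = [], feat_ch
        for _ in range(n_blocks):
            blocks += [ResBlock(ch, ch // 2),
                       nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)]
            ch //= 2
        self.blocks = nn.Sequential(*blocks)
        self.to_rgb = spectral_norm(nn.Conv2d(ch, img_ch, 3, padding=1))

    def forward(self, F):
        return torch.sigmoid(self.to_rgb(self.blocks(F)))

img = NeuralRenderer()(torch.randn(1, 128, 32, 32))   # -> (1, 3, 256, 256)
```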
Discriminator. As in the vanilla GAN [8], the discriminator outputs a probability indicating whether the input image is real or fake. We replace the 2D CNN-based discriminator with one built from residual blocks employing spectral normalization as weight normalization.
Objectives.The overall objective function of the 3D-generative part is:
\[L_{\text{G, D}}=L_{\text{GAN}}+\lambda L_{\text{R1}}, \tag{7}\]
where \(\lambda\) = 10. We use GAN objective [8] with R1 gradient penalty [23] to optimize the network.
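For reference, the R1 term in Eq. (7) penalizes the squared gradient norm of the discriminator on real samples. A minimal PyTorch sketch, with the discriminator interface assumed:

```python
import torch

def r1_penalty(discriminator, real_images):
    """R1 regularizer: squared gradient norm of D's output w.r.t. real images."""
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)
    grad, = torch.autograd.grad(outputs=scores.sum(), inputs=real_images,
                                create_graph=True)
    return grad.pow(2).reshape(grad.size(0), -1).sum(dim=1).mean()

# used as, e.g.: d_loss = gan_loss + 10.0 * r1_penalty(D, real_batch)
```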
### 3D-aware Inverter
To invert a given image into latent codes within the generator's latent space, we introduce a novel inverter. This inverter is designed by stacking the residual encoder block with ReLU activations, as depicted in Fig. 3. Four linear output layers are situated at the culmination of the inverter to facilitate output. These residual blocks extract the feature of the input image, and each linear output layer estimates the \(\mathbf{z_{s}^{obj}}\), \(\mathbf{z_{a}^{obj}}\), \(\mathbf{z_{s}^{bg}}\), \(\mathbf{z_{a}^{bg}}\) of the input image.
The challenge of 3D-aware GAN inversion involves mapping multi-view images of a single object into a unique latent code. To construct a 3D-aware inverter, we opt to use the synthesized image \(\hat{\mathbf{I}}\) as the training data. Given that we already possess the source parameters of the generated image, the inverter solely estimates the latent code \(\mathbf{z^{predict}}\) of the input image. The generated training images equip the inverter to extract the feature of unseen images, which vary in viewing direction, scale, and rotation. Following the latent code inference, the pre-trained generator reconstructs the input image using \(\mathbf{z^{predict}}\) and source parameters, which include camera pose, \(\boldsymbol{\xi^{\text{source}}}\), and compositional parameter, \(\mathbf{T^{source}}=\{\mathbf{s},\mathbf{t},\mathbf{R}\}\):
\[I_{\theta}(\hat{\mathbf{I}})=\mathbf{z^{predict}}, \tag{8}\]
\[G_{\theta}(\mathbf{z^{predict}},\mathbf{T^{source}},\boldsymbol{\xi^{\text{ source}}})=\hat{\mathbf{I}}^{\text{reconst}}. \tag{9}\]
While training the inverter to estimate the source latent code, we found that an L1 loss between the two latent codes in latent space was inadequate for reconstructing the scene. Thus, we opted to employ a GAN loss and an L1 image-level loss to generate a plausible image. In addition, we incorporated two perceptual losses, namely the Structural Similarity Index Measure (SSIM) [40] and the Learned Perceptual Image Patch Similarity (LPIPS) [48] loss, to preserve the fine details of the source image. The inverter can be optimized using the following function:
\[L_{I} =L_{\text{GAN}}(\hat{\mathbf{I}}^{\text{predict}})+\lambda_{1}L_{ \text{latent}}(\mathbf{z^{source}},\mathbf{z^{predict}})\] \[+\lambda_{2}L_{\text{reconst}}(\hat{\mathbf{I}}^{\text{source}}, \hat{\mathbf{I}}^{\text{predict}})+\lambda_{3}L_{\text{percept}}(\hat{ \mathbf{I}}^{\text{source}},\hat{\mathbf{I}}^{\text{predict}}),\]
where \(\hat{\mathbf{I}}^{\text{predict}}\) indicates the image reconstructed by the pre-trained generator using \(\mathbf{z^{predict}}\). \(L_{\text{latent}}\) and \(L_{\text{reconst}}\) represent latent-level and image-level loss, respectively, both utilizing L1 loss. \(L_{\text{percept}}\) signifies image-level perceptual loss, employing the LPIPS loss and SSIM loss.
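A condensed sketch of one inverter optimization step following Eqs. (8)-(9) and the loss above, assuming a frozen pre-trained generator and discriminator. The generator, inverter, and perceptual-loss callables are placeholders standing in for the actual modules, and the non-saturating adversarial term is one common choice rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def inverter_step(inverter, generator, discriminator, opt,
                  z_src, T_src, xi_src, lpips_loss, ssim,
                  lambdas=(10.0, 100.0, 1.0)):
    """One training step for the inverter; generator and discriminator stay frozen."""
    l1, l2, l3 = lambdas
    with torch.no_grad():                        # training image synthesized from a known source code
        img_src = generator(z_src, T_src, xi_src)
    z_pred = inverter(img_src)                   # latent estimate for the input image
    img_pred = generator(z_pred, T_src, xi_src)  # reconstruction with the same source pose
    adv = F.softplus(-discriminator(img_pred)).mean()          # adversarial term (one common form)
    loss = (adv
            + l1 * F.l1_loss(z_pred, z_src)                    # latent-level L1
            + l2 * F.l1_loss(img_pred, img_src)                # image-level L1
            + l3 * (lpips_loss(img_pred, img_src)              # perceptual terms:
                    + (1.0 - ssim(img_pred, img_src))))        # SSIM converted to a loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)
```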
### Training specifications
During the training phase, we randomly sample the latent codes \(\mathbf{z_{s}},\mathbf{z_{a}}\sim\mathcal{N}(\mathbf{0},\mathbf{1})\), and a camera pose \(\boldsymbol{\xi}\sim p_{\xi}\). The parameters \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) are set to 10, 100, and 1, respectively, for training the inverter. The model is optimized using the RMSProp optimizer [33], with learning rates of \(1\times 10^{-4}\), \(7\times 10^{-5}\), and \(1\times 10^{-4}\) for the generator, the discriminator, and the inverter, respectively. We utilize a batch size of 32. For the first 100,000 iterations, the generator and the discriminator are trained, and the inverter is trained for the next 50,000 iterations. During the training process of the inverter, the generator and the discriminator remain frozen.
Figure 3: **Schematic representation of the architecture of the inverter deployed in ZIGNeRF.**
## 4 Experiments
ZIGNeRF is evaluated concerning zero-shot feature extraction, 3D controllability, and adaptability. We test on five real-world datasets: Cats, AFHQ [5], CelebA [22], CelebA-HQ [15], and CompCar [45]. An additional dataset, FFHQ [17], is used to demonstrate the robust adaptation capabilities of the proposed model. All input images shown in this section were not used during the training process, thereby validating the zero-shot 3D GAN inversion with unseen images. We commence with a visual validation of the proposed model, examining the similarity between the input image and the reconstructed images and 3D-consistent controllability. The model is then evaluated using Frechet Inception Distance (FID) [12] as a metric. We conclude with ablation studies to validate the efficacy of the loss function in optimizing the inverter.
### Controllable 3D Scene Reconstruction
We visually demonstrate that our proposed model generates multi-view consistent images corresponding to the input image. Fig. 4 showcases 3D reconstruction on CelebA-HQ [15] and AFHQ [5], substantiating that the inverter successfully extracts facial features irrespective of gender or skin colour in human faces, and species in animal faces. Fig. 5 exhibits the model's controllability and object disentanglement with CompCar [45], indicating that the inverter estimates the latent code of the object and background effectively. Notably, the proposed model can facilitate 3D-consistent 360-degree rotation, a common limitation of generative NeRF methods. We further attest to the robustness of our model by applying it to FFHQ, as shown in Fig. 6.
Figure 4: **Display of \(256^{2}\) multi-view synthesis applied to facial datasets: CelebA-HQ [15] and AFHQ [5].**
Figure 5: **Visualisation of reconstructed images based on an input car image [45], following compositional operations. These illustrations highlight the effective disentanglement of the object from the background and the provision of 3D controllability.**
### Quantitative Evaluation
To thoroughly evaluate the efficacy of our proposed model, ZIGNeRF, we conduct experiments in both conditional and unconditional generation modes. The evaluation process involves a random sampling of 20,000 real images alongside 20,000 synthesized images, which is a conventional method to compare generative models. The results are displayed in Tab. 1.
In the context of the unconditional model, we generate samples using random latent codes. The training process entails 100,000 iterations. Notably, our model, ZIGNeRF, significantly outperforms the baseline GIRAFFE [27] model. As an illustration, for the CelebA(HQ) \(256^{2}\) dataset, ZIGNeRF achieves a score of 14.98, substantially lower than the GIRAFFE's score of 23.14. This is indicative of the model's ability to produce higher-quality images with fewer iterations.
Turning to the conditional synthesis, the latent codes estimated by the inverter on randomly sampled real images are employed. The training process for the generator is conducted over 100,000 iterations, while the inverter training comprises 50,000 iterations, during which the generator is kept static. When compared to GIRAFFE, ZIGNeRF demonstrates superior performance in conditional samples as well. For instance, in the AFHQ \(128^{2}\) dataset, our model attains a score of 14.02, marking a significant improvement over the GIRAFFE's score of 35.03.
### Ablation study
In the interest of validating the loss function deployed in training the inverter, we undertake an ablation study. The study scrutinizes the necessity of each loss component: latent loss, reconstruction loss, GAN loss, and perceptual loss. The imperative nature of each loss function is demonstrated through its incremental addition to the naive model, which is trained solely via latent code comparison. Fig. 7 illustrates the individual contribution of each loss function. It is observed that the naive model exhibits limited capability in reconstructing the input image. The reconstruction loss \(L_{\text{reconst}}\) aligns the reconstructed image with the input at an image-level. The GAN loss \(L_{\text{GAN}}\) is observed to enhance the realism of the reconstructed image, independent of improving the input-reconstructed image
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Models} & \multicolumn{2}{c}{Cats} & \multicolumn{2}{c}{CelebA(HQ)} & \multicolumn{2}{c}{CompCar} & \multicolumn{2}{c}{AFHQ} \\ & & \(128^{2}\) & \(256^{2}\) & \(128^{2}\) & \(256^{2}\) & \(128^{2}\) & \(256^{2}\) & \(128^{2}\) & \(256^{2}\) \\ \hline \hline \multirow{2}{*}{Unconditional} & GIRAFFE & 24.01 & 21.28 & 19.45 & 23.14 & 38.91 & 40.84 & 35.03 & 38.18 \\ & ZIGNeRF(ours) & **12.31** & **11.21** & **11.01** & **14.98** & **22.67** & **22.57** & **12.81** & **19.96** \\ \hline Conditional & ZIGNeRF(ours) & **15.06** & **16.83** & **14.77** & **25.66** & **25.97** & **25.41** & **14.02** & **28.78** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparative analysis of the FID between our proposed ZIGNeRF and a baseline model.** The models were trained on four distinct datasets with the resolution of \(128^{2}\) and \(256^{2}\).
Figure 6: **Presentation of 256\({}^{2}\) synthesized images conditioned on input FFHQ [17] images, produced by the model trained on the CelebA-HQ dataset [15].**
similarity. The full model elucidates that the perceptual loss \(L_{\text{percept}}\) plays a pivotal role in refining the expression of minute attributes, skin colour, and texture.
## 5 Conclusion
In this paper, we have proposed ZIGNeRF, an innovative technique that manifests a 3D representation of real-world images by infusing a 3D-aware zero-shot GAN inversion into generative NeRF. Our inverter is meticulously designed to map an input image onto a latent manifold, a learning process undertaken by the generator. During testing, our model generates a 3D reconstructed scene from a 2D real-world image, employing a latent code ascertained from the inverter. Rigorous experiments conducted with four distinct datasets substantiate that the inverter adeptly extracts features of input images with varying poses, thereby verifying the 3D controllability and immediate adaptation capabilities of our model.
Our novel approach carries the potential for wide application, given that our pipeline can be generally applied to other existing generative NeRFs. It is worth noting that this zero-shot approach is a pioneering contribution to the field, bringing forth a paradigm shift in 3D image representation. In future work, we envisage extending the proposed method by manipulating the inverted latent code for editing the input image, thereby further enhancing the capabilities of this innovative model.
Figure 7: **Ablation study of the loss functions employed in the training of the inverter within ZIGNeRF.** | Generative Neural Radiance Fields (NeRF) have demonstrated remarkable proficiency in synthesizing multi-view images by reflecting the learned distribution of training images. Existing generative NeRFs are highly capable of producing high-quality, 3D-consistent random samples within the data distribution, but generating a 3D representation of a single input image remains a challenging task. This paper introduces ZIGNeRF, an innovative model that generates multi-view images from a single out-of-domain image using zero-shot Generative Adversarial Network (GAN) inversion. The model is based on an innovative inverter that maps out-of-domain images to the latent code of the generator. Notably, ZIGNeRF is capable of disentangling the object from the background and performing 3D operations
2306.13894 | OUXT Polaris: Autonomous Navigation System for the 2022 Maritime RobotX
Challenge | OUXT-Polaris has been developing an autonomous navigation system by
participating in the Maritime RobotX Challenge 2014, 2016, and 2018. In this
paper, we describe the improvement of the previous vessel system. We also
indicate the advantage of the improved design. Moreover, we describe the
developing method under Covid-19 using simulation / miniture-size hardware and
the feature components for the next RobotX Challenge. | Kenta Okamoto, Akihisa Nagata, Kyoma Arai, Yusei Nagao, Tatsuki Nishimura, Kento Hirogaki, Shunya Tanaka, Masato Kobayashi, Tatsuya Sanada, Masaya Kataoka | 2023-06-24T07:57:42 | http://arxiv.org/abs/2306.13894v1 | # OUXT Polaris: Autonomous Navigation System for the 2022 Maritime RobotX Challenge
###### Abstract
OUXT-Polaris has been developing an autonomous navigation system by participating in the Maritime RobotX Challenge in 2014, 2016, and 2018. In this paper, we describe the improvements to the previous vessel system and indicate the advantages of the improved design. Moreover, we describe the development method adopted under COVID-19, using simulation and miniature hardware, and the future components for the next RobotX Challenge.
Maritime systems, Robotics, Unmanned surface vehicle
## I Introduction
We are motivated to develop large field robots operating in wide areas such as the ocean. In recent years, aging and shrinking populations, together with labor shortages, have increased the demand for automation of cars, robots, and other equipment, with automated driving receiving particular emphasis. Operating an autonomous vehicle or robot outdoors poses severe challenges: it must avoid unknown obstacles while reaching its target position, and environmental conditions such as weather, temperature, or water around the robot cause sensor and hardware problems. Each of these challenges is interesting to us, and the problems differ between land and ocean. On land, navigation and self-localization can rely on point-cloud maps and odometry, whereas on the ocean, neither point clouds nor odometry are sufficiently available, so the robot must estimate its position using GPS and IMU sensors. Moreover, on land the positions of target objects and obstacles can be obtained from Lidar data, while at sea, waves disturb the acquisition of target positions; in that case, the robot has to fuse data from multiple sensors such as cameras and Lidars. This competition gives us the opportunity to develop a system that can cope with such a harsh environment for robots on the ocean, and we are therefore participating in the Maritime RobotX Challenge. As shown in Figs. 1 and 2, we designed the architecture of the vessel navigation system. Our vessel navigation system is composed of localization, perception, behavior, planning, and control. Our localization, behavior, and planning methods are based on classical techniques, such as the Extended Kalman Filter, Behavior Trees, Cubic Hermite Splines, and velocity control based on a WAM-V dynamics model, while our perception methods are based on learning methods such as YOLOX. We briefly describe the developed vessel navigation system as follows.
1. _Localization_: The position and velocity of the WAM-V are estimated with a 6DoF Extended Kalman Filter [1] from GNSS and IMU sensor data (a generic predict/update skeleton is sketched after this list).
2. _Perception_: Obstacles and task objects are recognized from lidar and camera data. We use YOLOX for object detection of task objects such as buoys and docks.
3. _Behavior_: A behavior tree lets us compose WAM-V behaviors in a tree structure. We use Groot [19], a GUI tool for designing behavior trees, for smooth development.
4. _Planning_: Path planning generates obstacle-avoidable paths from sensors and WAM-V information in real-time.
5. _Control_: In the servo and thruster controllers, the servo motor direction and the thruster revolution to achieve the target velocity are calculated based on the vessel motion model.
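As a rough illustration of the localization component (item 1 above), the following is a generic EKF predict/update skeleton with a linear toy measurement model; the actual 6DoF filter, its state layout, and its noise models are not reproduced here.

```python
import numpy as np

def ekf_predict(x, P, f, F_jac, Q):
    """Propagate the state estimate x and covariance P through the motion model f."""
    x = f(x)
    F = F_jac(x)
    P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, h, H_jac, R):
    """Correct the prediction with a measurement z (e.g. a GNSS position fix)."""
    H = H_jac(x)
    y = z - h(x)                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# toy example: 2D position/velocity state with a constant-velocity motion model
dt = 0.1
A = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])        # GNSS observes position only
x, P = np.zeros(4), np.eye(4)
x, P = ekf_predict(x, P, lambda s: A @ s, lambda s: A, 0.01 * np.eye(4))
x, P = ekf_update(x, P, np.array([1.0, 2.0]), lambda s: H @ s, lambda s: H, 0.5 * np.eye(2))
```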
## III Hardware Developments
### _Redesign of Azimuth Thruster_
We considered the following three issues when designing the propulsion mechanism.
First, we considered it important for the boat to be able to generate lateral propulsive force to complete the docking task. When propulsion units are mounted on the two aft sections of the boat, each propulsion unit must have a degree of freedom in the yaw axis to generate thrust in any direction in the horizontal plane. This propulsion system is generally called an azimuth thruster.
Next, the electric outboard motors that are commonly available have a circular cross-section for the mounting shaft, so it is necessary to find a way to fix the shaft tightly.
In addition, the propeller must avoid contact with the seafloor when the vessel needs to navigate in shallow water, such as when launching the boat on the course. Since it is dangerous for a person to enter shallow water and lift the thruster with a tool, a mechanism that can easily raise and lower the thruster was necessary.
To meet these requirements, we designed the mechanism shown in Fig. 3. The three functions of gripping, rotating, and elevating are integrated into a single unit.
The gripping function was realized using a PLA plastic collet manufactured by a 3D printer. The collet is pushed axially by a screw into an aluminum hollow shaft with a wedge-shaped cross section to enable strong shaft gripping.
The azimuth mechanism is realized by transmitting the rotational force from the servo motor (XM540-W270, Dynamixel) to the hollow shaft by spur gears. The hollow shaft is held at two points by angular bearings.
### _Sensor Arrangement_
LiDAR and a visible light camera are used for environmental awareness. Their arrangement is shown in the Fig. 4.
In total, 6 cameras and 4 LiDARs are used. Different LiDARs are used for the front and rear views and for the left and right views. The VLP-16 from Velodyne Lidar is used for the front/rear view to provide a wide range of vision, mainly in the direction of boat travel, and the MID-70 from Livox is selected for the left/right view to see the docking bay near the hull in the docking task. For the cameras, a module with an IMX-219 image sensor and a lens with a 120-degree diagonal field of view was used. This allows the acquisition of point cloud information and visible light images covering 360 degrees around the boat.
### _The Perception Array_
To perform point cloud fusion using a visible light camera and LiDAR, it is necessary to accurately calibrate the relative positions of the sensors. Therefore, once the sensors are assembled on the hull, they cannot be easily removed for testing on land.
To solve this problem, we have developed a sensor unit that consists of a LiDAR and two cameras fixed to a rigid frame and can be mounted on or carried by various robots while maintaining the accuracy of the relative positioning between the sensors. We call it a perception array.
The front and rear views of the designed perception array are shown in Fig. 5
Fig. 4: Sensor Arrangement
Fig. 3: Design of Azimuth Thruster
and Fig. 6, respectively.
The cameras are mounted on the left and right sides, and the LiDAR is mounted upside down at the bottom. The box in the center contains the power supply function and the switching hub. This configuration is shown in the Fig. 7.
### _The Perception Camera_
The camera, which is part of the Perception Array, is designed to perform edge-processing image recognition and is equipped with a Jetson Nano from Nvidia as the computing system. The camera is designed to be waterproof and heat-dissipating for use in various weather conditions. The system diagram is shown in Fig. 8.
The front and rear hatches can be opened and closed without tools. Fig. 9 shows these hatches opened.
### _MINI-V in the COVID-19_
MINI-V (miniature vessel) was created in order to test the software easily during the COVID-19 pandemic. Over the past several years, we could not conduct experiments on the ocean or on lakes because of COVID-19, and we were prohibited from meeting to build the parts of the WAM-V. In addition, the regulations concerning vessels are very strict, so we cannot launch the boat easily. The WAM-V is also large, and transporting it to a lake is hard work and costly. We therefore need a sustainable way to develop the autonomous vessel. As mentioned above, the simulator is used for developing the navigation system and does not require the WAM-V, and the perception array was created to collect sensor data for software tests. These made it easier for us to develop software without the ship. However, software/hardware integration is the most important step for completing the tasks, so MINI-V was created to make testing and integration easier.
The concepts of MINI-V are follows:
1. easy assembly, transport, and experiment,
2. open source software and hardware,
3. high compatibility between WAM-V and MINI-V.
Fig. 5: Front View of Perception Array
Fig. 10: Hardware Components of MINI-V
Fig. 6: Rear View of Perception Array
Fig. 7: Diagram of Perception Array
Fig. 9: Disassembled diagram of the Perception Camera
MINI-V is designed to be easy to carry; it fits in a suitcase as shown in Fig. 10. It is small enough to be floated and tested in a bathtub, and it is simple to assemble. We develop this vessel as open source, so other people can experiment with or test their software on MINI-V. Finally, we aim for high compatibility between the WAM-V and MINI-V, which will make it easy to migrate software developed on MINI-V to the WAM-V. However, MINI-V does not yet have complete compatibility, and a future task is to build a slightly larger vessel with compatible hardware and software, such as batteries and sensors.
### _The Prototype of the Multicopter_
In RobotX 2022, tasks involving a multicopter were added to the competition. The drone needs to operate autonomously and pick up task objects. We built the quadcopter shown in Fig. 12. The flight controller, a Pixracer Pro [2], is used to control the drone. To obtain the drone's position, a GNSS sensor is mounted on the airframe. We conducted a test of estimating the drone's self-position; as shown in Fig. 13, the drone can estimate its position with an error of approximately \(\pm 0.5\mathrm{m}\). At the moment, the drone is flown manually by a pilot and is not yet automated. For automation, we need to add a computer such as a Raspberry Pi to send commands to the flight controller, as well as sensors such as cameras to recognize the task objects. There are many design considerations, such as rotor size, motor power, and body size. We plan to develop the automated navigation system for the drone in the coming years.
### _Emergency Stop System_
There are four emergency stop switches on the outside of the hull, two of the normal switch type and one wireless one. When any one of these switches is turned on, the propulsion system is turned off and the ship comes to a safe stop.
Fig. 11: Experiment on Ai River
Fig. 14: Emergency Stop System Diagram
Fig. 13: Estimated Drone Position and Track
In a normal design, a relay used in such a case would drain about 10 W from its coil alone, which is a very high cost for OUXT-Polaris, which does not have sufficient battery capacity. To avoid this, we designed a circuit that would function reliably as a kill switch when necessary while reducing the power consumption of the relay coil. Specifically, we selected a relay that applies voltage only at the moment it is turned on and consumes little power at other times, and designed a circuit to accommodate this. This design achieved a significant reduction in power consumption and greatly extended the cruising time. In addition, a latch circuit was introduced so that only momentary switches can be operated. This eliminates the possibility that the emergency stop switch, once pressed, would automatically deactivate the emergency stop status in the unlikely event that the contacts are unexpectedly removed.
## IV Software Developments
### _ROS2-based Autonomous Navigation Stack_
In "Maritime RobotX Challenge 2018", we used Robot Operating System 1(ROS1) for developing software. However, the development of ROS1 was finished with python2 end of life. Therefore, we adopted the next generation of ROS called "ROS2". [3] As shown in Fig. 1, our ROS2-based simulation and software system was already developed. Our software contributions are listed as follows.
* _Software System_ : We rebuilt the software system from ROS1 to ROS2
* _Behavior Tree_: We adopted the behavior tree library in ROS2 and built our original behavior tree.
* _Camera LiDAR object detection_: We are developing a lidar-camera fusion object detection system for this project.
* _Simulation Tool Development_ : We developed LiDAR simulation by using intel ray-tracing OSS "Embree".
* _Infrastructure_ : We developed automation tools to speed up development.
We published all code on GitHub to give knowledge back to the ROS community, and we open-source all of our resources: not only software [5] but also CAD models and circuit data [6].
### _Software System Architecture_
Our navigation stack is based on ROS2, but we do not use the navigation2 library. We develop our original software. Our software is highly modularized, so some of our team members use our stacks in other autonomous mobility competitions. (Fig.17,Fig. 18)
### _Behavior Tree_
A behavior tree allows us to build WAM-V behaviors in a tree structure. We use Groot [19], a GUI tool for designing behavior trees, for smooth development.
For example, in the WAM-V Dynamic Qualifying Task, we defined nodes that search for channel markers and navigate between them. In the channel-marker search behavior, we first obtain the output of the lidar-camera fusion object detection system, which provides object labels, probabilities, and bounding boxes. We then convert this output into channel-marker information usable in the RobotX Challenge, such as labels and coordinates expressed in the WAM-V coordinate system. After the channel markers are found, the WAM-V starts navigating. This behavior must be executed twice in the task, so we defined two identical nodes and connected them with a Sequence node, which is a standard node of the behavior tree. We also built other nodes, for example a move-forward node, a stop node, and a rotate-around-the-buoy node. These nodes can realize many of the behavior patterns required by the tasks.
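The composition described above can be pictured with a minimal, library-agnostic behavior-tree sketch; the actual system uses the ROS2 behavior-tree library with Groot, and the node names below are only illustrative.

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Sequence:
    """Ticks children in order; returns as soon as a child is not SUCCESS."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class SearchChannelMarkers:
    def tick(self):
        # query the lidar-camera fusion detector and convert detections to the WAM-V frame
        return Status.SUCCESS

class NavigateBetweenMarkers:
    def tick(self):
        # hand a goal to the Hermite planner and report RUNNING until it completes
        return Status.RUNNING

# the qualifying task repeats the same search-then-navigate pair twice
root = Sequence([SearchChannelMarkers(), NavigateBetweenMarkers(),
                 SearchChannelMarkers(), NavigateBetweenMarkers()])
print(root.tick())
```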
### _Planner components_
We created a Hermite planner as the path planning module for the WAM-V. Once the goal direction from the behavior layer is obtained, the path shape is created using a Hermite curve in combination with the current position. The Hermite curve is a parametric curve with continuous curvature whose shape is determined by specifying the tangent vectors at its two endpoints. The reason for adopting Hermite curves is that Catmull-Rom splines and similar curves can be built by connecting Hermite segments, so the same construction function can be reused. Moreover, since the WAM-V can turn on the spot, it can follow paths with large angular velocity generated by the Hermite planner. The components of the Hermite planner are explained in turn. The local waypoint server calculates the distance between the route to the destination and obstacles in the Frenet coordinate system. If a contact is detected on the route, a new route is created that reaches the destination with a slight shift in the X and Y directions of the world coordinate system. For the contact judgment in the Frenet coordinate system, the nearest points between the Hermite curve and the 2D LaserScan obtained from the obstacles are computed with Newton's method. A collision between the WAM-V and an obstacle is judged to have occurred if the following conditions are satisfied.
\[0<t<1 \tag{1}\]
\[f(t)<width+margin \tag{2}\]
\[f(t)=at^{3}+bt^{2}+ct+d \tag{3}\]
Once the Hermite curve is determined, velocity constraints are computed using several velocity modules. The first is the stop planner, which creates a speed constraint to stop at the destination end of the route; it makes the vessel decelerate at constant acceleration before reaching the destination. The second is an obstacle planner that stops the vessel before obstacles, using the same method as the stop planner. The third is a curve planner that prevents the angular velocity from exceeding a certain value by constraining the angular velocity between points on the Hermite curve. Each of the three modules creates its velocity constraints independently, and their information is then integrated: treating the velocity candidates at each point on the curve as graph nodes and speed changes between them as edges, edges requiring excessive acceleration are deleted, and a velocity plan along the route is found by searching the remaining graph. Then, based on the created route and the vessel's self-position, a pure pursuit planner is used to follow the path: an intersection point between a circle centered on the WAM-V's position and the path is computed, and the angular velocity and speed toward that point are calculated. In addition, a straight line extending from the endpoint is added so that the vessel does not overshoot the endpoint. Part of the planning results produced by this path planning module is shown in Fig. 21.
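A compact sketch of two core ingredients of this planner: evaluating a cubic Hermite curve and checking it against obstacle points from the 2D scan. For brevity, the nearest-point search uses dense sampling instead of the Newton iteration described above; all names and values are illustrative.

```python
import numpy as np

def hermite(p0, m0, p1, m1, t):
    """Cubic Hermite curve between p0 and p1 with end tangents m0, m1; t in [0, 1]."""
    t = np.asarray(t)[..., None]
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

def collides(p0, m0, p1, m1, obstacles, half_width, margin, n=200):
    """Coarse collision test: is any obstacle closer to the path than width + margin?"""
    path = hermite(p0, m0, p1, m1, np.linspace(0.0, 1.0, n))          # (n, 2) sampled curve
    d = np.linalg.norm(path[:, None, :] - obstacles[None, :, :], axis=-1)
    return bool(d.min() < half_width + margin)

p0, p1 = np.array([0.0, 0.0]), np.array([10.0, 4.0])
m0, m1 = np.array([6.0, 0.0]), np.array([6.0, 0.0])
scan_xy = np.array([[5.0, 1.0], [7.0, 6.0]])                          # obstacle points from the 2D scan
print(collides(p0, m0, p1, m1, scan_xy, half_width=1.2, margin=0.5))
```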
Fig. 19: Groot Example
Fig. 20: Pipeline between Perception to Behavior
### _Camera LiDAR fusion_
We used YOLOX [7] for object detection of task objects such as buoys and docks. We created annotation data based on images and videos obtained from past RobotX Challenges, and we released our annotation data to speed up future development for Maritime RobotX [8]. The results of training and inference are shown in Fig. 22.
The image is part of a video recorded during our navigation run in the 2018 RobotX Challenge [9]. This model is based on the YOLOX-S network and converted into a TensorRT model, so we can run object detection at more than 10 Hz on a Jetson Nano.
The LiDAR point cloud is preprocessed before fusing the camera image with the LiDAR data. First, the point cloud from the LiDAR is filtered to remove outliers. Next, the filtered point cloud is downsampled to a 2D LaserScan, and the object regions are extracted from the 2D LaserScan [10]. The extraction algorithm decides the object regions based on whether the distance between neighboring LaserScan points is less than an adaptively determined threshold. The clustered points are then projected onto the camera images, and the projected boxes are matched to the detected object labels via 2D IoU using the Hungarian method [11].
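The clustering and matching steps described above can be summarized in a short sketch. The adaptive-threshold rule and the box format are assumptions made for illustration; the assignment step uses SciPy's `linear_sum_assignment` (Hungarian method).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def segment_scan(points, base_thresh=0.3, k=0.05):
    """Split an ordered 2D scan into clusters when the gap between neighbours
    exceeds a range-dependent threshold (larger gaps allowed farther away)."""
    clusters, current = [], [points[0]]
    for prev, cur in zip(points[:-1], points[1:]):
        thresh = base_thresh + k * np.linalg.norm(prev)      # illustrative adaptive rule
        if np.linalg.norm(cur - prev) < thresh:
            current.append(cur)
        else:
            clusters.append(np.array(current))
            current = [cur]
    clusters.append(np.array(current))
    return clusters

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def match(projected_boxes, detected_boxes):
    """Hungarian assignment between projected clusters and YOLOX detections."""
    cost = np.array([[1.0 - iou(p, d) for d in detected_boxes] for p in projected_boxes])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
```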
### _Simulation_
The WAM-V has a car-sized hull, and setup alone consumes an entire day when conducting an experiment. Developing a simulator is therefore essential to speed up the development process, so we developed a simulator called navi_sim. The reason we did not use VRX is that VRX was not yet compatible with ROS2. navi_sim utilizes Embree [12], an open-source ray tracing library developed by Intel, to simulate lidar with fast CPU ray tracing. It also provides various other functions such as semantic information output, camera view simulation, and a simple ship motion model.
The blue rectangles in Fig. 24 show the camera view simulation, and the robot 3D models in Fig. 24 are the simulated robots.
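navi_sim delegates ray tracing to Embree's optimized kernels; purely as an illustration of the underlying idea, a lidar scan can be simulated by casting horizontal rays against scene triangles with the Möller-Trumbore test, as in this NumPy sketch (beam layout and scene format are assumptions):

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore intersection; returns the hit distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None                          # ray parallel to the triangle
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv
    return t if t > eps else None

def simulate_scan(origin, triangles, n_beams=360, max_range=100.0):
    """Cast horizontal beams against a triangle soup and return one range per beam."""
    ranges = np.full(n_beams, max_range)
    for i, yaw in enumerate(np.linspace(0.0, 2 * np.pi, n_beams, endpoint=False)):
        d = np.array([np.cos(yaw), np.sin(yaw), 0.0])
        for v0, v1, v2 in triangles:
            t = ray_triangle(origin, d, v0, v1, v2)
            if t is not None and t < ranges[i]:
                ranges[i] = t
    return ranges
```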
### _Infrastructure_
Our system runs in a complex distributed computing environment that includes microcontrollers, so the configuration file is huge and there are many processes that need to be launched to start the system. If such a system is deployed manually, various human errors are expected to occur. In addition, there are 77 ROS2 packages that make up our autonomous navigation system, and it is almost impossible to manage all of them manually without error. To solve these problems, we used a configuration management tool called Ansible, which can describe environment-construction
Fig. 21: Part of Planning Results
Fig. 24: Running Our Navigation Stack with Navi Sim
Fig. 23: Algorithm of scan segmentation
Fig. 22: Inference
procedures on the computer in YAML format and can switch between development and production environments with a single option. With the adoption of ansible, setting up our system is just a matter of running a single shell script.
Also, even if the code of a ROS2 package itself has not changed, the software may break if there are changes in the packages on which it depends. To detect and fix this quickly, we built a CI system using GitHub Actions. CI, or continuous integration, detects commits to GitHub and automatically runs tests to find defects early. When a pull request is issued for any of the packages we manage, a build test is automatically performed once a day. If the build test fails, the pull request cannot be merged. Failed build tests are reported to the OUXT-Polaris Slack so that members know immediately if a failure has occurred. The whole architecture of the CI/CD pipeline is shown in Fig. 25. The robot also consists of a single-board computer with an Arm CPU as well as a CPU with x86-based architecture, and a microcontroller with an Arm CPU running Mbed OS. The infrastructure supports all of these architectures and can test all software, from the deep-learning recognition system to the low-layer control system.
We divide our packages into smaller pieces in order to maintain a high degree of modularity in the packages we develop. This reduces unnecessary build targets and greatly reduces CI time (tests that originally took 30 minutes or more now take only 3 minutes). However, since there are more than 40 packages under our control alone, it is impossible to keep track of their development and testing status without tools. Therefore, we built a system that calls the GitHub API and automatically creates the dashboard shown in Fig. 26. All of these deliverables are also open-source software. We also developed GitHub Actions workflows to unify the CI procedures for multiple repositories and built a system that keeps the CI procedures synchronized at all times via bot accounts. [13]
GitHub Actions is also used when deploying to the actual machine. GitHub Actions has a function called Self-Hosted Runner, which allows you to remotely execute any command using your own computer. Using this function, after the CI of the ROS2 package is completed, a configuration file with all commit hashes fixed is created and deployed to the actual device using Github Actions/ansible based on that configuration file. This allows our team to deploy the verified and up-to-date source code to the machine at the push of a button.
In addition, machine learning is a task that requires a variety of complicated labor and a machine with a high-performance GPU. However, most tasks are routine and human intervention is not productive. To give some concrete examples, the following tasks can be automated.
* _Conversion of dataset formats_ : We use labelme as the annotation tool, but YOLOX uses the COCO format for training.
* _Performing training_ : Virtualize the environment with nvidia docker and run scripts for learning on it
* _Visualization and versioning of training results_ : We can visualize inference results via a Twitter bot (Fig. 17) and tensorboard.dev (Fig. 27)
* _Model Conversion_ : The PyTorch training results need to be converted into a TensorRT model for faster inference.
All of these tasks are executed using the self-hosted runner in GitHub Actions, which automatically starts training when it detects that new data has been added to the machine learning dataset. Training the YOLOX-S model takes about 30 minutes. Future work includes building a mechanism to automatically deploy the obtained training results to the actual machine and performing Neural Architecture Search using black-box optimization.
### _Communication with Technical Director Server_
We must report the vehicle's status as the Autonomous Maritime System heartbeat to a Technical Director server via a TCP connection. Our team usually tests the navigation system in a simulation environment, so we also need to check the heartbeat sentence in the simulation loop. To achieve this, we built a simple mock server [14] that displays the NMEA-like sentences received from the simulation via a TCP connection. We
Fig. 26: CI/CD Dashboard
Fig. 27: Training Result in Tensorord.dev
Fig. 25: CI/CD Pipeline
also implemented a ROS2 node that collects information about the vehicle and sends heartbeats.
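A minimal sketch of such a mock Technical Director server, together with a client that sends one heartbeat-like sentence over TCP. The port and the sentence payload are purely illustrative and do not follow the official heartbeat format.

```python
import socket
import threading
import time

def mock_td_server(host="127.0.0.1", port=9000):
    """Minimal mock Technical Director server: print every line it receives."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        buffer = b""
        while True:
            data = conn.recv(1024)
            if not data:
                break
            buffer += data
            while b"\r\n" in buffer:
                line, buffer = buffer.split(b"\r\n", 1)
                print("heartbeat:", line.decode(errors="replace"))

threading.Thread(target=mock_td_server, daemon=True).start()
time.sleep(0.5)                                   # give the server time to start listening

# the simulation side periodically sends the vehicle status, one sentence per line
client = socket.create_connection(("127.0.0.1", 9000))
client.sendall(b"$ROBOTX,HEARTBEAT,120000,35.0000,N,139.0000,E,2\r\n")   # illustrative payload only
time.sleep(0.5)                                   # let the server thread print before exiting
client.close()
```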
### _WAM-V Controller_
The WAM-V control system adopted ros2_control [15, 16], a framework for real-time control of robots using ROS2. Our WAM-V controllers are listed as follows.
#### Iv-I1 Equation of Motion
Our WAM-V equation of motion is calculated as follows.
\[M\dot{\nu}=-D(\nu)\nu+\tau \tag{4}\]
\[\nu=\begin{bmatrix}u&v&\omega\end{bmatrix}^{T} \tag{5}\]
\[\tau=\begin{bmatrix}f_{x}&f_{y}&f_{yaw}\end{bmatrix}^{T} \tag{6}\]
\[M=\begin{bmatrix}m+m_{x}&0&0\\ 0&m+m_{y}&0\\ 0&0&I_{z}+J_{z}\end{bmatrix} \tag{7}\]
\[D(\nu)=\begin{bmatrix}0&0&-mu\\ 0&0&mv\\ mu&-mu&0\end{bmatrix} \tag{8}\]
where \(\nu\) is WAM-V velocity in WAM-V coordinate system. \(\tau\) is WAM-V coordinate system input. \(M\) is the mass matrix. \(m\) is the WAM-V mass. \(m_{x}\) and \(m_{y}\) are the added mass of WAM-V in X and Y directions. \(I_{z}\) is the WAM-V moment of inertia. \(J_{z}\) is the WAM-V added moment of inertia. \(D(\nu)\) is the WAM-V resistance coefficient matrix.
#### Iv-I2 Differential Thruster Model
\[\tau = \begin{bmatrix}f_{x}\\ f_{y}\\ f_{yaw}\end{bmatrix} \tag{9}\] \[= \begin{bmatrix}\dfrac{1}{2}(F_{r}+F_{l})\\ 0\\ \dfrac{w}{2}(F_{r}-F_{l})\end{bmatrix}\]
where \(F_{r}\) and \(F_{l}\) are the propulsion forces of the right and left thrusters, and \(w\) is the distance between the right and left thrusters.
#### Iv-I3 Propeller Model
\[T = \rho n^{2}D^{4}K_{t}(J_{s}) \tag{10}\] \[K_{t}(J_{s}) = k_{2}J_{s}^{2}+k_{1}J_{s}+k_{0}\] (11) \[J_{s} = \dfrac{up}{nD} \tag{12}\]
where, \(T\), \(\rho\), and \(D\) are the propeller thrust, fluid density, and propeller radius. \(k_{0}\), \(k_{1}\) and \(k_{2}\) are constants values. \(up\) and \(n\) are inflow rate and rotation speed.
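For illustration, Eqs. (4)-(12) can be stepped forward numerically as follows. The matrices are implemented exactly as written above, and the parameter values are placeholders rather than the measured WAM-V parameters.

```python
import numpy as np

def wamv_dynamics_step(nu, Fr, Fl, params, dt):
    """One explicit-Euler step of the surge/sway/yaw model, Eqs. (4)-(9) as written above."""
    m, mx, my, Iz, Jz, w = (params[k] for k in ("m", "mx", "my", "Iz", "Jz", "w"))
    u, v, r = nu                                  # surge, sway, yaw rate (omega in Eq. 5)
    M = np.diag([m + mx, m + my, Iz + Jz])        # Eq. (7)
    D = np.array([[0.0,   0.0, -m * u],           # Eq. (8)
                  [0.0,   0.0,  m * v],
                  [m * u, -m * u, 0.0]])
    tau = np.array([0.5 * (Fr + Fl),              # Eq. (9)
                    0.0,
                    0.5 * w * (Fr - Fl)])
    nu_dot = np.linalg.solve(M, -D @ nu + tau)    # Eq. (4)
    return nu + dt * nu_dot

def propeller_thrust(n, up, rho, D_prop, k):
    """Quadratic open-water thrust model, Eqs. (10)-(12)."""
    Js = up / (n * D_prop)                        # advance ratio
    Kt = k[2] * Js**2 + k[1] * Js + k[0]
    return rho * n**2 * D_prop**4 * Kt

params = dict(m=180.0, mx=20.0, my=60.0, Iz=250.0, Jz=40.0, w=2.4)   # illustrative values
nu = np.zeros(3)
for _ in range(100):
    nu = wamv_dynamics_step(nu, Fr=80.0, Fl=80.0, params=params, dt=0.05)
print(nu)                                         # velocity after 5 s of equal thrust on both sides
```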
## V Firmware
We use the same microcontroller for two main purposes. The reason for using the same microcontroller is to make it easier to have a spare in case of failure during the competition. We considered several types of microcontrollers and finally adopted NUCLEO-F767ZI (Fig.28) from STM Corp.
The two roles of the microcontroller are to drive the motor and monitor the supply voltage. In order to meet the specifications for designing a real-time guaranteed control system for the motor drive, we implemented firmware that connects ROS2 through packet communication with the speed control system and UDP communication, which were specified in the previous chapter. For the other role of monitoring the power supply voltage, a real-time guarantee was not necessary. Instead, it was necessary to be able to easily communicate with ROS2 via Pub/Sub, so mROS2, an embedded communication library compatible with ROS2, was adopted to realize communication between the lower layers and higher layers using the ROS2 system.
## VI Conclusion
In this paper, OUXT-Polaris reported the development of its autonomous navigation system for the 2022 RobotX Challenge. Based on the results of the 2018 RobotX Challenge, OUXT-Polaris rebuilt the system and developed improved subsystems for the Maritime RobotX Challenge 2022. We succeeded in constructing a highly reusable system by designing components with a high degree of independence, in addition to providing high computing capacity and environmental recognition capability.
Fig. 28: NUCLEO-F767ZI Microcontroller
Moreover, we described the developing method in Covid-19 and the feature components for the next RobotX Challenge. We hope these significant upgrades will produce positive results in the next competition.
| OUXT-Polaris has been developing an autonomous navigation system through its participation in the Maritime RobotX Challenges of 2014, 2016, and 2018. This paper describes the improvements made to the previous vessel system and shows the advantages of the improved design. It also explains the development approach adopted under COVID-19 and describes the simulation / miniature-size hardware and the functional components for the next RobotX Challenge.
2308.05582 | Usability Assessment of the OnlyKey Hardware Two-Factor Authentication
Key Among Low Vision or Blind Users | Hardware security keys undoubtedly have advantage for users as "usability"
pain is trivial compared to the maximum "security" gain in authentication.
Naturally, the hardware factor in the authentication received a widespread
adoption amongst average users, as it is ergonomically less demanding than
phone texts or authentication prompts. This ergonomic advantage in particular
is essential for users who are blind or low vision, as their interaction with a
phone is impractical. However, the "usability" for low vision or blind users
pain might be much higher than an average well-bodied user for the same
"security" gain. In an effort to learn more we conducted a usability assessment
with ten low vision or blind users setting up the OnlyKey two-factor
authentication key. First, the setup process was insurmountable for more than
half of the participants, resulting in a situation where the hardware key was
abandoned. Secondly, the lack of tactile orientation led participants to
consider it as both impractical, and prone to difficulties locating or loosing
it. We discuss the implications of our findings for future improvements in
usable authentication for visually impaired users. | Aziz Zeidieh, Filipo Sharevski | 2023-08-10T13:46:36 | http://arxiv.org/abs/2308.05582v1 | Usability Assessment of the OnlyKey Hardware Two-Factor Authentication Key Among Low Vision or Blind Users
###### Abstract
Hardware security keys undoubtedly have an advantage for users as the "usability" pain is trivial compared to the maximum "security" gain in authentication. Naturally, the hardware factor in authentication received widespread adoption amongst average users, as it is ergonomically less demanding than phone texts or authentication prompts. This ergonomic advantage in particular is essential for users who are blind or low vision, as their interaction with a phone is impractical. However, the "usability" pain for low vision or blind users might be much higher than for an average well-bodied user for the same "security" gain. In an effort to learn more, we conducted a usability assessment with ten low vision or blind users setting up the OnlyKey two-factor authentication key. First, the setup process was insurmountable for more than half of the participants, resulting in a situation where the hardware key was abandoned. Secondly, the lack of tactile orientation led participants to consider it both impractical and prone to difficulties locating or losing it. We discuss the implications of our findings for future improvements in usable authentication for visually impaired users.
## 1 Introduction
Past research in the area of hardware security key usability [6, 10] has primarily centered around non-disabled users. Narrowing the scope to people who are low vision or blind, the available work on the topic of hardware security key accessibility and usability is quite paltry. This is surprising, given the existence of frameworks and agendas for designing inclusive security and privacy mechanisms [13, 14]. Barbosa et al. designed a password manager application for people who are low vision and blind [4]. Azenkot et al. developed PassChords, an authentication method for touch surfaces that is robust to aural and visual eavesdropping [3]. Another work along these lines is BraillePassword by Alinfai et al., a web-based authentication mechanism for blind users [2]. These works are important in addressing the needs of visually impaired users, but none addresses the case of the second factor of authentication when it comes to the usability of external hardware keys.
## 2 Research Study
This research evaluates the accessibility and usability of OnlyKey, a hardware security key that offers multi-factor authentication and password management functionality, shown in Figure 1. OnlyKey is a USB device with the form factor of a common flash drive. It comes fitted in a removable silicone case. On one side, OnlyKey has six solid-state buttons, numbered one through six, resembling a six-dot braille cell, and on the reverse side there is an LED indicator light. While the arrangement of the physical solid-state buttons is identical to a six-dot braille cell, the numbering does not conform to the braille cell dot numbers. The male USB-A plug on OnlyKey has the PCB exposed instead of encasing it in a metal structure as is common in USB devices outside of the hardware security industry. The top edge of OnlyKey has a key-ring hole and comes with a quick-release connector that can be used to attach OnlyKey to the user's keys.
### Sampling
Our research will exclusively focus on the accessibility and usability of the OnlyKey hardware security key, by users who are low vision or blind. OnlyKey is a hardware security key that offers password manager and multi factor authentication functionality, similar to the YubiKey by Yubico [5, 9, 15]. We got approval from our Institutional Review Board (IRB)
to conduct semi-structured interviews with a sample of ten visually impaired users from the United States. All participants were 18 years or older. We used snowball sampling technique as the initial couple of participants were asked to recommend another participant with visual impairments for the study. The interviews were conducted over Zoom, as in-person presence was restricted by the university policy. The interviews lasted between 60 and 90 minutes, and users were shipped the OnlyKey hardware key beforehand. Compensation for participation in the research study was an OnlyKey hardware security key and a $25 Amazon gift card.
### Data Collection
The data collection for this research was primarily made up of two parts. The first part was the hands-on portion of the interview, where participants were directed to familiarize themselves with the OnlyKey, and set it up using the OnlyKey Quick Setup option. The second part of the interview script consisted of thirteen questions, ten were Likert scale questions adapted from the Accessible Usability Scale (AUS) [12], whereas the final three questions, were open-ended questions, designed to gauge participants attitude toward the accessibility and usability of OnlyKey, given their experience with it in the hands-on portion of the interview.
## 3 Results
Ten people who are low vision or blind agreed to participate in this research study as interviewees. Participants will be herein referred to as **P1** thru **P10**. Eight of the participants identified as male while two identified as female. Three participants identified themselves as having low vision with limited visual acuity (B3), two participants identified as being totally blind and cannot see any lights or shapes (B1), two as being blind and having visual perception to see only lights and shapes (B2), two participants who stated they had low vision with high visual acuity (B3), and one participant identified as being low vision but not legally blind (B4). For the remainder of this paper, participants will be categorized into one of four visual classifications - B1 through B4 - based off of their response to the visual perception question in the demographics section of the interview. These four classifications are used by the United States Association of Blind Athletes (USABA) [11].
Participants' responses to the AUS were calculated and included in Table 1 next to their visual classification level. For the positively worded statements (questions 1, 3, 5, 7, and 9), the score contribution is identified by taking the scale position, subtracting 1, and then multiplying the resulting number by 2.5. For the negatively worded statements (questions 2, 4, 6, 8, and 10), the score contribution is 5 minus the scale position, multiplied by 2.5 [12]. The minimum possible AUS score is 0 while the maximum possible AUS score is 100. The average AUS score across all research participants shown in Table 1 is 46.5.
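The scoring rule described above can be expressed as a small function; the example response vector below is illustrative and does not correspond to any participant.

```python
def aus_score(responses):
    """Accessible Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (score = position - 1),
    even-numbered items negatively worded (score = 5 - position);
    the summed contributions are scaled by 2.5, giving 0 (worst) to 100 (best).
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(aus_score([4, 3, 3, 3, 3, 4, 3, 3, 2, 3]))   # one plausible response pattern scoring 47.5
```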
### Initial Observations
Research participants were asked to take time to familiarize themselves with the OnlyKey. **P6**, **P7**, **P8** and **P10** described the OnlyKey keypad as resembling a Braille cell. Shortly thereafter, all participants whose visual perception fell into the B1, B2 and B3 categories, excluding **P10**, expressed concern regarding the layout of the OnlyKey keypad, having no knowledge of which buttons are associated with which numbers. After reading the OnlyKey user manual, participant **P5** asked "wait, which button is which though?" Participants were also thrown off by the atypical USB-A plug on OnlyKey. While this style of USB-A plug is common with security keys like the YubiKey, all participants in this research had no prior experience with any security keys. Some participants would later go on to plug the OnlyKey into the computer upside down. **P4** initially identified the six buttons on OnlyKey and referred to them as a "netting woven into the device." **P4**, **P5** and **P7** used an app called _Be My Eyes_[1].
**P10** did not use assistive technologies during the entire span of the research study. They were unsure if the keys on
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Participant** & **Visual Classification** & **AUS Score** \\ \hline P1 & B3 & 47.5 \\ \hline P2 & B3 & 52.5 \\ \hline P3 & B4 & 37.5 \\ \hline P4 & B3 & 35 \\ \hline P5 & B2 & 37.5 \\ \hline P6 & B2 & 37.5 \\ \hline P7 & B1 & 47.5 \\ \hline P8 & B3 & 65 \\ \hline P9 & B1 & 55 \\ \hline P10 & B3 & 50 \\ \hline \end{tabular}
\end{table}
Table 1: AUS and Visual Classification of the Sample
Figure 1: An image of the OnlyKey hardware security key
OnlyKey were "buttons" but they stated "I guess that will be something I'll find out in setup?" **P6** employed the assistance of their sighted relative to familiarize themselves with the OnlyKey keypad layout. They asked their relative what the orientation of the keypad was, and where each number key was. **P5**, **P7** and **P8** attempted to locate a written description of the OnlyKey layout in the OnlyKey manual and on Google, they were unable to find anything to help address this inquiry.
### OnlyKey Quick Setup
**P4**, **P5**, **P6**, **P7** and **P8** initially found and attempted to watch the official OnlyKey video on how to set up the security key [8]. However, the video had neither narration nor spoken feedback, only text instructions, visual graphics, and a background instrumental track, which was of no use to the participants. **P3** and **P10** had enough usable vision to read the text on the package and successfully scan the QR code which would take them to the OnlyKey "start" page. **P1**, **P2**, **P4**, **P5**, **P6**, **P7** and **P8** used an app called _Seeing AI_ to read the text off of the OnlyKey packaging. _Seeing AI_ is an app by Microsoft that enables people who are low vision or blind to see the world around them with artificial intelligence [7]. A photo of the OnlyKey retail packaging is in the appendix.
While **P1**, **P2**, and **P4** were able to successfully get the URL of the OnlyKey online start page, **P5**, **P6**, and **P7** struggled with _Seeing AI_ to get this information, abandoning the packaging as a source of information. **P8** had some usable vision and noticed the QR code on the back of the OnlyKey package and attempted to use the product identification function of _Seeing AI_ to scan the QR code. _Seeing AI_ does not recognize QR codes in the product identification mode.
Those who did not rely on the packaging as a source of information used Google or Bing. All participants ultimately found the correct instructions to follow for the OnlyKey Quick Setup process. **P1**, **P3**, **P5** and **P10** were the only participants who were able to successfully set up the OnlyKey and know the PIN configured on the device after the setup process. **P2**, **P4**, **P6**, **P7**, **P8** and **P9** were technically able to set up the OnlyKey; however, they did not know the PIN that was configured on it. This resulted in them being locked out of the OnlyKey, requiring them to reset it to factory defaults after the interview.
The OnlyKey Quick Setup method requires that a user plug the OnlyKey into their computer, open up a text editor (Notepad on Windows or TextEdit on macOS), then press and hold the 3 key on the OnlyKey keypad for five or more seconds and release. With the OnlyKey acting as a keyboard connected to the computer, it types out text at the point of the cursor. At this point in the process, the text "you have 20 seconds" is printed out, referring to the time the user has to choose between a custom PIN or a randomly generated PIN, with randomly generated being the default if no response is recognized after 20 seconds.
**P2**, **P4**, **P6**, **P7**, **P8** and **P9** all used a screen reader. This is important because those who tried interacting with their computer while the OnlyKey setup process was in progress shifted the focus of the cursor, causing text typed by the OnlyKey to go elsewhere. For example, **P4** could hear their screen reader echoing the typed text as the OnlyKey printed out the instructions; however, the printed text was unintelligible given the speed at which the OnlyKey was typing. **P4** tried to read through the typed-out text with their screen reader by using the arrow keys to navigate the text, which also shifted the position of the cursor while the OnlyKey was still printing out instructions. This resulted in the OnlyKey instructions being printed out of order and in a jumble, which would ultimately lead to the OnlyKey getting set up with a PIN in the text file that the participant was unable to discern.
**P6** and **P7** were pressing modifier keys associated with functions of the screen reader to read through the document while OnlyKey was still printing out text. This resulted in shifting the focus of the cursor outside of the text document altogether. In **P6**'s case, the focus of the cursor shifted outside the text editor, which resulted in output from OnlyKey being entered in other open applications. **P10** was able to see the screen and did not interact with the computer while OnlyKey went through the Quick Setup process of typing out text in Notepad. While the OnlyKey was typing out text, **P10** was reading through the manual and missed the 20 second prompt asking if they wanted to choose a custom PIN or have one randomly generated. The OnlyKey by default opted to generate random PINs, printed them out in the text document, and finalized the setup process while **P10** was reading the user manual.
**P4**, **P6**, **P7** and **P8** realized that the setup had not gone as planned and deleted whatever text was in the text document, unplugged the OnlyKey, plugged it back in, and proceeded with the OnlyKey Quick Setup steps once more. They were confused as to why this did not work as planned the second time and expressed frustration, with **P6** saying: "what! It's not doing it anymore! I'm doing the same thing!" At this point, the participant's OnlyKey was set up and configured with a PIN, which means the OnlyKey Quick Setup would not be available anymore. After participants exhibited signs of frustration and stress over this process, the researcher intervened to notify the participant that the OnlyKey had been set up at this point, and explained how that came to be. **P1**, **P3** and **P10** were able to set up the OnlyKey following the OnlyKey Quick Setup instructions outlined in the manual. **P5** assumed that only one key on the OnlyKey keypad would result in text output as part of the OnlyKey Quick Setup process, so their initial approach was to press and hold random keys for five or more seconds until they got the expected result. In **P5**'s efforts to find the 3 key, they eventually were able to get the OnlyKey to output instructions and PINs. However, instead of the 3 key, **P5** had randomly chosen the 2 key. Pressing and holding the 2 key for five or more seconds and then releasing results in the OnlyKey going
through the Quick Setup process with random PIN generation as the behavior, instead of offering the prompt during what would have been the traditional Quick Setup through the 3 key.
After **P5** reviewed the content in the text document, they were able to get the PIN they needed to unlock the OnlyKey, but at this point, **P5** still did not know the exact layout of the OnlyKey keypad, and this is when they called a sighted volunteer through _Be My Eyes_ to ask for a verbal description and orientation of the OnlyKey keypad [1]. At this point, all participants, regardless of ability to set up OnlyKey, were debriefed prior to proceeding to the post-experiment survey. All participants were made aware of the alternative method of setting up OnlyKey, which involved an application. Participants who were unable to set OnlyKey up were provided details on what issues they encountered.
### Post Experiment
As part of the post-experiment survey, participants were asked three open-ended questions about the OnlyKey. The first question asked what they like about OnlyKey from an accessibility standpoint, while the second asked what they disliked. The third and final open-ended question asked for any questions, comments, concerns, or complaints the participant may have had regarding OnlyKey, both for accessibility and in general.
Participants expressed interest in what the OnlyKey had to offer in terms of features and functionality. OnlyKey's familiar form factor was a point brought up by some participants as a positive aspect. It's also important to note that the form factor and design of OnlyKey was determined to be a negative aspect of the device by other participants. Participant attitudes towards the OnlyKey throughout the hands-on experiment became almost predictable after the researchers completed a few interviews with prior subjects. After participants were introduced to the OnlyKey, they expressed a sense of excitement and curiosity; however, as the hands-on experiment progressed, their excitement dwindled, and their curiosity would morph into frustration and confusion.
A primary aspect of the OnlyKey that caused this predictable transformation mid-interview can be attributed to the physical design of the OnlyKey, more specifically, the absence of device feedback that can be interpreted by someone who is low vision or blind, such as tactile or auditory feedback. Since OnlyKey has solid state buttons, the only feedback is visual, through the single LED indicator light on the back of OnlyKey.
Participants noted this flaw in their responses to the open-ended questions, with **P3** saying "I don't like the setup process. It would be nice if there was non-visual feedback when clicking the buttons, not just the light." **P10** was able to eloquently summarize the majority of complaints brought up by prior participants in their response: "I disliked that the numbers did not correspond with the braille layout. It took me a moment to realize the buttons were not 'clicky' buttons, and I did not like how it only gave me 20 seconds."
Another notable complaint shared by participants was the lack of detail in the instructions for a user who is low vision or blind. The instructions provided no verbal description of the OnlyKey's keypad layout, and the official OnlyKey instructional videos had no spoken feedback, only visuals and instrumental music.
All participant responses to the open-ended questions can be found in the appendix. Qualitative findings for this research study were analyzed manually by the researchers. Code books were not used in the analysis of these findings.
## 4 Discussion and Conclusion
The objective of this study was to evaluate the accessibility of setting up the OnlyKey by people who are low vision or blind. A majority of participants were unsuccessful in the setup of OnlyKey, with 60% of participants ultimately being unable to achieve this task. Of the four participants who were able to set the OnlyKey up successfully, three had usable vision, and the fourth only had perception of light. The participant who was blind and could only see lights and shapes relied on the help of a sighted volunteer from _Be My Eyes_ [1]. Our usability assessment of OnlyKey as a Commercial Off-The-Shelf (COTS) hardware authentication key strongly indicates that the design fails to be inclusive of users who are visually impaired. This is problematic, as these users are deprived of the opportunity to benefit from most if not all functionalities provided by OnlyKey. No hardware security key on the market as of the time of this writing offers all the functionality and versatility that OnlyKey offers, which forces this population to compromise on maximum security and opt for an inferior security key.
We acknowledge that we had a limited number of participants in this hands-on study in comparison with the general population of individuals who are low vision or blind. We are also aware that 60 to 90 minutes may not have been enough time for the participants to familiarize themselves with the OnlyKey properly. Another notable limitation was the choice of setup option for the OnlyKey. This research evaluated the accessibility of OnlyKey using the OnlyKey Quick Setup option and did not explore the OnlyKey desktop software. Findings gathered by this research were not interpreted based on the technical aptitude of participants. Result interpretation was focused on participants' visual perception without regard for age, gender, education, prior knowledge of hardware security keys, or proficiency with computers.
## Acknowledgements
A special thank you to Amy Gabre, Jackie Jackson, Jeni Shaum, Sue Dalton, and Wendy Brusich.
2305.14315 | Estimating a multivariate Lévy density based on discrete observations | Existing results for the estimation of the Lévy measure are mostly limited to the onedimensional setting. We apply the spectral method to multidimensional Lévy processes in order to construct a nonparametric estimator for the multivariate jump distribution. We prove convergence rates for the uniform estimation error under both a low- and a high-frequency observation regime. The method is robust to various dependence structures. Along the way, we present a uniform risk bound for the multivariate empirical characteristic function and its partial derivatives. The method is illustrated with simulation examples. | Maximilian F. Steffen | 2023-05-23T17:51:00 | http://arxiv.org/abs/2305.14315v1 | # Estimating a multivariate Levy density based on discrete observations
###### Abstract
Existing results for the estimation of the Levy measure are mostly limited to the onedimensional setting. We apply the spectral method to multidimensional Levy processes in order to construct a nonparametric estimator for the multivariate jump distribution. We prove convergence rates for the uniform estimation error under both a low- and a high-frequency observation regime. The method is robust to various dependence structures. Along the way, we present a uniform risk bound for the multivariate empirical characteristic function and its partial derivatives. The method is illustrated with simulation examples.
**Keywords:** Levy processes, jump processes, multivariate density estimation, spectral methods,
low-frequency, high-frequency
**MSC 2020:** 62G05, 62G07, 62M15, 60G51
## 1 Introduction
Levy processes are a staple to model continuous-time phenomena involving jumps, for instance in physics and finance, see Woyczynski (2001) and Cont & Tankov (2004), respectively. Naturally, manifold such applications call for multivariate processes which significantly complicates the theoretical analysis compared to the onedimensional case. Matters worsen as practitioners often only have time-discrete data at their disposal which obstructs the identification of the jumps and hence the calibration of such models. Statistical results in this setting are typically limited to the onedimensional case or omit the estimation of the jump distribution, despite the practical relevance. In the present work, we study exactly this problem: estimating the jump distribution of a multivariate Levy process based on discrete observations.
On a theoretical level, the distribution of the Levy process is uniquely determined by its characteristic triplet, that is, the volatility-matrix, the drift and the Levy measure. The latter characterizes the jump distribution we are interested in. From a statistical point of view, the estimation of the Levy measure is most challenging as we are faced with a nonparametric problem.
The literature commonly distinguishes the following two observation regimes for a Levy process observed at equidistant time points \(0<\delta,2\delta,\ldots,n\delta\eqqcolon T\): Under the low-frequency regime, \(\delta\) is fixed as \(n\to\infty\), whereas \(\delta\searrow 0\) under the high-frequency regime. Our estimation method is robust across sampling frequencies.
Motivated by the clearer separation between the jumps themselves and the Gaussian component as \(\delta\searrow 0\) under the high-frequency regime, threshold-based estimators have been applied extensively. Beyond the overview given by Ait-Sahalia & Jacod (2012), Duval & Mariucci (2021) apply such an approach to the estimation of the Levy measure, Gegler & Stadtmuller (2010) study the estimation of the entire Levy triplet and Mies (2020) estimates the Blumenthal-Getoor index. However, these references are restricted to the onedimensional case and multidimensional extensions seem tedious due
to the multitude of directions in which the process can jump. A notable exception is the work by Bucher and Vetter (2013), who estimate the tail-integrals of a multivariate Levy process.
Under the low-frequency regime, we cannot identify the intermittent jumps even in the absence of a Gaussian component resulting in an ill-posed inverse problem, see Neumann and Reiss (2009). A popular way out is the spectral method, see Belomestny and Reiss (2015), which leverages the relationship of the Levy triplet with the characteristic function of the process at any time point. Turning the observations of the process into increments, this characteristic function is estimated and then used to draw inference on parts of the Levy triplet. The method was first considered by Belomestny and Reiss (2006) in the context of exponential Levy models and has since been studied extensively, see Belomestny (2010), Gugushvili (2012), Nickl and Reiss (2012), Reiss (2013), Trabs (2015).
We adapt this approach to the multivariate setting by constructing a nonparametric estimator for the Levy density \(\nu\), assuming that it exists. Our estimator requires no knowledge of the volatility and drift parameters and works uniformly over fully nonparametric classes of Levy processes with mild assumptions. In particular, Levy processes with infinite jump activity are allowed. The uniform rates we achieve naturally extend those from the onedimensional case and optimality in our setting is discussed.
When estimating the Levy density close to the origin, we enhance our method with an estimator for the volatility. The estimation of the volatility matrix itself has previously been studied, see Papagiannouli (2020, high-frequency) and Belomestny and Trabs (2018, low-frequency). A related issue is the estimation of the covariance matrix in deconvolution problems, see Belomestny et al. (2019). However, even the proven minimax-optimal rates of convergence are too slow to leave our overall rates under the low-frequency regime unaffected. It is sufficient to estimate the trace of the volatility matrix, and we show that this can be done with a much faster rate. With this enhancement, there is no additional loss in the rate for the estimation of the Levy density.
An effect emerging for multivariate processes is the possibility of different dependence structures between the components which can be in disagreement with the existence of a Levy density in the form of a Lebesgue density on the whole state space. Statistical results in such settings are even rarer. Belomestny (2011) estimates the Levy density for a time changed Levy process with independent components. We propose a quantification of the estimation error when integrating against regular test functions under various forms of dependence structures without modifications to our method.
The paper is organized as follows. In Section 2, we introduce the estimation method and state our main results along with a short outline of the proof and the key tools used. The empirical performance of our estimator is illustrated in simulation examples in Section 3. The full proofs are postponed to Section 4.
## 2 Estimation method and main results
We begin by introducing some notation: Throughout, an \(\mathbb{R}^{d}\)-valued Levy process \((L_{t})_{t\geqslant 0}\) with Levy measure \(\nu\) is observed in the form of \(n\in\mathbb{N}\) increments at equidistant time points with time difference \(\delta>0\) and overall time horizon \(T\coloneqq n\delta\):
\[Y_{k}\coloneqq L_{\delta k}-L_{\delta(k-1)},\qquad k=1,\ldots,n.\]
For \(x,y\in\mathbb{C}^{d}\) and \(p\geqslant 1\), set \(|x|_{p}\coloneqq\big{(}\sum_{k=1}^{d}|x_{k}|^{p}\big{)}^{1/p}\), \(|x|\coloneqq|x|_{2}\), \(|x|_{\infty}\coloneqq\max_{k=1,\ldots,d}|x_{k}|\), \(x\cdot y\coloneqq\langle x,y\rangle\coloneqq\sum_{k=1}^{d}x_{k}y_{k}\) and \(x^{2}\coloneqq x\cdot x\). For a multi-index \(\beta\in\mathbb{N}_{0}^{d}\), we set \(x^{\beta}\coloneqq\prod_{k=1}^{d}x_{k}^{\beta_{k}}\), \(|x|^{\beta}\coloneqq\prod_{k=1}^{d}|x_{k}|^{\beta_{k}}\).
If \(\int|x|^{2}\,\nu(\mathrm{d}x)<\infty\), then the characteristic function of \(L_{t}\) is given by
\[\varphi_{t}(u)\coloneqq\mathbb{E}[\mathrm{e}^{\mathrm{i}(u,L_{t})}]=\mathrm{ e}^{t\psi(u)}\qquad\text{with}\qquad\psi(u)\coloneqq\mathrm{i}\langle\gamma,u \rangle-\frac{1}{2}\langle u,\Sigma u\rangle+\int\left(\mathrm{e}^{\mathrm{i} (u,x)}-1-\mathrm{i}\langle u,x\rangle\right)\nu(\mathrm{d}x)\]
for some drift parameter \(\gamma\in\mathbb{R}^{d}\) and some positive semidefinite volatility matrix \(\Sigma\in\mathbb{R}^{d\times d}\), see Sato (1999). Denoting by \(\Delta\) the Laplace operator, i.e.
\[\Delta g\coloneqq\sum_{k=1}^{d}\frac{\partial^{2}g}{\partial u_{k}^{2}}\]
for a function \(g\colon\mathbb{R}^{d}\to\mathbb{C}\) which is twice differentiable in every direction, we have
\[\nabla\psi(u) =\mathrm{i}\gamma-\Sigma u+\mathrm{i}\int x\big{(}\mathrm{e}^{i\langle u,x\rangle}-1\big{)}\,\nu(\mathrm{d}x), \tag{1}\] \[\Delta\psi(u) =-\operatorname{tr}(\Sigma)-\int|x|^{2}\mathrm{e}^{i\langle u,x \rangle}\,\nu(\mathrm{d}x)=-\operatorname{tr}(\Sigma)-\mathcal{F}[|x|^{2}\nu]( u)=\frac{\varphi_{t}(u)\Delta\varphi_{t}(u)-(\nabla\varphi_{t}(u))^{2}}{t\varphi_{t}^{2} (u)}, \tag{2}\]
where the integral in the first line is component-wise and \(\mathcal{F}[|x|^{2}\nu]\coloneqq\int\mathrm{e}^{\mathrm{i}\langle\cdot,x \rangle}|x|^{2}\,\nu(\mathrm{d}x)\).
To motivate our estimator for \(\nu\), suppose \(\Sigma=0\). In view of (2), we then have \(\nu=-|\cdot|^{-2}\mathcal{F}^{-1}[\Delta\psi]\) and \(\Delta\psi\) can naturally be estimated using the empirical characteristic function \(\widehat{\varphi}_{\delta,n}(u)\coloneqq\frac{1}{n}\sum_{k=1}^{n}\mathrm{e}^{ \mathrm{i}\langle u,Y_{k}\rangle}\) leading to
\[\widehat{\Delta\psi_{n}}(u)\coloneqq\frac{\widehat{\varphi}_{\delta,n}(u)\Delta\widehat{\varphi}_{\delta,n}(u)-(\nabla\widehat{\varphi}_{\delta,n}(u))^{2}}{\delta\widehat{\varphi}_{\delta,n}^{2}(u)}\mathbbm{1}_{\{|\widehat{\varphi}_{\delta,n}(u)|\geqslant T^{-1/2}\}} \tag{3}\]
with the indicator ensuring a well-defined expression. Therefore, granted \(\nu\) has a Levy density also denoted by \(\nu\), it is reasonable to propose the estimator
\[\widehat{\nu}_{h}(x)\coloneqq-|x|^{-2}\mathcal{F}^{-1}\big{[}\mathcal{F}K_{h }\widehat{\Delta\psi_{n}}\big{]}(x),\qquad x\in\mathbb{R}^{d}\setminus\{0\}, \tag{4}\]
where \(K\) is a band-limited kernel with bandwidth \(h>0\) (\(K_{h}\coloneqq h^{-d}K(\cdot/h)\)) satisfying for some order \(p\in\mathbb{N}\) that for any multi-index \(0\neq\beta\in\mathbb{N}_{0}^{d}\) with \(|\beta|_{1}\leqslant p\) we have
\[\int_{\mathbb{R}^{d}}K(x)\,\mathrm{d}x=1,\qquad\int x^{\beta}K(x)\,\mathrm{d} x=0\qquad\text{and}\qquad\operatorname{supp}\mathcal{F}K\subseteq[-1,1]^{d}. \tag{5}\]
For \(d=1\), we recover the jump density estimator used by Trabs (2015) to estimate quantiles of Levy measures.
A suitable kernel can be constructed as \(K\coloneqq(\mathcal{F}^{-1}g)/g(0)\) from an integrable even function \(g\in C^{\infty}(\mathbb{R}^{d})\) with support contained in \([-1,1]^{d}\), \(g(0)\neq 0\) and vanishing mixed partial derivatives of order up to \(p\) at \(0\). For the theoretical analysis, it will be useful to consider a kernel with product structure \(K(x)=\prod_{j=1}^{d}K^{j}(x_{j})\) for kernels \(K^{j}\) on \(\mathbb{R}\), each with order \(p\), i.e. for all \(q\in\mathbb{N}\), \(q\leqslant p\)
\[\int_{\mathbb{R}}K^{j}(x_{j})\,\mathrm{d}x_{j}=1,\qquad\int x_{j}^{q}K^{j}(x_{j})\,\mathrm{d}x_{j}=0\qquad\text{and}\qquad\operatorname{supp}\mathcal{F}K^{j}\subseteq[-1,1].\]
Obviously, such a product kernel also fulfills (5).
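To make the construction concrete, the following is a minimal numerical sketch (our own illustration, not part of the original text) of the estimator (3)-(4) for \(d=2\): the empirical characteristic function and its derivatives are evaluated on a frequency grid, and the inverse Fourier transform is computed by simple quadrature. The kernel enters only through its Fourier transform `FK` (for instance the flat-top kernel used in Section 3); the grid resolution `M` and the chunk size are illustrative choices.

```python
import numpy as np

def estimate_levy_density(Y, delta, h, FK, x_points, M=64):
    """Sketch of the spectral estimator (4) for d = 2.

    Y        : (n, 2) array of observed increments
    delta    : time step between observations
    h        : bandwidth; frequencies are restricted to [-1/h, 1/h]^2
    FK       : Fourier transform of the kernel, FK(u1, u2), supported in [-1, 1]^2
    x_points : (m, 2) array of evaluation points away from the origin
    """
    n = len(Y)
    T = n * delta
    u = np.linspace(-1.0 / h, 1.0 / h, M)
    du = u[1] - u[0]
    U1, U2 = np.meshgrid(u, u, indexing="ij")

    # empirical characteristic function, its gradient and its Laplacian on the grid
    phi = np.zeros((M, M), complex)
    d1 = np.zeros((M, M), complex)
    d2 = np.zeros((M, M), complex)
    lap = np.zeros((M, M), complex)
    for start in range(0, n, 2000):              # accumulate in chunks to limit memory
        Yc = Y[start:start + 2000]
        phase = np.exp(1j * (Yc[:, 0, None, None] * U1 + Yc[:, 1, None, None] * U2))
        phi += phase.sum(axis=0)
        d1 += (1j * Yc[:, 0, None, None] * phase).sum(axis=0)
        d2 += (1j * Yc[:, 1, None, None] * phase).sum(axis=0)
        lap += (-(Yc[:, 0] ** 2 + Yc[:, 1] ** 2)[:, None, None] * phase).sum(axis=0)
    phi, d1, d2, lap = phi / n, d1 / n, d2 / n, lap / n

    # plug-in estimator of the Laplacian of the characteristic exponent, cf. (3)
    keep = np.abs(phi) >= T ** (-0.5)
    num = phi * lap - (d1 ** 2 + d2 ** 2)
    delta_psi = np.zeros_like(phi)
    delta_psi[keep] = num[keep] / (delta * phi[keep] ** 2)

    # nu_hat(x) = -|x|^{-2} F^{-1}[FK_h delta_psi](x), inverse transform by quadrature
    weight = FK(h * U1, h * U2) * delta_psi
    nu_hat = np.empty(len(x_points))
    for j, x in enumerate(np.asarray(x_points, float)):
        osc = np.exp(-1j * (x[0] * U1 + x[1] * U2))
        inv = (osc * weight).sum() * du ** 2 / (2.0 * np.pi) ** 2
        nu_hat[j] = max(-inv.real / (x @ x), 0.0)   # Re(.) v 0, as used in Section 3
    return nu_hat
```

Whether the inversion is done by quadrature, as here, or by an FFT is immaterial for the illustration; what matters is that all frequencies are restricted to \(I_{h}=[-1/h,1/h]^{2}\), where \(\mathcal{F}K_{h}\) is supported.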
### Convergence rates
To control the estimation error, we need to impose smoothness and moment conditions on the Levy density. To this end, we introduce for a number of moments \(m>0\), a regularity index \(s>0\), an open subset \(U\subseteq\mathbb{R}^{d}\) and a universal constant \(R>0\)
\[\mathcal{C}^{s}(m,U,R)\coloneqq\Big{\{}(\Sigma,\gamma,\nu)\, \Big{|}\,\Sigma\in\mathbb{R}^{d\times d}\text{ positive-semidefinite},\operatorname{tr}(\Sigma)\leqslant R,\,\gamma\in \mathbb{R}^{d},\,\int|x|^{m}\nu(\mathrm{d}x)\leqslant R\\ \nu\text{ has a Lebesgue density with }\||x|^{2}\nu\|_{C^{s}(U)}\leqslant R\Big{\}},\]
where
\[\|f\|_{C^{s}(U)}\coloneqq\sum_{|\beta|_{1}\leqslant\lfloor s\rfloor}\sup_{x \in U}|f^{(\beta)}(x)|+\max_{|\beta|_{1}=\lfloor s\rfloor}\sup_{x,y\in U,x \neq y}\frac{|f^{(\beta)}(x)-f^{(\beta)}(y)|}{|x-y|^{s-\lfloor s\rfloor}}\]
denotes the Holder norm with regularity index \(s\) for any function \(f\) which has derivatives \(f^{(\beta)}\) of order up to \(\lfloor s\rfloor\coloneqq\max\{k\in\mathbb{N}_{0}:k<s\}\) on \(U\). \(C^{s}(U)\) denotes the set of all Holder-regular functions on \(U\) with regularity index \(s>0\). Since we will require regularity of the Levy density in a small \(\zeta\)-neighborhood beyond \(U\) for a uniform rate, we set \(U_{\zeta}\coloneqq\{x\in\mathbb{R}^{d}\mid\exists u\in U:|x-u|<\zeta\}\) for some radius \(\zeta>0\).
In view of (3), it is natural that the estimation error will also depend on the decay behavior of the characteristic function, which, in turn, is affected by the presence of a Gaussian component. Therefore, we distinguish the following two classes of Levy processes. First is the so-called mildly ill-posed case with a decay exponent \(\alpha>0\)
\[\mathcal{D}^{s}(\alpha,m,U,R,\zeta)\coloneqq\big{\{}(0,\gamma,\nu)\in\mathcal{C}^{s}(m,U_{\zeta},R)\,\big{|}\,\|(1+|\cdot|_{\infty})^{-\alpha}/\varphi_{1}\|_{\infty}\leqslant R,\,\||x|\nu\|_{\infty}\leqslant R\big{\}}.\]
As alluded to in the introduction, a Gaussian component obscures the jumps in addition to the discreteness of the observations and is therefore treated as the severely ill-posed case with \(\alpha,r,\eta>0\)
\[\mathcal{E}^{s}(\alpha,m,U,r,R,\zeta,\eta)\coloneqq\big{\{}(\Sigma,\gamma,\nu )\in\mathcal{C}^{s}(m,U_{\zeta},R)\,\big{|}\,\|\exp(-r|\cdot|_{\infty}^{ \alpha})/\varphi_{1}\|_{\infty}\leqslant R,|x|^{3-\eta}\nu(x)\leqslant R\ \forall|x| \leqslant 1\big{\}}.\]
The parameters \(\alpha\) and \(r\) control the exponential decay of the characteristic function. Note that \(\Sigma\neq 0\) already implies \(\alpha=2\).
In the mildly ill-posed case, the Blumenthal-Getoor index of the Levy process is at most \(1\), whereas in the severely ill-posed case it is at most \(\big{(}(3-\eta)\wedge 2\big{)}\lor 0\), where we set \(a\wedge b\coloneqq\min\{a,b\}\) and \(a\lor b\coloneqq\max\{a,b\}\) for \(a,b\in\mathbb{R}\).
For these regularity classes, we are able to quantify the estimation error as follows.
**Theorem 1**.: _Let \(\alpha,r,R,\zeta>0,s>1,m>4\) and let the kernel satisfy (5) with order \(p\geqslant s\). Let \(U\subseteq\mathbb{R}^{d}\) be an open set which is bounded away from \(0\). We have for \(0<\delta\leqslant R\), \(n\to\infty\):_
1. _If_ \(U\) _is bounded and_ \(h=h_{\delta,n}=(\log(T)/T)^{1/(2s+2\delta\alpha+d)}\)_, then uniformly in_ \((\Sigma,\gamma,\nu)\in\mathcal{D}^{s}(\alpha,m,U,R,\zeta)\)__ \[\sup_{x^{*}\in U}|\widehat{\nu}_{h}(x^{*})-\nu(x^{*})|=\mathcal{O}_{\mathbb{P} }\Big{(}\Big{(}\frac{\log T}{T}\Big{)}^{s/(2s+2\delta\alpha+d)}\Big{)}.\] _If_ \(\delta=n^{-\varepsilon}\) _with_ \(\varepsilon\in(0,1)\)_, the choice_ \(h=(\log(T)/T)^{1/(2s+d)}\) _yields the rate_ \((\log(T)/T)^{s/(2s+d)}\)_._
2. _If_ \(|\cdot|^{p+d}K\in L^{1}(\mathbb{R}^{d})\)_,_ \(\eta>0\) _and_ \(h=h_{\delta,n}=(\log(T)/(4r\delta))^{-1/\alpha}\)_, then uniformly in_ \((\Sigma,\gamma,\nu)\in\mathcal{E}^{s}(\alpha,m,U,r,R,\zeta,\eta)\)__ \[\sup_{x^{*}\in U}|\widehat{\nu}_{h}(x^{*})-\nu(x^{*})|=\mathcal{O}_{\mathbb{P} }\Big{(}\Big{(}\frac{\log T}{4r\delta}\Big{)}^{-s/\alpha}\Big{)}.\] _If_ \(\delta=n^{-\varepsilon}\) _with_ \(\frac{3}{2(s+d)+1}\sqrt{\frac{\alpha}{2(s+d)+\alpha}}<\varepsilon<1\)_, the choice_ \(h=T^{-1/(2(s+d))}\) _yields the rate_ \(T^{-s/(2(s+d))}\)_._
This theorem generalizes Trabs (2015, Proposition 2) to the multivariate case and additionally allows for high-frequency observations. Figs. 1 and 2 illustrate a simulation example of the estimation method.
In the mildly ill-posed case, one can easily attain the same rates without the logarithm when considering the pointwise loss.
We first discuss the low-frequency regime: For \(d=1\), our rates coincide with the proven minimax-optimal rates in the corresponding nonparametric deconvolution problems, see Fan (1991). In the mildly ill-posed case with \(d=1\), the pointwise variant of our rate has been shown to be minimax-optimal under the assumption that \(x\nu\) is \(s\)-Sobolev-regular, see Kappus (2012). In the severely ill-posed case with \(d=1\) and \(\alpha\in\{1,2\}\), our rates coincide with the minimax-optimal rates of Neumann & Reiss (2009), who consider the integrated risk in the estimation of \(\Sigma\delta_{0}(\mathrm{d}x)+|x|^{2}(1+|x|^{2})^{-1}\nu(\mathrm{d}x)\) against test functions with Sobolev regularity \(s\). This measure has an atom in \(0\) and is therefore not smooth. Hence, the regularity in the rate comes purely from the test function. By considering \(U\) bounded away from \(0\), we can profit from the regularity of the Levy density outside the origin. We do not even suffer an additional loss for the dimension in the rate, only in the constant. Therefore, the above suggests the optimality of our rates.
One sees that the rates improve as the time grid narrows. If this refinement happens at an appropriate order compared to the growth of the sample, the ill-posedness vanishes completely in the mildly ill-posed case and the rate becomes polynomial in the severely ill-posed case. In the mildly ill-posed case with high-frequency observations, the rate corresponds to the minimax-optimal rate in a nonparametric regression.
It is straightforward to see from our proof that when estimating \(|\cdot|^{2}\nu\), we can forgo the exclusion of the origin from \(U\) while achieving the same rates in the mildly ill-posed case. In the severely ill-posed case, the unknown volatility of the Brownian component of the Levy process obstructs the observation of the small jumps. Hence, we can benefit from a pilot estimator for \(\Sigma\). As discussed earlier, even with a minimax-optimal estimator for \(\Sigma\), we would suffer a loss in the overall rate. However, in view of (2), it suffices to estimate the onedimensional parameter \(\operatorname{tr}(\Sigma)\) which is easier compared to the \(d\times d\)-matrix \(\Sigma\). Following the spectral approach again, we propose the estimator
\[\widehat{\operatorname{tr}(\Sigma)}\coloneqq\widehat{\operatorname{tr}( \Sigma)}_{h}\coloneqq-\int W_{h}(u)\widehat{\Delta\psi_{n}}(u)\,\mathrm{d}u.\]
where \(W_{h}=h^{d}W(h\cdot)\) for a bandwidth \(h>0\) (corresponding to the threshold \(h^{-1}\)) and a weight function \(W\colon\mathbb{R}^{d}\to\mathbb{R}\) with
\[\int W(u)\,\mathrm{d}u=1\qquad\text{and}\qquad\operatorname{supp}W\subseteq[-1,1]^{d}.\]
This estimator achieves a rate of \((\log T)^{-(s+d)/\alpha}\) and is incorporated into the estimator for \(|\cdot|^{2}\nu\) via
\[\widehat{|\cdot|^{2}\nu}_{h}\coloneqq-\mathcal{F}^{-1}\big{[}\mathcal{F}K_{h }\big{(}\widehat{\Delta\psi_{n}}+\widehat{\operatorname{tr}(\Sigma)}_{h}\big{)} \big{]}\]
leading to the following extension of Theorem 1.
**Proposition 2**.: _Let \(\alpha,r,R,\zeta,\eta>0,1<s\in\mathbb{N},m>4\) and let the kernel satisfy (5) with order \(p\geqslant s\) as well as \(|\cdot|^{p+d}K\in L^{1}(\mathbb{R}^{d})\). Assume \(\|\mathcal{F}^{-1}[W(x)/x_{k}^{s}]\|_{L^{1}}<\infty\) for some \(k\). Choosing \(h=(\log(T)/(4r\delta))^{-1/\alpha}\) we have uniformly in \((\Sigma,\gamma,\nu)\in\mathcal{E}^{s}(\alpha,m,\mathbb{R}^{d},r,R,\zeta,\eta)\)_
\[\sup_{x^{*}\in\mathbb{R}^{d}}\big{|}\big{(}\widehat{|\cdot|^{2}\nu}_{h}\big{)} (x^{*})-|x^{*}|^{2}\nu(x^{*})\big{|}=\mathcal{O}_{\mathbb{P}}\Big{(}\Big{(} \frac{\log T}{4r\delta}\Big{)}^{-s/\alpha}\Big{)}.\]
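As a numerical aside (our own sketch, not part of the statement above): reusing the frequency grid `U1`, `U2` and the array `delta_psi` from the implementation sketch of Section 2, and choosing, purely for illustration, the weight function \(W\) as the uniform density on \([-1,1]^{2}\), the trace estimator can be evaluated by quadrature.

```python
import numpy as np

def estimate_trace_sigma(delta_psi, U1, U2, h):
    """Evaluate -integral of W_h(u) * delta_psi(u) du with W_h(u) = h^2 W(h u),
    where W is the uniform density on [-1, 1]^2 (an illustrative weight choice)."""
    du = U1[1, 0] - U1[0, 0]                  # grid spacing (meshgrid with indexing="ij")
    W_h = (h ** 2 / 4.0) * ((np.abs(h * U1) <= 1) & (np.abs(h * U2) <= 1))
    return -np.real(np.sum(W_h * delta_psi)) * du ** 2
```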
Figure 1: 3D plot of \(|\cdot|^{2}\nu\) (left) and its estimate (right) for a twodimensional compound Poisson process with Gaussian jumps.
### Independent components
Compared to the onedimensional case, we need to take the dependence structure of the components of the process into account. In particular, our previous assumption about \(\nu\) having a Lebesgue-density on \(\mathbb{R}^{d}\) rules out Levy processes where all components are independent, since the corresponding Levy measure would only have mass on the coordinate cross. Similarly, Levy processes consisting of multiple mutually independent blocks of components, where the components within the same block depend on each other, are not covered. For the sake of notational simplicity, we focus on the case of two equisized independent blocks: Let \(d\) be even and \(L=(L^{(1)},L^{(2)})\), where \(L^{(1)}\) and \(L^{(2)}\) are two independent Levy processes on \(\mathbb{R}^{d/2}\) with characteristic triplets \((\Sigma_{1},\gamma_{1},\nu_{1})\) and \((\Sigma_{2},\gamma_{2},\nu_{2})\), respectively. Denoting by \(\delta_{0}\) the Dirac-measure in \(0\in\mathbb{R}^{d/2}\), it holds that
\[\nu(\mathrm{d}x)=\nu_{1}(\mathrm{d}x^{(1)})\otimes\delta_{0}(\mathrm{d}x^{(2) })+\delta_{0}(\mathrm{d}x^{(1)})\otimes\nu_{2}(\mathrm{d}x^{(2)})\qquad x=(x^ {(1)},x^{(2)}),\,x^{(1)},x^{(2)}\in\mathbb{R}^{d/2}. \tag{6}\]
We summarize the class of such Levy processes as
\[\widetilde{\mathcal{C}}(m,R)\coloneqq\Big{\{}(\Sigma,\gamma,\nu)\,\Big{|}\,\Sigma=\begin{pmatrix}\Sigma_{1}&0\\ 0&\Sigma_{2}\end{pmatrix},\mathrm{tr}(\Sigma)\leqslant R,\Sigma_{1},\Sigma_{2}\in\mathbb{R}^{d/2\times d/2}\text{ positive-semidefinite},\gamma\in\mathbb{R}^{d},\\ \int|x|^{m}\,\nu(\mathrm{d}x)\leqslant R,\nu\text{ has the form (6) and }\nu_{1},\nu_{2}\text{ have Lebesgue densities}\Big{\}}\]
for \(m,R>0\). A simple example of such a Levy measure and its estimate are illustrated in Fig. 3.
As before, we distinguish between the mildly ill-posed case with \(\alpha>0\)
\[\widetilde{\mathcal{D}}(\alpha,m,R)\coloneqq\big{\{}(0,\gamma,\nu)\in\widetilde{\mathcal{C}}(m,R)\,\big{|}\,\|(1+|\cdot|_{\infty})^{-\alpha}/\varphi_{1}\|_{\infty}\leqslant R,\,\||x_{k}|\nu_{k}\|_{\infty}\leqslant R,\,k=1,2\big{\}}\]
and the severely ill-posed case with \(\alpha,r,\eta>0\)
\[\widetilde{\mathcal{E}}(\alpha,m,r,R,\eta)\coloneqq\big{\{}(\Sigma,\gamma, \nu)\in\widetilde{\mathcal{C}}(m,R)\,\big{|}\,\|\exp(-r|\cdot|_{\infty}^{ \alpha})/\varphi_{1}\|_{\infty}\leqslant R,|x_{k}|^{3-\eta}\nu_{k}(x_{k}) \leqslant R\;\forall|x_{k}|\leqslant 1,k=1,2\big{\}}\]
based on the decay behavior of the characteristic function and the presence of a Gaussian component.
If this dependence structure were known, we could separate the blocks in the observations, apply our method to each block and obtain an estimator for the overall Levy measure. Since this is not the case,
Figure 2: Heatmap of \(|\cdot|^{2}\nu\) (left) and its estimate (right) for a twodimensional compound Poisson process with Gaussian jumps.
we are left with applying our initial method. In spite of the unknown dependence structure, we will be able to quantify the estimation error. Due to the structure of the Levy measure, we cannot hope for a pointwise quantitative bound. Instead, we consider the error in a functional sense. To this end, we introduce the following class of test functions for \(\varrho>0\) and \(U\subseteq\mathbb{R}^{d}\)
\[F_{\varrho}(U,R)\coloneqq\{f\colon\mathbb{R}^{d}\to\mathbb{R}\mid f\in C^{ \varrho}(\mathbb{R}^{d}),\|f\|_{C^{\varrho}(\mathbb{R}^{d})},\|f\|_{L^{1}( \mathbb{R}^{d})}\leqslant R,\,\mathrm{supp}\,f\subseteq U\}.\]
**Theorem 3**.: _Let \(\alpha,r,R>0,\varrho>1,m>4\), let the kernel have product structure and satisfy (5) with order \(p\geqslant\varrho\). Then, we have for \(0<\delta\leqslant R,n\to\infty\):_
1. _If_ \(U\subseteq\mathbb{R}^{d}\) _is bounded and_ \(h=(\log(T)/T)^{1/(2\varrho+2\delta\alpha+3d/2)}\)_, then uniformly in_ \((\Sigma,\gamma,\nu)\in\widetilde{\mathcal{D}}(\alpha,m,R)\)__ \[\sup_{f\in F_{\varrho}(U,R)}\Big{|}\int_{U}f(x)|x|^{2}\big{(}\nu(\mathrm{d}x) -\widehat{\nu}_{h}(\mathrm{d}x)\big{)}\Big{|}=\mathcal{O}_{\mathbb{P}}\Big{(} \Big{(}\frac{\log T}{T}\Big{)}^{\varrho/(2\varrho+2\delta\alpha+3d/2)}\Big{)}.\]
2. _If_ \(U\subseteq\mathbb{R}^{d}\) _is bounded away from_ \(0\)_,_ \(|\cdot|^{p+d}K\in L^{1}(\mathbb{R}^{d})\)_,_ \(\eta>0\) _and_ \(h=(\log(T)/(4r\delta))^{-1/\alpha}\)_, then uniformly in_ \((\Sigma,\gamma,\nu)\in\widetilde{\mathcal{E}}(\alpha,m,r,R,\eta)\)__ \[\sup_{f\in F_{\varrho}(U,R)}\Big{|}\int_{U}f(x)|x|^{2}\big{(}\nu(\mathrm{d}x) -\widehat{\nu}_{h}(\mathrm{d}x)\big{)}\Big{|}=\mathcal{O}_{\mathbb{P}}\Big{(} \Big{(}\frac{\log T}{4r\delta}\Big{)}^{-\varrho/\alpha}\Big{)}.\]
Note that the regularity parameter \(\varrho\) in the rates comes from the smoothness of the test functions as compared to the smoothness \(s\) of the Levy measure in Theorem 1. In the severely ill-posed case, the result is analogous to the well-specified case. In the mildly ill-posed case, we pay for the dependence structure with an additional \(d/2\) in the rate. Morally, one can interpret this as the model dimension being \(3d/2\) instead of \(d\).
_Remark 4_.: The product kernel is compatible with any dependence structure of blocks, regardless of their size. For instance, if all components of the process are independent, one still gets the analogous result in the severely ill-posed case. In the mildly ill-posed case, the dimension appearing in the rate is \(2d-1\) instead of \(3d/2\). Comparing the dependence structures, one finds that the two independent blocks are an in-between case of no independent blocks and fully independent components.
Figure 3: 3D plot of \(|\cdot|^{2}\nu\) (left) and its estimate (right) for a Lévy-process where both components are independent compound Poisson processes with Gaussian jumps.
### A uniform risk-bound for the characteristic function and linearization
A key ingredient in the proofs of our preceding results is the following moment-uniform risk bound for the multivariate characteristic function and its partial derivatives. It generalizes the existing results in the univariate case (see Kappus & Reiss 2010, Theorem 1) and the multivariate non-uniform case (see Belomestny & Trabs 2018, Proposition A.1).
**Proposition 5**.: _Let \(X_{1},X_{2},\dots\) be \(\mathbb{R}^{d}\)-valued i.i.d. random variables with characteristic function \(\varphi\) and empirical characteristic function \(\widehat{\varphi}_{n}\) such that \(\mathbb{E}[|X_{1}|^{2\beta}|X_{1}|^{\tau}]\lesssim\rho^{|\beta|_{1}\wedge 1}\) and \(\mathbb{E}[|X_{1}|^{2\beta}]\lesssim\rho^{|\beta|_{1}\wedge 1}\) for some multi-index \(\beta\in\mathbb{N}_{0}^{d}\) and \(\tau,\rho>0\). For the inverse softplus-type weight function \(w(u)=\log(\mathrm{e}+|u|)^{-(1+\chi)/2}\) with \(\chi>0\), we have_
\[\mathbb{E}\big{[}\big{\|}w(u)(\widehat{\varphi}_{n}-\varphi)^{(\beta)}(u) \big{\|}_{\infty}\big{]}\lesssim\rho^{(|\beta|_{1}\wedge 1)/2}n^{-1/2}.\]
As a direct consequence of Proposition 5, the indicator in the definition (3) equals one on the support of \(\mathcal{F}K_{h}\), with probability converging to one for the bandwidths we consider.
To prove our rates for \(\widehat{\nu}_{h}\), (2) lets us decompose the error
\[\widehat{\nu}_{h}(x^{*})-\nu(x^{*})\] \[=|x^{*}|^{-2}\big{(}\underbrace{\big{(}K_{h}*(|\cdot|^{2}\nu)-|\cdot|^{2}\nu\big{)}(x^{*})}_{=:B^{\nu}(x^{*})}-\underbrace{\frac{1}{\delta}\mathcal{F}^{-1}\big{[}\mathcal{F}K_{h}\Delta\big{(}(\widehat{\varphi}_{\delta,n}-\varphi_{\delta})/\varphi_{\delta}\big{)}\big{]}(x^{*})}_{=:L^{\nu}_{\delta,n}(x^{*})}+R_{\delta,n}+\mathrm{tr}(\Sigma)K_{h}(x^{*})\big{)} \tag{7}\]
into a bias term \(B^{\nu}\), the linearized stochastic error \(L^{\nu}_{\delta,n}\), the error \(\mathrm{tr}(\Sigma)K_{h}\) due to the volatility and a remainder term \(R_{\delta,n}\). Proposition 5 applied to the increments of the Levy process leads to the following linearization.
**Lemma 6**.: _Let \(\int|x|^{4+\tau}\,\nu(\mathrm{d}x)\leqslant R\) for some \(\tau>0\). If \(n^{-1/2}(\log h^{-1})^{(1+\chi)/2}\|\varphi_{\delta}^{-1}\|_{L^{\infty}(I_{h})}\to 0\) as \(n\to\infty\) for \(h\in(0,1),\chi>0\), it holds_
\[\sup_{|u|_{\infty}\leqslant h^{-1}}\big{|}\widehat{\Delta\psi_{n} }(u)-\Delta\psi(u)-\delta^{-1}\Delta\big{(}(\widehat{\varphi}_{\delta,n}- \varphi_{\delta})/\varphi_{\delta}\big{)}(u)\big{|}=\mathcal{O}_{\mathbb{P}}(a _{n}),\qquad\text{where}\] \[a_{n}\coloneqq n^{-1}(\log h^{-1})^{1+\chi}\|\varphi_{\delta}^{- 1}\|_{L^{\infty}(I_{h})}^{2}\delta^{-1/2}\big{(}\delta\big{\|}|\nabla\psi| \big{\|}_{L^{\infty}(I_{h})}+\delta^{3/2}\big{\|}|\nabla\psi|\big{\|}_{L^{ \infty}(I_{h})}^{2}+1\big{)}.\]
As a direct consequence, the remainder term is of the order
\[|R_{\delta,n}|=\mathcal{O}_{\mathbb{P}}\big{(}h^{-d}a_{n}\big{)}. \tag{8}\]
After treating the four terms in (7), the asserted rates follow from our bandwidth choices.
## 3 Simulation examples
We demonstrate the estimation of the Levy density for \(d=2\) with three examples: a compound Poisson process, a variance gamma process and two independent compound Poisson processes.
A challenge is to find examples of multivariate Levy processes for which paths can be simulated and the true Levy measure is accessible (at least numerically). To allow for plottable results, we consider the case \(d=2\) and compensate for the possible singularity of the Levy density at the origin, i.e. we plot \(|\cdot|^{2}\nu\) and its estimate. Throughout, we use the flat-top kernel \(K\), see McMurry & Politis (2004), as defined by its Fourier transform
\[\mathcal{F}K(u)\coloneqq\begin{cases}1,&|u|\leqslant c,\\ \exp\Big{(}-\frac{b\exp(-b/(|u|-c)^{2})}{(|u|-1)^{2}}\Big{)},&c<|u|<1,\\ 0,&|u|\geqslant 1,\end{cases}\]
whose decay behaviour is controlled by \(b>0\) and \(0<c<1\). In our simulations, \(b=1,c=1/50\) deliver stable results. While a product kernel is convenient for theoretical reasons in Section 2.2, it did not seem necessary in practice. Throughout, we simulate increments of the processes with a time difference of \(\delta=0.001\) and fix the bandwidth at \(h=4T^{-1/2}\). To conquer this ill-posed problem, we use large samples of \(n=500,000\) increments. From the definition (4) of the estimator, it is not guaranteed that \(\widehat{\nu}\geqslant 0\) and for numerical reasons even \(\widehat{\nu}\in\mathbb{C}\setminus\mathbb{R}\) is possible in practice. Therefore, we consider the estimator \(\mathrm{Re}(\widehat{\nu})\lor 0\) in our simulations.
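A direct implementation of this Fourier transform (our own sketch; we read \(|u|\) as the Euclidean norm and evaluate on arrays of frequencies, with the stated values \(b=1\), \(c=1/50\)) could look as follows. It can be passed as the `FK` argument of the estimator sketch from Section 2.

```python
import numpy as np

def flat_top_FK(u1, u2, b=1.0, c=1.0 / 50.0):
    """Fourier transform of the flat-top kernel: 1 for |u| <= c, a smooth decay for
    c < |u| < 1 and 0 for |u| >= 1, so that supp FK is contained in [-1, 1]^2."""
    r = np.hypot(u1, u2)
    out = np.zeros_like(r, dtype=float)
    out[r <= c] = 1.0
    mid = (r > c) & (r < 1.0)
    rm = r[mid]
    out[mid] = np.exp(-b * np.exp(-b / (rm - c) ** 2) / (rm - 1.0) ** 2)
    return out
```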
The most straightforward example under consideration is the compound Poisson process with intensity \(\lambda=100\) and twodimensional standard-Gaussian jumps. In this case, the Levy density is just the standard normal density, rescaled with the intensity \(\lambda\). Fig. 1 illustrates that the method captures the overall shape of the density. The heatmap in Fig. 2 provides a more detailed view especially around the origin. We observe that the decay for \(|x|\to\infty\) and \(|x|\searrow 0\) is well-estimated, with slight problems only arising on an annulus around the origin.
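For reference, the increments used in this example can be generated as follows (a sketch with the stated parameters \(\lambda=100\), \(\delta=0.001\); the seed is an arbitrary choice).

```python
import numpy as np

def compound_poisson_increments(n, delta, lam=100.0, d=2, seed=0):
    """n increments, over time steps of length delta, of a d-dimensional compound
    Poisson process with intensity lam and standard Gaussian jumps."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam * delta, size=n)        # number of jumps per increment
    Y = np.zeros((n, d))
    for k in np.flatnonzero(counts):                 # only increments containing jumps
        Y[k] = rng.standard_normal((counts[k], d)).sum(axis=0)
    return Y

Y = compound_poisson_increments(n=500_000, delta=0.001)
```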
A practical way to construct easy-to-simulate multivariate Levy processes is to subordinate multivariate Brownian motion. In particular, we use a gamma process with variance \(\kappa=1\) to subordinate a twodimensional standard Brownian motion. To access the Levy measure of the resulting variance gamma process, we approximate the theoretical expression from Cont & Tankov (2004, Theorem 4.2) numerically. The results are again illustrated in a 3D plot (Fig. 4) and as a heatmap (Fig. 5). In this example, the estimator suffers from oscillations around the true density which are to be expected from spectral-based methods.
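The increments of such a process can be simulated by drawing the subordinator increments, which for a gamma process with unit mean rate and variance \(\kappa\) are Gamma-distributed with shape \(\delta/\kappa\) and scale \(\kappa\), and scaling independent Gaussian vectors accordingly (a sketch under the stated parameters).

```python
import numpy as np

def variance_gamma_increments(n, delta, kappa=1.0, d=2, seed=0):
    """n increments of a d-dimensional variance gamma process: standard Brownian
    motion time-changed by a gamma subordinator with variance kappa."""
    rng = np.random.default_rng(seed)
    S = rng.gamma(shape=delta / kappa, scale=kappa, size=n)   # subordinator increments
    return np.sqrt(S)[:, None] * rng.standard_normal((n, d))

Y = variance_gamma_increments(n=500_000, delta=0.001)
```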
To demonstrate the method under the dependence structure discussed in Section 2.2, we consider a Levy process comprised of two independent compound Poisson processes, each with intensity \(\lambda=100\) and onedimensional standard-Gaussian jumps. In contrast to the twodimensional compound Poisson process considered at the beginning of this section, the jumps in both components are driven by independent Poisson processes. The corresponding Levy measure takes the form (6), where \(\nu_{1}\) and \(\nu_{2}\) are onedimensional standard-Gaussian densities, rescaled with \(\lambda\), as illustrated on the left hand side of Fig. 3. It is important to emphasize that the blue and the orange lines represent the Lebesgue-densities of both components on \(\mathbb{R}\), not \(\mathbb{R}^{2}\). The right hand side of the aforementioned figure reveals a strong performance of the estimator on the coordinate cross. Around the axes, we observe a smearing effect due to the singularity of the true Levy measure on the coordinate cross before the estimate drops off as we move away.
Figure 4: 3D plot of \(|\cdot|^{2}\nu\) (left) and its estimate (right) for a twodimensional variance gamma process.
## 4 Proofs
Throughout, set \(I_{h}\coloneqq[-h^{-1},h^{-1}]^{d}\) for \(h>0\). Note that \(\Sigma,\nu,\Delta\psi\) and \(\widehat{\Delta\psi_{n}}\) do not change if we consider increments based on the Levy process \((L_{t}-t\gamma_{0})_{t\geqslant 0}\) for some \(\gamma_{0}\in\mathbb{R}^{d}\). Hence, no generality is lost if we choose \(\gamma_{0}\) such that in the mildly ill-posed case
\[\nabla\psi=\mathrm{i}\mathcal{F}[x\nu]\qquad\qquad\text{and}\qquad\qquad \Delta\psi=-\mathcal{F}[|x|^{2}\nu] \tag{9}\]
and in the severely ill-posed case \(\gamma=0\), see Nickl et al. (2016, Lemma 12) for a similar argument in the onedimensional case.
Further, since \(\varphi_{\delta}=\varphi_{1}^{\delta}\) by infinite divisibility, the decay behavior of \(\varphi_{1}\) governs that of \(\varphi_{\delta}\). In particular, we have for \(0<\delta\leqslant R\)
\[\|(1+|\cdot|_{\infty})^{-\delta\alpha}/\varphi_{\delta}\|_{\infty}\leqslant(1 \lor R)^{R}\qquad\text{and}\qquad\|\exp(-r\delta|\cdot|_{\infty}^{\alpha})/ \varphi_{\delta}\|_{\infty}\leqslant(1\lor R)^{R}\]
in the mildly and the severely ill-posed case, respectively.
### Proof of Theorem 1
We extend the proof strategy by Trabs (2015) to accommodate the multivariate setting. To allow for the application to high-frequency observations, we carefully keep track of \(\delta\) throughout. Subsequently, we will analyze the four terms in (7).
#### 4.1.1 Controlling the linearized stochastic error
To control
\[L^{\nu}_{\delta,n}\coloneqq\mathcal{F}^{-1}\big{[}\mathcal{F}K_{h}\delta^{-1} \Delta\big{(}(\widehat{\varphi}_{\delta,n}-\varphi_{\delta})/\varphi_{\delta} \big{)}\big{]}\]
we need to get a grip on the partial derivatives of \(\widehat{\varphi}_{\delta,n}-\varphi_{\delta}\) in the Laplacian of \((\widehat{\varphi}_{\delta,n}-\varphi_{\delta})/\varphi_{\delta}\). In particular, we will show that
\[\sup_{u\in\mathbb{R}^{d}}\mathbb{E}\Big{[}\Big{|}\frac{\partial^{l}}{ \partial u_{k}^{l}}(\widehat{\varphi}_{\delta,n}-\varphi_{\delta})(u)\Big{|} \Big{]}\leqslant\sup_{u\in\mathbb{R}^{d}}\mathbb{E}\Big{[}\Big{|}\frac{ \partial^{l}}{\partial u_{k}^{l}}(\widehat{\varphi}_{\delta,n}-\varphi_{ \delta})(u)\Big{|}^{2}\Big{]}^{1/2}\stackrel{{!}}{{\lesssim}}n^{- 1/2}\delta^{(l\wedge 1)/2},\qquad l=0,1,2. \tag{10}\]
Figure 5: Heatmap of \(|\cdot|^{2}\nu\) (left) and its estimate (right) for a twodimensional variance gamma process.
Since
\[\mathbb{E}\Big{[}\Big{|}\frac{\partial^{l}}{\partial u_{k}^{l}}(\widehat{\varphi}_{ \delta,n}-\varphi_{\delta})(u)\Big{]}\Big{|}\leqslant n^{-1}\mathbb{E}[Y_{1,m}^ {2l}]=n^{-1}\Big{|}\frac{\partial^{2l}}{\partial u_{k}^{2l}}\varphi_{\delta}(0 )\Big{|}\qquad\forall u\in\mathbb{R}^{d},\]
where \(Y_{1,k}\) denotes the \(k\)-th entry of \(Y_{1}\), the case \(l=0\) is obvious and for \(l=1,2\) it remains to show
\[\Big{|}\frac{\partial^{2l}}{\partial u_{k}^{2l}}\varphi_{\delta}(0)\Big{|} \lesssim\delta.\]
In the mildly ill-posed case
\[\Big{|}\frac{\partial}{\partial u_{k}}\psi(0)\Big{|}\lesssim\int|x_{k}|\, \nu(\mathrm{d}x)\leqslant\int|x|\,\nu(\mathrm{d}x)\lesssim 1\]
and in the severely ill-posed case
\[\frac{\partial}{\partial u_{k}}\psi(0)=0.\]
The product rule for higher order derivatives yields
\[\Big{|}\frac{\partial^{2}\varphi_{\delta}}{\partial u_{k}^{2}}(0 )\Big{|} =\Big{|}\delta\varphi_{\delta}(u)\Big{(}\delta\Big{(}\frac{ \partial}{\partial u_{k}}\psi(u)\Big{)}^{2}+\frac{\partial^{2}}{\partial u_{k }^{2}}\psi(u)\Big{)}\Big{|}_{u=0}\lesssim\delta\Big{(}1+\int|x|^{2}\,\nu( \mathrm{d}x)\Big{)}\lesssim\delta\qquad\text{and}\] \[\Big{|}\frac{\partial^{4}\varphi_{\delta}}{\partial u_{k}^{4}}(0 )\Big{|} =\Big{|}\frac{\partial^{2}}{\partial u_{k}^{2}}\Big{(}\frac{ \partial^{2}\varphi_{\delta}}{\partial u_{k}^{2}}(u)\Big{)}(0)\Big{|} \lesssim\delta\sum_{j=0}^{2}\binom{2}{j}|\mathbb{E}[Y_{1,k}^{2-j}]|\Big{(}\Big{|} \frac{\partial^{j}}{\partial u_{k}^{j}}\Big{(}\frac{\partial\psi}{\partial u_ {k}}(u)\Big{)}^{2}\Big{|}_{u=0}+\Big{|}\frac{\partial^{j+2}\psi}{\partial u_{ k}^{j+2}}(0)\Big{|}\Big{)}\lesssim\delta,\]
where all emerging partial derivatives can again be absolutely and uniformly bounded using our assumptions on \(\nu\).
To simplify the notation, set \(m_{\delta,h}\coloneqq\mathcal{F}K_{h}/\varphi_{\delta}\) and recall \(x\cdot y\coloneqq\sum_{k=1}^{d}x_{k}y_{k}\) for \(x,y\in\mathbb{C}^{d}\).
For the severely ill-posed case, we have \(|\Delta\psi(u)|\lesssim 1\) and that \(|\mathrm{e}^{\mathrm{i}\langle u,x\rangle}-1|\leqslant|x||u|\) implies \(|\nabla\psi(u)|\lesssim|u|\). Together with (10), we obtain
\[\mathbb{E}\Big{[}\sup_{x^{*}\in\mathbb{R}^{d}}|L^{\nu}_{\delta,n}(x^{*})|\Big{]} \leqslant\delta^{-1}\mathbb{E}\big{[}\big{\|}\mathcal{F}^{-1}[m_{\delta,h}\Delta(\widehat{\varphi}_{\delta,n}-\varphi_{\delta})]\big{\|}_{\infty}\big{]}+2\mathbb{E}\big{[}\big{\|}\mathcal{F}^{-1}[m_{\delta,h}\nabla(\widehat{\varphi}_{\delta,n}-\varphi_{\delta})\cdot\nabla\psi]\big{\|}_{\infty}\big{]}\] \[\lesssim(2\pi)^{-d}\int_{I_{h}}\Big{(}\delta^{-1}\mathbb{E}\big{[}\big{|}\Delta(\widehat{\varphi}_{\delta,n}-\varphi_{\delta})(u)\big{|}\big{]}+\mathbb{E}\big{[}\big{|}\nabla(\widehat{\varphi}_{\delta,n}-\varphi_{\delta})(u)\cdot\nabla\psi(u)\big{|}\big{]}\Big{)}\big{|}\varphi_{\delta}(u)\big{|}^{-1}\,\mathrm{d}u\] \[\lesssim\pi^{-d}T^{-1/2}\int_{I_{h}}\big{(}1+\delta|u|+\delta^{1/2}+\delta^{3/2}|u|^{2}\big{)}\exp(r\delta|u|_{\infty}^{\alpha})\,\mathrm{d}u\] \[\lesssim T^{-1/2}\big{(}h^{-d}+\delta h^{-d-1}+\delta^{3/2}h^{-d-2}\big{)}\exp(r\delta h^{-\alpha}), \tag{11}\]
which will be dominated by the bias.
In the mildly ill-posed case, the stochastic error needs to be decomposed further into the main stochastic error
\[M^{\nu}_{\delta,n} \coloneqq-\frac{1}{T}\sum_{k=1}^{n}\mathcal{F}^{-1}\Big{[}m_{ \delta,h}\big{(}|Y_{k}|^{2}\mathrm{e}^{\mathrm{i}\langle u,Y_{k}\rangle}- \mathbb{E}\big{[}|Y_{k}|^{2}\mathrm{e}^{\mathrm{i}\langle u,Y_{k}\rangle}] \big{)}\Big{]}\qquad\text{and}\] \[M^{\nu}_{\delta,n}-L^{\nu}_{\delta,n} =2\mathcal{F}^{-1}\big{[}m_{\delta,h}\nabla(\widehat{\varphi}_{ \delta,n}-\varphi_{\delta})\cdot\nabla\psi\big{]}+\mathcal{F}^{-1}\big{[}m_{ \delta,h}(\widehat{\varphi}_{\delta,n}-\varphi_{\delta})\big{(}\Delta\psi- \delta(\nabla\psi)^{2}\big{)}\big{]}. \tag{12}\]
To control the difference (12), note that \(\||x|\nu\|_{\infty}\leqslant R\) and \(\||x|^{m}\nu\|_{L^{1}}\leqslant R\) imply \(\||x|\nu\|_{L^{1}}\), \(\||x|\nu\|_{L^{2}}\), \(\||x|^{2}\nu\|_{L^{2}}\lesssim 1\). Further, the support of \(\mathcal{F}K\) and the decay behavior of \(\varphi_{\delta}\) ensure
\[\|m_{\delta,h}\|_{L^{2}}^{2}\lesssim\int|\mathcal{F}K(hu)|^{2}(1+|u|)^{2\delta \alpha}\,\mathrm{d}u\lesssim(1+h^{-1})^{2\delta\alpha}h^{-d}\lesssim h^{-2 \delta\alpha-d}. \tag{13}\]
Hence, (10) and the Cauchy-Schwarz inequality together with (9) and the Plancherel theorem lead to
\[\mathbb{E}\Big{[}\sup_{x^{*}\in U}\big{|}M^{\nu}_{\delta,n}(x^{*})-L^{\nu}_{\delta,n}(x^{*})\big{|}\Big{]} \leqslant(2\pi)^{-d}\big{(}2\mathbb{E}\big{[}\big{\|}m_{\delta,h}\nabla(\widehat{\varphi}_{\delta,n}-\varphi_{\delta})\cdot\nabla\psi\big{\|}_{L^{1}}\big{]}\] \[\qquad\qquad\qquad+\mathbb{E}\big{[}\big{\|}m_{\delta,h}(\widehat{\varphi}_{\delta,n}-\varphi_{\delta})\big{(}\Delta\psi-\delta(\nabla\psi)^{2}\big{)}\big{\|}_{L^{1}}\big{]}\big{)}\] \[\lesssim n^{-1/2}\|m_{\delta,h}\|_{L^{2}}\big{(}\delta^{1/2}\||x|\nu\|_{L^{2}}+\||x|^{2}\nu\|_{L^{2}}+\delta d^{2}\||x|\nu\|_{L^{2}}\||x|\nu\|_{L^{1}}\big{)}\] \[\lesssim n^{-1/2}h^{-\delta\alpha-d/2}. \tag{14}\]
Being the sum of centered i.i.d. random variables, the main stochastic error for fixed \(x\) is controlled by Bernstein's inequality as summarized in the following lemma.
**Lemma 7**.: _Let \(\alpha,R,\zeta>0,m>4\) and \(x\in\mathbb{R}^{d}\) and let the kernel satisfy (5) for \(p\geqslant 1\). If \((\Sigma,\gamma,\nu)\in\mathcal{D}^{s}(\alpha,m,U,R,\zeta)\), then there exists some constant \(c>0\) depending only on \(R,\alpha\) and \(d\) such that for any \(\kappa_{0}>0\) and any \(n\in\mathbb{N},0<\delta\leqslant R,h\in(0,1)\)_
\[\mathbb{P}\big{(}|M^{\nu}_{\delta,n}(x)|\geqslant\kappa_{0}T^{-1/2}h^{- \delta\alpha-d/2}\big{)}\leqslant 2\exp\Big{(}-\frac{c\kappa_{0}^{2}}{(1+|x|^{3}) \big{(}1+\kappa_{0}(h^{d}T)^{-1/2}\big{)}}\Big{)}.\]
To establish a uniform bound for \(x^{*}\in U\), a union bound extends this lemma to a discretization of the bounded set \(U\) and Lipschitz continuity of \(x\mapsto M^{\nu}_{\delta,n}(x)\) allows us to control the discretization error. In particular, a standard covering argument yields a discretization \(x_{1},\dots,x_{N_{n}}\in\mathbb{R}^{d}\) of \(U\) such that \(\sup_{x^{*}\in U}\min_{l=1,\dots,N_{n}}|x^{*}-x_{l}|\leqslant T^{-2}\), \(N_{n}\lesssim T^{2d}\) and \(\max_{l=1,\dots,N_{n}}|x_{l}|\leqslant C\) with some \(C>0\) independent of \(n\).
\[M^{\nu}_{\delta,n}=K_{h}*g\qquad\text{with}\qquad g\coloneqq\delta^{-1} \mathcal{F}^{-1}\big{[}1_{I_{h}}\varphi_{\delta}^{-1}\Delta(\widehat{\varphi}_ {\delta,n}-\varphi_{\delta})\big{]},\]
the fundamental theorem of calculus together with the order of the kernel ensures the Lipschitz continuity of \(M^{\nu}_{\delta,n}\) via
\[|M^{\nu}_{\delta,n}(x)-M^{\nu}_{\delta,n}(y)|=\Big{|}\int_{0}^{1 }(x-y)\cdot\nabla(K_{h}*g)(y+\tau(x-y))\,\mathrm{d}\tau\Big{|} \leqslant|x-y|\mathbb{E}[\|g\|_{\infty}]\big{\|}|\nabla K_{h}|_{1} \big{\|}_{L^{1}}\] \[\lesssim|x-y|h^{-1}\mathbb{E}[\|g\|_{\infty}].\]
Therefore, the discretization error is upper bounded by
\[\mathbb{E}\Big{[}\sup_{x^{*}\in U}\min_{l=1,\dots,N_{n}}|M^{\nu}_{\delta,n}(x^ {*})-M^{\nu}_{\delta,n}(x_{l})|\Big{]}\lesssim T^{-5/2}h^{-1}\int_{I_{h}}| \varphi_{\delta}^{-1}(u)|\,\mathrm{d}u\lesssim T^{-5/2}h^{-\delta\alpha-d-1}.\]
Combining the above with Markov's inequality yields for any \(\kappa_{0}\) such that \(2d<\frac{c}{6}\kappa_{0}^{2}/(1+C^{3})\) with \(c\) from Lemma 7 and \(T\) with \(\kappa_{0}^{2}\log(T)/(Th^{d})\leqslant 1\)
\[\mathbb{P}\Big{(}\sup_{x^{*}\in U}|M^{\nu}_{\delta,n}(x^{*})|> \kappa_{0}\Big{(}\frac{\log T}{T}\Big{)}^{1/2}h^{-\delta\alpha-d/2}\Big{)}\] \[\leqslant\frac{2}{\kappa_{0}}\Big{(}\frac{\log T}{T}\Big{)}^{-1/2 }h^{\delta\alpha+d/2}\mathbb{E}\Big{[}\sup_{x^{*}\in U}\min_{l=1,\dots,N_{n}}|M ^{\nu}_{\delta,n}(x^{*})-M^{\nu}_{\delta,n}(x_{l})|\Big{]}\] \[\qquad\qquad+2N_{n}\exp\Big{(}-\frac{c\kappa_{0}^{2}\log T}{2(1+ C^{3})\big{(}2+\kappa_{0}(\log(T)/(Th^{d}))^{1/2}\big{)}}\Big{)}\] \[\lesssim\frac{2}{\kappa_{0}}\Big{(}\frac{\log T}{T}\Big{)}^{-1/2 }h^{-d/2-1}T^{-5/2}+2\exp\Big{(}\Big{(}2d-\frac{c\kappa_{0}^{2}}{6(1+C^{3})} \Big{)}\log T\Big{)}. \tag{15}\]
The second term obviously converges to \(0\) as \(T\to\infty\). For the first term, \(3d/2\geqslant d/2+1\) implies
\[\frac{2}{\kappa_{0}}\Big{(}\frac{\log T}{T}\Big{)}^{-1/2}h^{-d/2-1}T^{-5/2} \leqslant\frac{2}{\kappa_{0}}\Big{(}\frac{\log T}{T}\Big{)}^{-1/2}h^{-3d/2}T^{- 5/2}=\frac{2}{\kappa_{0}}\Big{(}\frac{\log T}{Th^{d}}\Big{)}^{3/2}T^{-1/2}( \log T)^{-2}\]
and the right hand side converges to \(0\) by our choice of bandwidth.
Overall, (14) and (15) show
\[|L^{\nu}_{\delta,n}|=\mathcal{O}_{\mathbb{P}}\Big{(}\Big{(}\frac{\log T}{T} \Big{)}^{1/2}h^{-\delta\alpha-d/2}\Big{)},\]
which our bandwidths balance with the bias.
#### 4.1.2 Controlling the error owed to the volatility
We now consider the last term in (7). The mildly ill-posed case is trivial since \(\Sigma=0\). Turning to the severely ill-posed case, we first aim to bound \(|x|^{p+d}|K(x)|\). To this end, consider
\[|x_{k}|^{p+d}|K(x)|\leqslant\frac{1}{(2\pi)^{d}}\Big{\|}\frac{\partial^{p+d} \mathcal{F}K}{\partial u_{k}^{p+d}}\Big{\|}_{L^{1}(I_{1})}=\frac{1}{(2\pi)^{d }}\int_{I_{1}}\Big{|}\int\mathrm{e}^{\mathrm{i}\langle u,z\rangle}K(z)z_{k}^{ p+d}\,\mathrm{d}z\Big{|}\,\mathrm{d}u\lesssim 1,\]
It follows from the equivalence of norms \(|x|\lesssim|x|_{p+d}\) that
\[|x|^{p+d}|K(x)|\lesssim|x|_{p+d}^{p+d}|K(x)|=\sum_{k=1}^{d}|x_{k}|^{p+d}|K(x)| \lesssim 1. \tag{16}\]
Thus,
\[|K_{h}(x^{*})|\leqslant h^{-d}\sup_{|x|\geqslant|x^{*}|/h}|K(x)|\leqslant h^{ -d}\sup_{|x|\geqslant|x^{*}|/h}\frac{|x|^{p+d}}{|x^{*}/h|^{p+d}}|K(x)|\lesssim h ^{p}|x^{*}|^{-p-d}\]
and since \(U\) is bounded away from \(0\), this gives a uniform bound in \(x^{*}\) of the order \(h^{s}\) as \(p\geqslant s\).
#### 4.1.3 Controlling the bias
For \(x^{*}\in U\) and \(h|x|<\zeta\), we use a multivariate Taylor expansion of \(g\coloneqq|\cdot|^{2}\nu\in C^{s}(U_{\zeta})\) around \(x^{*}\) to obtain
\[g(x^{*}-hx)-g(x^{*})=\sum_{0<|\beta|_{1}<\lfloor s\rfloor}\frac{g^{(\beta)}(x^ {*})}{\beta!}(-hx)^{\beta}+\sum_{|\beta|_{1}=\lfloor s\rfloor}\frac{g^{(\beta) }(x^{*}-\tau_{x^{*}-hx}hx)}{\beta!}(-hx)^{\beta},\]
for some \(\tau_{x^{*}-hx}\in[0,1]\). The order of the kernel and the Holder regularity of \(g\) yield
\[|B^{\nu}(x^{*})| =\big{|}\big{(}K_{h}*(|\cdot|^{2}\nu)-|\cdot|^{2}\nu\big{)}(x^{*}) \big{|}\] \[=\Big{|}\int\big{(}g(x^{*}-hx)-g(x^{*})\big{)}K(x)\,\mathrm{d}x \Big{|}\] \[\leqslant\Big{|}\int_{|x|\geqslant\zeta/h}\Big{(}g(x^{*}-hx)-\sum_ {|\beta|_{1}\leqslant\lfloor s\rfloor}\frac{g^{(\beta)}(x^{*})}{\beta!}(-hx)^{ \beta}\Big{)}K(x)\,\mathrm{d}x\Big{|}\] \[\qquad\qquad+\Big{|}\int_{|x|<\zeta/h}\sum_{|\beta|_{1}=\lfloor s \rfloor}\frac{(-hx)^{\beta}}{\beta!}\big{(}g^{(\beta)}(x^{*}-\tau_{x^{*}-hx }hx)-g^{(\beta)}(x^{*})\big{)}K(x)\,\mathrm{d}x\Big{|}\] \[\lesssim\int_{|x|\geqslant\zeta/h}|g(x^{*}-hx)K(x)|\,\mathrm{d}x +\sum_{|\beta|_{1}\leqslant\lfloor s\rfloor}\frac{\|g^{(\beta)}\|_{L^{\infty}( U)}}{\beta!}\frac{h^{s}}{\zeta^{s-|\beta|_{1}}}\int_{|x|\geqslant\zeta/h}|x|^{s}|K(x)|\, \mathrm{d}x\]
For the first integral, we have
\[\int_{|x|\geqslant\zeta/h}|g(x^{*}-hx)K(x)|\,\mathrm{d}x\lesssim\int_{|x|\geqslant \zeta/h}|x^{*}-hx||K(x)|\,\mathrm{d}x\lesssim h^{s}\frac{1+\zeta}{\zeta^{s}}\int|x|^{s}|K(x)|\,\mathrm{d}x,\]
so that overall \(|B^{\nu}(x^{*})|\lesssim h^{s}\) uniformly in \(x^{*}\in U\).
#### 4.1.4 Controlling the remainder term
To bound \(R_{\delta,n}\) in (7), we first show that with \(a_{n}\) from Lemma 6
\[g\coloneqq\mathcal{F}^{-1}\big{[}\mathcal{F}K_{h}\big{(}\widehat{\Delta \psi_{n}}-\Delta\psi-\delta^{-1}\Delta((\widehat{\varphi}_{\delta,n}-\varphi_{ \delta})/\varphi_{\delta})\big{)}\big{]}=\mathcal{O}_{\mathbb{P}}\big{(}h^{-d} a_{n}\big{)}.\]
Let \(\varepsilon>0\). Owing to Lemma 6 we can choose \(N,M>0\) (w.l.o.g. \(M>\|K\|_{L^{1}}\)) such that the probability of the event \(A_{n}\coloneqq\{\sup_{|u|_{\infty}\leqslant h^{-1}}|\widetilde{g}(u)|>a_{n}M\}\) with \(\widetilde{g}\coloneqq\widehat{\Delta\psi_{n}}-\Delta\psi-\delta^{-1}\Delta(( \widehat{\varphi}_{\delta,n}-\varphi_{\delta})/\varphi_{\delta})\) is less than \(\varepsilon\) for \(n>N\). Due to the support of \(\mathcal{F}K\), we have on the complement \(A_{n}^{\mathrm{c}}\)
\[|g(x^{*})|=\frac{1}{(2\pi)^{d}}\Big{|}\int_{I_{h}}\mathrm{e}^{-\mathrm{i}(u, x^{*})}\mathcal{F}K(hu)\widetilde{g}(u)\,\mathrm{d}u\Big{|}\leqslant\frac{a_{n}M}{(2 \pi)^{d}}\int_{I_{h}}|\mathcal{F}K(hu)|\,\mathrm{d}u\leqslant h^{-d}a_{n}M\|K \|_{L^{1}}.\]
For \(M^{\prime}\coloneqq M\|K\|_{L^{1}}\), we obtain
\[\mathbb{P}\Big{(}\sup_{x^{*}\in U}|g(x^{*})|>h^{-d}a_{n}M^{\prime}\Big{)} \leqslant\varepsilon,\]
whereby the remainder term has the order proposed in (8).
In the mildly ill-posed case, we have \(\|\varphi_{\delta}^{-1}\|_{L^{\infty}(I_{h})}\lesssim h^{-\delta\alpha}\) and (9) implies \(\big{\|}|\nabla\psi|\big{\|}_{L^{\infty}(I_{h})}\lesssim 1\). Thus, we have
\[|R_{\delta,n}|=\mathcal{O}_{\mathbb{P}}\big{(}n^{-1}\delta^{-1/2}(\log h^{-1} )^{1+\chi}h^{-2\delta\alpha-d}\big{)}.\]
In the severely ill-posed case, \(\|\varphi_{\delta}^{-1}\|_{L^{\infty}(I_{h})}\lesssim\exp(r\delta h^{-\alpha})\) holds and (1) implies \(\big{\|}|\nabla\psi|\big{\|}_{L^{\infty}(I_{h})}\lesssim h^{-1}\). Hence,
\[|R_{\delta,n}|=\mathcal{O}_{\mathbb{P}}\big{(}n^{-1}\delta^{-1/2}(\log h^{-1} )^{1+\chi}\big{(}h^{-d}+\delta h^{-d-1}+\delta^{3/2}h^{-d-2}\big{)}\exp(2r \delta h^{-\alpha})\big{)}.\]
In both cases, the remainder term is dominated by the linearized stochastic error.
This completes the proof of Theorem 1.
### Proof of Proposition 2
For the modified estimator, we have to replace \(\operatorname{tr}(\Sigma)K_{h}\) with \(\big{(}\operatorname{tr}(\Sigma)-\widehat{\operatorname{tr}(\Sigma)}\big{)}K_{h}\) in the decomposition (7). All other terms are treated as before. Since we can bound \(|K_{h}(x^{*})|\) by \(h^{-d}\) uniformly in \(x^{*}\) and \(\delta\leqslant R\) is fixed, we only need to prove
\[\big{|}\widehat{\operatorname{tr}(\Sigma)}-\operatorname{tr}(\Sigma)\big{|}= \mathcal{O}_{\mathbb{P}}\big{(}(\log n)^{-(s+d)/\alpha}\big{)}.\]
Similarly to (7), the error for estimating the trace of \(\Sigma\) can be decomposed into
\[\operatorname{tr}(\Sigma)-\widehat{\operatorname{tr}(\Sigma)} =\int W_{h}(u)\big{(}\widehat{\Delta\psi_{n}}-\Delta\psi\big{)}(u )\operatorname{d}u+\int W_{h}(u)\big{(}\Delta\psi(u)+\operatorname{tr}(\Sigma) \big{)}\operatorname{d}u\] \[=\underbrace{\int W_{h}(u)\delta^{-1}\Delta\Big{(}\frac{\widehat {\varphi}_{\delta,n}(u)-\varphi_{\delta}(u)}{\varphi_{\delta}(u)}\Big{)}(u) \operatorname{d}u}_{=\widetilde{L}^{\nu}_{\delta,n}}+\widetilde{R}_{\delta,n} -\underbrace{\int W_{h}(u)\mathcal{F}[|x|^{2}\nu](u)\operatorname{d}u}_{= \widetilde{B}^{\nu}_{h}}\]
with the linearized stochastic error \(\widetilde{L}^{\nu}_{\delta,n}\), the bias \(\widetilde{B}^{\nu}_{h}\) and a remainder term \(\widetilde{R}_{\delta,n}\). Using the techniques from Section 4.1.1, it is straightforward to see
\[\mathbb{E}[|\widetilde{L}^{\nu}_{\delta,n}|] \lesssim\delta^{-1}n^{-1/2}\int|W_{h}(u)||\varphi_{\delta}^{-1}(u )|\big{(}\delta^{1/2}+\delta^{3/2}|\nabla\psi(u)|+\delta^{2}|\nabla\psi(u)|^{2 }+\delta\big{)}\operatorname{d}u.\] \[\lesssim\delta^{-1}n^{-1/2}\|\varphi_{\delta}^{-1}\|_{L^{\infty}( I_{h})}\big{(}\delta^{1/2}+\delta^{3/2}h^{-1}+\delta^{2}h^{-2}\big{)}\int|W_{h}(u)| \operatorname{d}u\] \[\lesssim n^{-1/2}\exp(r\delta h^{-\alpha})h^{-2}\int|W(u)| \operatorname{d}u,\]
which is of the order \(n^{-1/4}(\log n)^{2/\alpha}\) by our choice of \(h\) and will be dominated by the bias.
Using the Plancherel theorem in a similar fashion to Belomestny & Reiss (2015, Section 4.2.1), we have for \(g\coloneqq|\cdot|^{2}\nu\) and \(\beta\in\mathbb{N}_{0}^{d}\) with \(\beta_{l}=s\,\mathbbm{1}_{\{l=k\}}\)
\[|\widetilde{B}^{\nu}_{h}|\lesssim\Big{|}\int\mathcal{F}^{-1}[W_{h}(x)/x_{k}^{s }]\mathcal{F}^{-1}\big{[}x_{k}^{s}\mathcal{F}g(x)\big{]}(u)\operatorname{d}u \Big{|}\lesssim\|g^{(\beta)}\|_{\infty}\big{\|}\mathcal{F}^{-1}[W_{h}(x)/x_{k}^ {s}]\|_{L^{1}}.\]
By substitution
\[\mathcal{F}^{-1}[W_{h}(x)/x_{k}^{s}](u)=\frac{h^{s}}{(2\pi)^{d}}\mathcal{F}^{-1 }\big{[}W(x)/x_{k}^{s}\big{]}(u/h)\]
and therefore
\[|\widetilde{B}^{\nu}_{h}|\lesssim h^{s}\|g^{(\beta)}\|_{\infty}\big{\|} \mathcal{F}^{-1}[W(x)/x_{k}^{s}](\cdot/h)\big{\|}_{L^{1}}\lesssim h^{s+d}\|g^{( \beta)}\|_{\infty}\big{\|}\mathcal{F}^{-1}[W(x)/x_{k}^{s}]\big{\|}_{L^{1}}\lesssim h^ {s+d}.\]
Together with Lemma 6, we have
\[|\widetilde{R}_{\delta,n}| \lesssim\sup_{|u|_{\infty}\leqslant h^{-1}}\big{|}\widehat{ \Delta\psi_{n}}(u)-\Delta\psi(u)-\delta^{-1}\Delta\big{(}(\widehat{\varphi}_{ \delta,n}-\varphi_{\delta})/\varphi_{\delta}\big{)}(u)\big{|}\int|W_{h}(u)| \operatorname{d}u\] \[=\mathcal{O}_{\mathbb{P}}\big{(}n^{-1}(\log h^{-1})^{1+\chi}\| \varphi_{\delta}^{-1}\|_{L^{\infty}(I_{h})}^{2}h^{-2}\big{)}.\]
This term is dominated by the linearized stochastic error.
### Proof of Theorem 3
The distributional analogon to (7) is
\[\int f(x)|x|^{2}\widehat{\nu}_{h}(x)(\operatorname{d}x)-\int f(x)|x|^{2}\,\nu( \operatorname{d}x)\]
\[=\int f(x)\big{(}K_{h}*(|\cdot|^{2}\nu)\big{)}(x)\,{\rm d}x-\int f(y) |y|^{2}\nu({\rm d}y)\] \[\qquad+\int f(x){\cal F}^{-1}\big{[}{\cal F}K_{h}(\Delta\psi- \widehat{\Delta\psi}_{n})\big{]}(x)\,{\rm d}x+\int f(x)\,{\rm tr}(\Sigma)K_{h}(x )\,{\rm d}x\] \[=\int f(x)\big{(}K_{h}*(|\cdot|^{2}\nu)\big{)}(x)\,{\rm d}x-\int f( y)|y|^{2}\nu({\rm d}y)\] \[\qquad+\int f(x)L^{\nu}_{\delta,n}(x)\,{\rm d}x+\int f(x)R_{\delta,n}\,{\rm d}x+\int f(x)\,{\rm tr}(\Sigma)K_{h}(x)\,{\rm d}x\]
with the same \(L^{\nu}_{\delta,n}\) and \(R_{\delta,n}\), for which we will derive uniform upper bounds on \(U\) which directly translate into bounds when integrating against test functions due to their regularity. For the integrated bias, we use Fubini's theorem to obtain
\[B^{\nu}_{I} =\int f(x)\big{(}K_{h}*(|\cdot|^{2}\nu)\big{)}(x)\,{\rm d}x-\int f (y)|y|^{2}\nu({\rm d}y)\] \[=\int f(x)\Big{(}\int K_{h}(x-y)|y|^{2}\,\nu({\rm d}y)\Big{)}\,{ \rm d}x-\int f(y)|y|^{2}\,\nu({\rm d}y)\] \[=\int\big{(}\big{(}f*K_{h}(-\cdot)\big{)}(y)-f(y)\big{)}|y|^{2}\, \nu({\rm d}y)\] \[=\int\big{(}\big{(}K_{h}(-\cdot)*f\big{)}(y)-f(y)\big{)}|y|^{2}\, \nu({\rm d}y).\]
\(\big{(}\big{(}K_{h}(-\cdot)*f\big{)}(y)-f(y)\big{)}\) is of the order \((|y|\lor 1)h^{\varrho}\) which follows from the arguments in (7) with \(g=f\), \(\varrho\) and \(K_{h}(-\cdot)\) instead of \(|\cdot|^{2}\nu\), \(s\) and \(K_{h}\), respectively. Therefore,
\[|B^{\nu}_{I}|\lesssim h^{\varrho}\int(|y|\lor 1)|y|^{2}\,\nu({\rm d}y)\lesssim h ^{\varrho}\int|y|^{3}\,\nu({\rm d}y)\lesssim h^{\varrho}.\]
A key tool to control the linearized stochastic error \(L^{\nu}_{\delta,n}\) in Section 4.1.1 was (10), which we can still establish here by bounding the first four partial derivatives of \(\psi\) at the origin. Indeed, by (2) we have \(\frac{\partial\psi}{\partial u_{k}}(0)=0\) and similarly
\[\Big{|}\frac{\partial^{j}\psi}{\partial u_{k}^{j}}(u)\Big{|}\leqslant{\rm tr}( \Sigma)+\int|x|^{j}\,\nu({\rm d}x)={\rm tr}(\Sigma)+\sum_{l=1}^{2}\int|x_{l}|^{ j}\,\nu_{l}({\rm d}x_{l})\lesssim 1,\quad j=2,3,4,\,k=1,\ldots,d. \tag{17}\]
Hence, (10) holds. Additionally, (17) implies \(|\Delta\psi(u)|\lesssim 1\).
In the severely ill-posed case, we can still bound the gradient of \(\psi\) by
\[|\nabla\psi(u)|\lesssim|\Sigma u|+\int|x||{\rm e}^{{\rm i}(u,x)}-1|\,\nu({\rm d }x)\leqslant|u|+\int|\langle u,x\rangle||x|\,\nu({\rm d}x)\lesssim|u|,\]
and then apply the arguments from (11). Hence, the linearized stochastic error is of the same order as before.
In the mildly ill-posed case, (17) holds even for \(j=1\) and therefore \(|\nabla\psi(u)|\lesssim 1\). Continuing from (10) in the mildly ill-posed case requires the most significant changes. (9) now reads as
\[\nabla\psi(u)=\big{(}{\rm i}{\cal F}[x^{(1)}\nu_{1}](u^{(1)}),{\rm i}{\cal F} [x^{(2)}\nu_{2}](u^{(2)})\big{)}^{\top}.\]
The main difficulty is that
\[\int{\rm e}^{{\rm i}(u^{(k)},x^{(k)})}|x^{(k)}|^{j}\,\nu_{k}({\rm d}x^{(k)}), \qquad j,k=1,2\]
are constant in half of their arguments. Therefore, they cannot be finitely integrable as functions on \(\mathbb{R}^{d}\). In (12), a way out is to consider
\[\big{\|}m_{\delta,h}|\nabla\psi|\big{\|}_{L^{1}}\leqslant\sum_{k=1}^{2}\big{\|}m_ {\delta,h}(u)|\mathcal{F}[x^{(k)}\nu_{k}](u^{(k)})|\big{\|}_{L^{1}}. \tag{18}\]
Then, we apply the Cauchy-Schwarz inequality and Plancherel's theorem only on \(L^{2}(\mathbb{R}^{d/2})\) to obtain
\[\big{\|}m_{\delta,h}(u)|\mathcal{F}[x^{(1)}\nu_{1}](u^{(1)})| \big{\|}_{L^{1}}\] \[=\int\int\Big{|}\frac{\mathcal{F}K(hu)}{\varphi_{\delta}(u)}| \mathcal{F}[x^{(1)}\nu_{1}](u^{(1)})|\Big{|}\,\mathrm{d}u^{(1)}\,\mathrm{d}u^{ (2)}\] \[\leqslant\||\mathcal{F}[x^{(1)}\nu_{1}]|\|_{L^{2}(\mathbb{R}^{d/ 2})}\int_{[-h^{-1},h^{-1}]^{d/2}}\Big{(}\int_{[-h^{-1},h^{-1}]^{d/2}}\big{|} \varphi_{\delta}^{-1}(u)\big{|}^{2}\,\mathrm{d}u^{(1)}\Big{)}^{1/2}\,\mathrm{ d}u^{(2)}\] \[\lesssim h^{-\delta\alpha-3d/4}. \tag{19}\]
Analogously, the second summand in (18) has the same order. As a direct consequence,
\[\big{\|}m_{\delta,h}|\nabla\psi|^{2}\big{\|}_{L^{1}}\leqslant\sum_{k=1}^{2} \||x^{(k)}|\nu_{k}\|_{L^{1}}\big{\|}m_{\delta,h}|\nabla\psi|\big{\|}_{L^{1}} \lesssim h^{-\delta\alpha-3d/4}\]
and similarly to (18) and (19) the same holds for \(\|m_{\delta,h}\Delta\psi\|_{L^{1}}\). Recalling (12), we have
\[\mathbb{E}\Big{[}\sup_{x^{*}\in U}\big{|}M_{\delta,n}^{\nu}(x^{*})-L_{\delta,n}^{\nu}(x^{*})\big{|}\Big{]}\leqslant(2\pi)^{-d}\Big{(}2\mathbb{E}\big{[}\big{\|}m_{\delta,h}\nabla(\widehat{\varphi}_{\delta,n}-\varphi_{\delta})\cdot\nabla\psi\big{\|}_{L^{1}}\big{]}+\dots\Big{)}\lesssim n^{-1/2}h^{-\delta\alpha-3d/4}.\]
Note that we pay for the dependence structure with an additional \(h^{-d/4}\) compared to (14). The same happens when applying Bernstein's inequality to obtain the following adaptation of Lemma 7.
**Lemma 8**.: _Let \(\alpha,R,\zeta>0,m>4,U\subseteq\mathbb{R}^{d}\) and \(x\in\mathbb{R}^{d}\), let the kernel have product structure and satisfy (5) for \(p\geqslant 1\). If \((\Sigma,\gamma,\nu)\in\widetilde{\mathcal{D}}(\alpha,m,R)\), then there exists some constant \(c>0\) depending only on \(R,\alpha\) and \(d\) such that for any \(\kappa_{0}>0\) and any \(n\in\mathbb{N},0<\delta\leqslant R,h\in(0,1)\)_
\[\mathbb{P}\big{(}|M_{\delta,n}^{\nu}(x)|\geqslant\kappa_{0}T^{-1/2}h^{-\delta \alpha-3d/4}\big{)}\leqslant 2\exp\Big{(}-\frac{c\kappa_{0}^{2}}{(1+|x|^{3})(1+ \kappa_{0}(Th^{d/2})^{-1/2})}\Big{)}.\]
Carrying out the discretization argument from before, the linearized stochastic error in the mildly ill-posed case is of the order
\[|L_{\delta,n}^{\nu}|=\mathcal{O}_{\mathbb{P}}\Big{(}\Big{(}\frac{\log T}{T} \Big{)}^{1/2}h^{-\delta\alpha-3d/4}\Big{)}.\]
The term \(\mathrm{tr}(\Sigma)K_{h}\) is treated as in Section 4.1.2, just with \(\varrho\) instead of \(s\). No changes are necessary to treat the remainder term compared to Section 4.1.4. This is because, when treating the linearized stochastic error, we already showed that still \(|\nabla\psi(u)|,|\Delta\psi(u)|\lesssim 1\) in the mildly ill-posed case and \(|\nabla\psi(u)|\lesssim|u|,|\Delta\psi(u)|\lesssim 1\) in the severely ill-posed case. This concludes the proof of Theorem 3.
### Remaining proofs
#### 4.4.1 Proof of Proposition 5
The proof uses empirical process theory and is a combination of Kappus & Reiss (2010) and Belomestny & Trabs (2018).
To simplify the notation, write
\[C^{\beta}_{\rho,n}(u)\coloneqq n^{-1/2}\rho^{-(|\beta|_{1}\wedge 1)/2}\sum_{k=1}^{n }\frac{\partial^{\beta}}{\partial u^{\beta}}\big{(}\mathrm{e}^{\mathrm{i}\langle u,X_{k}\rangle}-\mathbb{E}[\mathrm{e}^{\mathrm{i}\langle u,X_{k}\rangle}]\big{)}\]
so that the assertion reads
\[\sup_{\begin{subarray}{c}n\geq 1,\\ 0<\rho\leq R\end{subarray}}\mathbb{E}\big{[}\big{\|}w(u)C^{\beta}_{\rho,n}(u) \big{\|}_{\infty}\big{]}<\infty.\]
We decompose \(C^{\beta}_{\rho,n}\) into its real and its imaginary part to obtain
\[\mathbb{E}\big{[}\big{\|}w(u)C^{\beta}_{\rho,n}(u)\big{\|}_{\infty}\big{]} \leqslant\mathbb{E}\big{[}\big{\|}w(u)\operatorname{Re}\big{(}C^{\beta}_{ \rho,n}(u)\big{)}\big{\|}_{\infty}\big{]}+\mathbb{E}\big{[}\big{\|}w(u) \operatorname{Im}\big{(}C^{\beta}_{\rho,n}(u)\big{)}\big{\|}_{\infty}\big{]}.\]
As both parts can be treated analogously, we focus on the real part. To this end, introduce the class of
\[\mathcal{G}_{\rho,\beta}\coloneqq\{g_{u}:u\in\mathbb{R}^{d}\}\qquad\text{ where}\qquad g_{u}\colon\mathbb{R}^{d}\to\mathbb{R},\qquad x\mapsto w(u)\rho^{-(|\beta|_{1} \wedge 1)/2}\,\frac{\partial^{\beta}}{\partial u^{\beta}}\cos(\langle u,x\rangle).\]
Since \(G=\rho^{-(|\beta|_{1}\wedge 1)/2}|\cdot|^{\beta}\) is an envelope function for \(\mathcal{G}_{\rho,\beta}\), van der Vaart (1998, Corollary 19.35) yields
\[\mathbb{E}\big{[}\big{\|}w(u)\operatorname{Re}\big{(}C^{\beta}_{\rho,n}(u) \big{)}\big{\|}_{\infty}\big{]}\lesssim J_{\|}\big{(}\mathbb{E}[G(X_{1})^{2} ]^{1/2},\mathcal{G}_{\rho,\beta}\big{)}\coloneqq\int_{0}^{\mathbb{E}[G(X_{1}) ^{2}]^{1/2}}\sqrt{\log N_{\|}(\varepsilon,\mathcal{G}_{\rho,\beta})}\,\mathrm{ d}\varepsilon,\]
where \(N_{\|}(\varepsilon,\mathcal{G}_{\rho,\beta})\) is the minimal number of \(\varepsilon\)-brackets (with respect to the distribution of \(X_{1}\)) needed to cover \(\mathcal{G}_{\rho,\beta}\).
Since \(|g_{u}(x)|\leqslant w(u)\rho^{-(|\beta|_{1}\wedge 1)/2}|x|^{\beta}\), the set \(\{g_{u}:|u|>B\}\) is covered by the bracket
\[[g_{0}^{-},g_{0}^{+}]\coloneqq\{g\colon\mathbb{R}^{d}\to \mathbb{R}\mid g_{0}^{-}(x)\leqslant g(x)\leqslant g_{0}^{+}(x)\,\forall x \in\mathbb{R}^{d}\}\qquad\text{for}\] \[g_{0}^{\pm}\coloneqq\pm\varepsilon\rho^{-(|\beta|_{1}\wedge 1)/2}| \cdot|^{\beta}\qquad\text{and}\qquad B\coloneqq B(\varepsilon)\coloneqq\inf \big{\{}b>0:\sup_{|u|\geqslant b}w(u)\leqslant\varepsilon\big{\}}.\]
To cover \(\{g_{u}:|u|\leqslant B\}\), we use for some grid \((u_{\rho,j})_{j\geqslant 1}\subseteq\mathbb{R}^{d}\) the functions
\[g_{\rho,j}^{\pm}\coloneqq\rho^{-(|\beta|_{1}\wedge 1)/2}\big{(}w(u_{\rho,j}) \frac{\partial^{\beta}}{\partial u_{\rho,j}^{\beta}}\cos(\langle u_{\rho,j}, \cdot\rangle)\pm\varepsilon|\cdot|^{\beta}\big{)}\mathbbm{1}_{\{|\cdot| \leqslant M\}}\pm\rho^{-(|\beta|_{1}\wedge 1)/2}|\cdot|^{\beta}\mathbbm{1}_{\{|\cdot|>M\}}.\]
where \(M\coloneqq\inf\{m:\rho^{-(|\beta|_{1}\wedge 1)}\mathbb{E}[|X_{1}|^{2\beta} \mathbbm{1}_{\{|X_{1}|>m\}}]\leqslant\varepsilon^{2}\}\). Owing to \(\mathbb{E}[|X_{1}|^{2\beta}]\lesssim\rho^{|\beta|_{1}\wedge 1}\), we have
\[\mathbb{E}\big{[}|g_{j}^{+}(X_{1})-g_{j}^{-}(X_{1})|^{2}\big{]}\leqslant 4 \varepsilon^{2}\big{(}\rho^{-(|\beta|_{1}\wedge 1)}\mathbb{E}[|X_{1}|^{2\beta}]+1 \big{)}\leqslant c\varepsilon^{2}\]
for some \(c>0\). Denote by \(C\) the Lipschitz constant of \(w\) and use the triangle inequality to see
\[\Big{|}w(u)\frac{\partial^{\beta}}{\partial u^{\beta}}\cos(\langle u,x\rangle )-w(u_{j})\frac{\partial^{\beta}}{\partial u_{\rho,j}^{\beta}}\cos(\langle u_{\rho,j}, x\rangle)\Big{|}\leqslant|x|^{\beta}(C+|x|)|u-u_{\rho,j}|.\]
Thus, \(g_{u}\in[g_{j}^{-},g_{j}^{+}]\) as soon as \((C+M)|u-u_{\rho,j}|\leqslant\varepsilon\). It takes at most \((\lceil B/\varepsilon_{0}\rceil)^{d}\)\(\ell^{2}\)-balls of radius \(d^{1/2}\varepsilon_{0}\) to cover the \(\ell^{2}\)-ball of radius \(B\) around \(0\). For \(\varepsilon_{0}=\varepsilon d^{-1/2}/(C+M)\), denote their centers by \((u_{\rho,j})_{j}\). To translate this into a cover of \(\{g_{u}:|u|\leqslant B\}\), we fix some \(g_{u}\) with \(|u|\leqslant B\). By construction, we can pick \(j\) such that \(|u-u_{\rho,j}|\leqslant d^{1/2}\varepsilon_{0}=\varepsilon/(C+M)\). The previous calculations show that \([g_{j}^{-},g_{j}^{+}]\) is a \(c^{1/2}\varepsilon\)-bracket containing \(g_{u}\) and therefore
\[N_{\|}(\varepsilon,\mathcal{G}_{\rho,\beta})\leqslant\big{(}\big{\lceil} \varepsilon^{-1}(cd)^{1/2}B(C+M)\big{\rceil}\big{)}^{d}+1.\]
It is straightforward to see \(B\leqslant\exp(\varepsilon^{-2/(1+\chi)})\). Further, \(m=(\varepsilon^{-2}\rho^{-(|\beta|_{1}\wedge 1)}\mathbb{E}[|X_{1}|^{2\beta}|X_{1}|^{ \tau}])^{1/\tau}\) is sufficient for
\[\rho^{-(|\beta|_{1}\wedge 1)}\mathbb{E}[|X_{1}|^{2\beta}\mathbb{1}_{\{|X_{1}| >m\}}]\leqslant m^{-\tau}\mathbb{E}[|X_{1}|^{2\beta}|X_{1}|^{\tau}]\leqslant \varepsilon^{2}\]
and thus \(M\leqslant(\varepsilon^{-2}\rho^{-(|\beta|_{1}\wedge 1)}\mathbb{E}[|X_{1}|^{2 \beta}|X_{1}|^{\tau}])^{1/\tau}\leqslant(\varepsilon^{-2}dc^{\prime})^{1/\tau}\) for some \(c^{\prime}>0\). Hence,
\[\log N_{\parallel}(\varepsilon,\mathcal{G}_{\rho,\beta})\lesssim 1+\log( \varepsilon^{-2/\tau-1})+\varepsilon^{-2/(1+\chi)}\lesssim 1+\varepsilon^{-2/(1+ \chi)}\]
implying
\[J_{\parallel}\big{(}\mathbb{E}[G(X_{1})^{2}]^{1/2},\mathcal{G}_{\rho,\beta} \big{)}=\int_{0}^{(\rho^{-(|\beta|_{1}\wedge 1)}\mathbb{E}[|X_{1}|^{2\beta}])^{1/2}} \sqrt{\log N_{\parallel}(\varepsilon,\mathcal{G}_{\rho,\beta})}\,\mathrm{d} \varepsilon<\infty.\qed\]
#### 4.4.2 Proof of Lemma 6
Setting \(g(y)=\log(1+y)\) (i.e., \(g^{\prime}(y)=(1+y)^{-1}\), \(g^{\prime\prime}(y)=-(1+y)^{-2}\)) and \(\xi=(\widehat{\varphi}_{\delta,n}-\varphi_{\delta})/\varphi_{\delta}\), we use
\[\nabla(g\circ\xi)(u)=g^{\prime}(\xi(u))\nabla\xi(u),\qquad\Delta(g\circ\xi)(u )=g^{\prime\prime}(\xi(u))(\nabla\xi)^{2}(u)+g^{\prime}(\xi(u))\Delta\xi(u)\]
and \(|(\nabla\xi)^{2}(u)|\leqslant|\nabla\xi(u)|^{2}\) to obtain for \(|\xi(u)|\leqslant 1/2\) that
\[\big{|}\Delta(g\circ\xi)(u)-\Delta\xi(u)\big{|}\leqslant|g^{\prime\prime}(\xi (u))||\nabla\xi(u)|^{2}+|g^{\prime}(\xi(u))-1||\Delta\xi(u)|\lesssim|\nabla \xi(u)|^{2}+|\xi(u)||\Delta\xi(u)|, \tag{20}\]
because
\[|g^{\prime}(y)-1|\leqslant 2|y|\qquad\text{and}\qquad|g^{\prime\prime}(y)| \leqslant 4\qquad\forall y\in\mathbb{C}:|y|\leqslant 1/2.\]
The latter statement holds, since \(1/2\leqslant|1+y|\). For the former statement, consider the expansion
\[g^{\prime}(y)=\frac{1}{1+y}=\sum_{k=0}^{\infty}(-y)^{k}\qquad\forall y\in \mathbb{C}:|y|\leqslant 1/2\]
to see
\[|g^{\prime}(y)-1|=\Big{|}\sum_{k=1}^{\infty}(-y)^{k}\Big{|}=\Big{|}-y\sum_{k=0 }^{\infty}(-y)^{k}\Big{|}=\Big{|}\frac{y}{1+y}\Big{|}\leqslant 2|y|.\]
Noting \(\widehat{\Delta\psi_{n}}-\Delta\psi=\delta^{-1}\Delta\log(\widehat{\varphi}_ {\delta,n}/\varphi_{\delta})\), (20) implies on the event \(\Omega_{n}:=\big{\{}\sup_{|u|_{\infty}\leqslant h^{-1}}|\xi(u)|\leqslant 1/2\big{\}}\)
\[\sup_{|u|_{\infty}\leqslant h^{-1}}\big{|}\delta(\widehat{\Delta\psi_{n}}- \Delta\psi)(u)-\Delta((\widehat{\varphi}_{\delta,n}-\varphi_{\delta})/\varphi_ {\delta})(u)\big{|}\lesssim\||\nabla\xi||_{L^{\infty}(I_{h})}^{2}+\|\xi\|_{L^ {\infty}(I_{h})}\|\Delta\xi\|_{L^{\infty}(I_{h})}.\]
To control the \(\xi\)-terms, we invoke Proposition 5 applied to the increments of the Levy process with \(\rho=\delta\) after verifying that the moments are of the appropriate order. Owing to the equivalence of norms, it is sufficient to show that with \(\tau=m-4>0\)
\[\mathbb{E}[|Y_{1,k}|^{2l+\tau}]\lesssim\delta^{l\wedge 1}\qquad\text{and} \qquad\mathbb{E}[|Y_{1,k}|^{2l}]\lesssim\delta^{l\wedge 1},\qquad k=1,\dots,d,\,l=0,1,2, \tag{21}\]
where \(Y_{1,k}\) is the \(k\)-th entry of \(Y_{1}\) and thus an increment with time difference \(\delta\) based on the Levy process \((L_{t,k})_{t\geqslant 0}\) with Levy measure \(\nu_{k}\). For \(l=1,2\), it follows from Figueroa-Lopez (2008, Theorem 1.1) that
\[\lim_{\delta\searrow 0}\delta^{-1}\mathbb{E}[|Y_{1,k}|^{2l+\tau}]=\lim_{\delta \searrow 0}\delta^{-1}\mathbb{E}[|L_{\delta,k}|^{2l+\tau}]=\int|x_{k}|^{2l+\tau }\,\nu_{k}(\mathrm{d}x_{k})\leqslant\int|x|^{2l+\tau}\,\nu(\mathrm{d}x) \lesssim R.\]
For \(l=0\), \(\mathbb{E}[|Y_{1,k}|^{\tau}]\lesssim\mathbb{E}[|Y_{1,k}|^{m}]\lesssim 1\) holds by our moment assumptions. The second condition in (21) was already checked at the beginning of Section 4.1.1.
Therefore, \(|\Delta\psi(u)|\lesssim 1\) yields
\[\|\xi\|_{L^{\infty}(I_{h})} =\mathcal{O}_{\mathbb{P}}\big{(}n^{-1/2}(\log h^{-1})^{(1+\chi)/2} \|\varphi_{\delta}^{-1}\|_{L^{\infty}(I_{h})}\big{)}, \tag{22}\] \[\||\nabla\xi||_{L^{\infty}(I_{h})}^{2} =\mathcal{O}_{\mathbb{P}}\big{(}n^{-1}(\log h^{-1})^{1+\chi}\| \varphi_{\delta}^{-1}\|_{L^{\infty}(I_{h})}^{2}\big{(}\delta+\delta^{2}\|| \nabla\psi|\|_{L^{\infty}(I_{h})}^{2}\big{)}\big{)},\] \[\|\Delta\xi\|_{L^{\infty}(I_{h})} =\mathcal{O}_{\mathbb{P}}\big{(}n^{-1/2}(\log h^{-1})^{(1+\chi)/2} \|\varphi_{\delta}^{-1}\|_{L^{\infty}(I_{h})}\big{(}\delta^{1/2}+\delta^{3/2} \big{\|}|\nabla\psi|\big{\|}_{L^{\infty}(I_{h})}+\delta^{2}\big{\|}|\nabla\psi| \big{\|}_{L^{\infty}(I_{h})}^{2}\big{)}\big{)}.\]
Combining (22) with \(n^{-1/2}(\log h^{-1})^{(1+\chi)/2}\|\varphi_{\delta}^{-1}\|_{L^{\infty}(I_{h})}\to 0\) gives \(\mathbb{P}(\Omega_{n})\to 1\), completing the proof.
#### 4.4.3 Proof of Lemma 7
For fixed \(x\in\mathbb{R}^{d}\), we want to apply Bernstein's inequality to
\[M^{\nu}_{\delta,n}(x)=-\sum_{l=1}^{n}\big{(}\xi_{l}-\mathbb{E}[\xi_{l}]\big{)} \qquad\text{with}\qquad\xi_{l}\coloneqq T^{-1}\mathcal{F}^{-1}\big{[}m_{\delta, h}(u)|Y_{l}|^{2}\mathrm{e}^{\mathrm{i}\langle u,Y_{l}\rangle}\big{]}(x).\]
Similar arguments to (13) reveal \(\|m_{\delta,h}\|_{L^{1}}\lesssim h^{-\delta\alpha-d}\), and with the quotient rule one finds the same order for \(\|\Delta m_{\delta,h}\|_{L^{1}}\) and \(\big\||\nabla m_{\delta,h}|\big\|_{L^{1}}\), which yields a deterministic bound on \(\xi_{l}\) via
\[|\xi_{l}| =T^{-1}|Y_{l}|^{2}\big{|}\mathcal{F}^{-1}\big{[}m_{\delta,h}(u) \mathrm{e}^{-\mathrm{i}\langle u,x\rangle}\big{]}(-Y_{l})\big{|}\] \[=T^{-1}\big{|}\mathcal{F}^{-1}\big{[}\Delta\big{(}m_{\delta,h}(u )\mathrm{e}^{-\mathrm{i}\langle u,x\rangle}\big{)}\big{]}(-Y_{l})\big{|}\] \[\leqslant T^{-1}\big{\|}\Delta\big{(}m_{\delta,h}(u)\mathrm{e}^{- \mathrm{i}\langle u,x\rangle}\big{)}\big{\|}_{L^{1}}\] \[\leqslant T^{-1}\big{(}\big{\|}\Delta m_{\delta,h}(u)\big{\|}_{L ^{1}}+2|x|_{1}\big{\|}\big{\|}\nabla m_{\delta,h}\big{\|}_{L^{1}}+|x|^{2}\|m_{ \delta,h}\|_{L^{1}}\big{)}\] \[\lesssim T^{-1}(1+|x|^{2})h^{-\delta\alpha-d}.\]
To bound the variance of \(\xi_{l}\), note that for the distribution \(\mathbb{P}_{\delta}\) of \(Y_{1}\), we have
\[\mathcal{F}[\mathrm{i}z_{k}\mathbb{P}_{\delta}]=\frac{\partial\varphi_{\delta }}{\partial u_{k}}=\delta\varphi_{\delta}\frac{\partial\psi}{\partial u_{k}}= \delta\mathcal{F}[\mathrm{i}z_{k}\nu]\varphi_{\delta}=\mathcal{F}[\mathcal{F}^ {-1}[\delta\mathcal{F}[\mathrm{i}z_{k}\nu]\varphi_{\delta}]]=\mathcal{F}[( \delta\mathrm{i}z_{k}\nu)*\mathbb{P}_{\delta}]\]
and therefore \(z_{k}\mathbb{P}_{\delta}=\delta\mu*\mathbb{P}_{\delta}\) with \(\mu(\mathrm{d}z)=z_{k}\nu(z)\mathrm{d}z\). It follows that
\[\int g(z)|z_{k}|\,\mathbb{P}_{\delta}(\mathrm{d}z)\leqslant\delta\|z_{k}\nu \|_{\infty}\|g\|_{L^{1}},\qquad\forall g\in L^{1}(\mathbb{R}^{d}).\]
Again, using similar arguments to (13) and the quotient rule, we also have \(\big{\|}\Delta m_{\delta,h}(u)\big{\|}_{L^{2}},\big{\|}\big{\|}\nabla m_{ \delta,h}\big{\|}_{L^{2}}\lesssim h^{-\delta\alpha-d/2}\). Thus, the Cauchy-Schwarz inequality and the Plancherel theorem imply
\[\mathrm{Var}(\xi_{l})\leqslant\mathbb{E}[|\xi_{l}|^{2}]\] \[\lesssim T^{-2}\sum_{k=1}^{d}\int|y|^{3}\big{|}\mathcal{F}^{-1} \big{[}m_{\delta,h}(u)\mathrm{e}^{-\mathrm{i}\langle u,x\rangle}\big{]}(-y) \big{|}^{2}|y_{k}|\,\mathbb{P}_{\delta}(\mathrm{d}y)\] \[\leqslant n^{-2}\delta^{-1}\sum_{k=1}^{d}\|z_{k}\nu\|_{\infty}\int |y|^{3}\big{|}\mathcal{F}^{-1}\big{[}m_{\delta,h}(u)\mathrm{e}^{-\mathrm{i} \langle u,x\rangle}\big{]}(y)\big{|}^{2}\,\mathrm{d}y\] \[\lesssim n^{-2}\delta^{-1}\big{\|}\big{\|}y|^{2}\mathcal{F}^{-1} \big{[}m_{\delta,h}(u)\mathrm{e}^{-\mathrm{i}\langle u,x\rangle}\big{]}(y) \big{\|}_{L^{2}}\big{\|}y|_{1}\mathcal{F}^{-1}\big{[}m_{\delta,h}(u)\mathrm{e} ^{-\mathrm{i}\langle u,x\rangle}\big{]}(y)\big{\|}_{L^{2}}\] \[\lesssim n^{-2}\delta^{-1}\Big{(}\sum_{k=1}^{d}\Big{\|}\frac{ \partial^{2}}{\partial u_{k}^{2}}\big{(}m_{\delta,h}(u)\mathrm{e}^{-\mathrm{ i}\langle u,x\rangle}\big{)}\Big{\|}_{L^{2}}\Big{)}\Big{(}\sum_{k=1}^{d}\Big{\|} \frac{\partial}{\partial u_{k}}\big{(}m_{\delta,h}(u)\mathrm{e}^{-\mathrm{i} \langle u,x\rangle}\big{)}\Big{\|}_{L^{2}}\Big{)}\] \[\lesssim n^{-2}\delta^{-1}h^{-2\delta\alpha-d}(1+|x|^{3}).\]
Now, Bernstein's inequality, e.g. van der Vaart (1998, Lemma 19.32) yields for a constant \(c^{\prime}>0\) and any \(\kappa>0\) that
\[\mathbb{P}\big{(}|M^{\nu}_{\delta,n}(x)|\geqslant\kappa\big{)}\leqslant 2\exp \Big{(}-\frac{Tc^{\prime}\kappa^{2}}{h^{-2\delta\alpha-d}(1+|x|^{3})+\kappa(1+|x| ^{2})h^{-\delta\alpha-d}}\Big{)},\]
which reads as the assertion if we choose \(\kappa=\kappa_{0}T^{-1/2}h^{-\delta\alpha-d/2}\) for any \(\kappa_{0}>0\) and set \(c=c^{\prime}/2\).
#### 4.4.4 Proof of Lemma 8
Fix \(x=(x^{(1)},x^{(2)})\) for \(x^{(1)},x^{(2)}\in\mathbb{R}^{d/2}\) and analogously split \(Y_{l}\) into its first and last \(d/2\) entries \(Y_{l}^{(1)}\) and \(Y_{l}^{(2)}\) with characteristic functions \(\varphi_{\delta,1}\) and \(\varphi_{\delta,2}\), respectively. Due to the product kernel, we obtain
\[\xi_{l}=T^{-1}|Y_{l}|^{2}\big{|}\mathcal{F}^{-1}\big{[}m_{\delta,h}(u)\mathrm{e }^{-\mathrm{i}(u,x)}\big{]}(-Y_{l})\big{|}=T^{-1}(A_{1}B_{2}+A_{2}B_{1})\]
with
\[A_{k}\coloneqq|Y_{l}^{(k)}|^{2}\mathcal{F}^{-1}[m_{\delta,h}^{ (k)}(u^{(k)})\mathrm{e}^{\mathrm{i}(u^{(k)},Y_{l}^{(k)})}](x^{(k)}),\qquad B_{ k}\coloneqq\mathcal{F}^{-1}[m_{\delta,h}^{(k)}(u^{(k)})\mathrm{e}^{\mathrm{i}(u^{(k)},Y_{l}^{(k)})}](x^{(k)}),\qquad\text{and}\] \[m_{\delta,h}^{(k)}(u^{(k)})\coloneqq\varphi_{\delta,k}^{-1}(u^{ (k)})\prod_{j=1+(k-1)d/2}^{(k+1)d/2}\mathcal{F}K^{j}(hu_{j}^{(k)}),\qquad k=1,2.\]
\(A_{1}\) and \(A_{2}\) are the same terms that appeared in the proof of Lemma 7 just with half the dimension and therefore
\[|A_{k}| \lesssim\|\varphi_{\delta,k}^{-1}\|_{L^{\infty}([-h^{-1},h^{-1}]^{d/2})}h^{-d/2}(1+|x^{(k)}|^{2}),\] \[\mathbb{E}[|A_{k}|^{2}] \lesssim\delta\|\varphi_{\delta,k}^{-1}\|_{L^{\infty}([-h^{-1},h^{-1}]^{d/2})}^{2}h^{-d/2}(1+|x^{(k)}|^{3}).\]
In a similar vein, \(m_{\delta,h}^{(1)}\) and \(m_{\delta,h}^{(2)}\) can be treated like \(m_{\delta,h}\) with half the dimension, leading to
\[|B_{k}|\lesssim\|m_{\delta,h}^{(k)}\|_{L^{1}(\mathbb{R}^{d/2})}\lesssim\| \varphi_{\delta,k}^{-1}\|_{L^{\infty}([-h^{-1},h^{-1}]^{d/2})}h^{-d/2}.\]
Note that
\[\prod_{k=1}^{2}\|\varphi_{\delta,k}^{-1}\|_{L^{\infty}([-h^{-1},h^{-1}]^{d/2})}=\|\varphi_{\delta}^{-1}\|_{L^{\infty}(I_{h})}.\]
Together, we have the deterministic bound \(|\xi_{l}|\lesssim T^{-1}h^{-\delta\alpha-d}(1+|x|^{2})\). Further, since \(A_{1}\) and \(B_{2}\) as well as \(A_{2}\) and \(B_{1}\) are independent, we obtain
\[\mathrm{Var}(\xi_{l})\lesssim T^{-2}(\mathrm{Var}(A_{1}B_{2})+ \mathrm{Var}(A_{2}B_{1})) \leqslant T^{-2}\big{(}\mathbb{E}[|A_{1}|^{2}]\mathbb{E}[|B_{2}|^ {2}]+\mathbb{E}[|A_{2}|^{2}]\mathbb{E}[|B_{1}|^{2}]\big{)}\] \[\lesssim n^{-2}\delta^{-1}h^{-2\delta\alpha-3d/2}(1+|x|^{3}).\]
Overall, Bernstein's inequality gives for a constant \(c^{\prime}>0\) and any \(\kappa>0\) that
\[\mathbb{P}\big{(}|M_{\delta,n}^{\nu}(x)|\geqslant\kappa\big{)}\leqslant 2\exp \Big{(}-\frac{Tc^{\prime}\kappa^{2}}{h^{-2\delta\alpha-3d/2}(1+|x|^{3})+\kappa(1+ |x|^{2})h^{-\delta\alpha-d}}\Big{)}.\]
The assertion follows by choosing \(\kappa=\kappa_{0}T^{-1/2}h^{-\delta\alpha-3d/4}\) for any \(\kappa_{0}>0\) and setting \(c=c^{\prime}/2\).
| |
2304.03540 | ChatPipe: Orchestrating Data Preparation Program by Optimizing
Human-ChatGPT Interactions | Orchestrating a high-quality data preparation program is essential for
successful machine learning (ML), but it is known to be time and effort
consuming. Despite the impressive capabilities of large language models like
ChatGPT in generating programs by interacting with users through natural
language prompts, there are still limitations. Specifically, a user must
provide specific prompts to iteratively guide ChatGPT in improving data
preparation programs, which requires a certain level of expertise in
programming, the dataset used and the ML task. Moreover, once a program has
been generated, it is non-trivial to revisit a previous version or make changes
to the program without starting the process over again. In this paper, we
present ChatPipe, a novel system designed to facilitate seamless interaction
between users and ChatGPT. ChatPipe provides users with effective
recommendation on next data preparation operations, and guides ChatGPT to
generate program for the operations. Also, ChatPipe enables users to easily
roll back to previous versions of the program, which facilitates more efficient
experimentation and testing. We have developed a web application for ChatPipe
and prepared several real-world ML tasks from Kaggle. These tasks can showcase
the capabilities of ChatPipe and enable VLDB attendees to easily experiment
with our novel features to rapidly orchestrate a high-quality data preparation
program. | Sibei Chen, Hanbing Liu, Weiting Jin, Xiangyu Sun, Xiaoyao Feng, Ju Fan, Xiaoyong Du, Nan Tang | 2023-04-07T08:33:08 | http://arxiv.org/abs/2304.03540v1 | # ChatPipe: Orchestrating Data Preparation Program by Optimizing Human-ChatGPT Interactions
###### Abstract.
Orchestrating a high-quality data preparation program is essential for successful machine learning (ML), but it is known to be time and effort consuming. Despite the impressive capabilities of large language models like ChatGPT in generating programs by interacting with users through natural language prompts, there are still limitations. Specifically, a user must provide specific prompts to iteratively guide ChatGPT in improving data preparation programs, which requires a certain level of expertise in programming, the dataset used and the ML task. Moreover, once a program has been generated, it is non-trivial to revisit a previous version or make changes to the program without starting the process over again.
In this paper, we present ChatPipe, a novel system designed to facilitate seamless interaction between users and ChatGPT. ChatPipe provides users with effective recommendation on next data preparation operations, and guides ChatGPT to generate program for the operations. Also, ChatPipe enables users to easily roll back to previous versions of the program, which facilitates more efficient experimentation and testing. We have developed a web application for ChatPipe and prepared several real-world ML tasks from Kaggle. These tasks can showcase the capabilities of ChatPipe and enable VLDB attendees to easily experiment with our novel features to rapidly orchestrate a high-quality data preparation program.
(**Prompt-O**), and VarianceThreshold (**Prompt-O**), which are tailored for feature engineering. We can see that these suggested operations are quite reasonable and coherent, _i.e._, first generating interaction features from columns, then scaling the generated features, and finally removing low-variance features.
In such a way, ChatPipe can speed up the time-consuming "trial-and-error" process of data prep by guiding ChatGPT to generate effective data prep program via Human-ChatGPT interactions.
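For illustration, such a sequence of operations could be written as the following scikit-learn snippet; the input file, the label column name Outcome, and the variance threshold are placeholders for this sketch, not ChatPipe's actual output.

```python
# A minimal sketch of the three suggested operations applied in sequence:
# interaction features, scaling, and removal of low-variance features.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.feature_selection import VarianceThreshold

df = pd.read_csv("diabetes.csv")                       # hypothetical input file
X, y = df.drop(columns=["Outcome"]), df["Outcome"]     # assumed label column

prep = Pipeline([
    ("interactions", PolynomialFeatures(interaction_only=True, include_bias=False)),
    ("scale", StandardScaler()),
    ("low_variance", VarianceThreshold(threshold=0.01)),
])
X_prep = prep.fit_transform(X)
```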
**Architecture of ChatPipe.** To support the aforementioned features, ChatPipe consists of three key modules. (1) _Operation Recommendation:_ Given a current program, ChatPipe recommends the most effective operations to improve the overall performance of the ML task. This module is powered by the techniques proposed in our research paper (Beng et al., 2017). (2) _Program Generation:_ Given the recommended operations, ChatPipe interacts with ChatGPT to generate an error-free data prep program that effectively implements the operations. (3) _Version Management:_ The Human-ChatGPT interactions would generate multiple program versions, including produced datasets and corresponding prompts. To enable users to roll back to any previous versions, ChatPipe visualizes the program versions, and develops techniques to speed-up program execution.
**Demonstration scenarios.** We build ChatPipe as a web application and demonstrate it by orchestrating data prep programs for real ML tasks from Kaggle, the most popular data science website. We prepare a collection of ML tasks, each of which contains a dataset, such as diabetes.csv in Figure 1. We allow the VLDB attendees to choose ML tasks, and then leverage ChatPipe to interact with ChatGPT for generating effective data prep programs. During the process, we illustrate the novel features of ChatPipe, including operation recommendation, program generation, and version management. A demonstration video can be found on YouTube1.
Footnote 1: [https://youtu.be/HwExA%ygY](https://youtu.be/HwExA%ygY)
To summarize, we make the following contributions. (1) We develop ChatPipe, a novel tool to optimize Human-ChatGPT interactions for data prep program orchestration. (2) ChatPipe can provide effective guidance to ChatGPT by suggesting operations to generate program, and enable users to jump over different program versions. (3) We deploy ChatPipe as a web app with user-friendly interface and demonstrate its utility on real data prep scenarios.
## 2. System Overview
The architecture of ChatPipe is shown in Figure 2. Given an ML task, such as training a classifier for diabetes prediction based on a dataset, ChatPipe is designed to optimize Human-ChatGPT interactions for orchestrating data prep programs. To achieve this goal, ChatPipe consists of three key modules. _Operation Recommendation_ provides users with effective recommendations on next data prep operations, such as dealing with missing values and feature engineering, based on the current program and dataset (Section 2.1). _Program Generation_ interacts with ChatGPT to generate an error-free data prep program that effectively implements the recommended operations (Section 2.2). _Version Management_ visualizes various program versions generated by the Human-ChatGPT interactions, and enables users to rapidly roll back to previous versions for more efficient experimentation and testing (Section 2.3).
### Operation Recommendation
The key challenge of _Operation Recommendation_ is two-fold. First, there are a variety of data prep operation types (_e.g._, _Scaling_ and _Discretization_), and even for one type, there are many possible operations (_e.g._MinMaxScaler and StandardScaler). Second, there could be many possible ways of recommending operations, which are highly dataset-specific and task-specific. To address the problem, we introduce a _reinforcement learning (RL)_ based approach that learns how to recommend operations through offline training. For inference, given a dataset and the current program, we use the trained model to suggest the most appropriate operations. More details of our approach can be found in our research paper (Beng et al., 2017).
**Reinforcement Learning Framework.** We formulate the operation recommendation problem as a sequential decision process by an _agent_. Based on a _policy_, the agent takes the current program \(P^{(i)}\) and dataset \(D^{(i)}\) as _state_, and performs an _action_, _i.e._, selecting an operation from a pre-defined search space. Then, the agent obtains _rewards_ from an _environment_, and updates its policy accordingly.
_State:_ We represent the state based on the current program \(P^{(i)}\) and the produced dataset \(D^{(i)}\). Specifically, we use statistical information as dataset features, and leverage a public pre-trained code representation model UniXcoder (Chen et al., 2017) to extract program features. Then, we concatenate the features to represent the state.
_Action:_ We design action as choosing an operation \(o_{i}\) from a pre-defined operation set \(O\), which transforms program \(P^{(i)}\) to \(P^{(i+1)}\).
_Reward:_ The reward is used as a signal of whether the actions performed are beneficial. We execute program \(P^{(i+1)}\) to evaluate its performance, _e.g._, F1-score of the trained classifier.
**Deep Q-Network (DQN).** We adopt a Deep Q-Network (DQN) framework to optimize the _policy_ function, as shown in Figure 2. Specifically, following the typical strategy in DQN, we adopt a neural network to approximate the value function in DQN, which takes state as input and produces a value for each action (_i.e._, operation). The neural network contains multiple fully connected layers with LeakyReLU as the activation function, and an output tanh layer.
Figure 2. Architecture of ChatPipe, which consists of three key modules, (a) Operation Recommendation, (b) Program Generation, and (c) Version Management.
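A value network with the described shape could be sketched as follows; the layer widths and the use of PyTorch are illustrative assumptions, not necessarily the choices made in ChatPipe.

```python
# A minimal Q-network sketch: fully connected layers with LeakyReLU activations
# and a tanh output producing one value per candidate operation (action).
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim: int, num_operations: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, num_operations), nn.Tanh(),  # one value per action
        )

    def forward(self, state):
        return self.net(state)
```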
**DQN Training and Inference.** Figure 2 illustrates the processes of DQN training and inference. Our DQN training algorithm uses an off-policy strategy that learns an \(\epsilon\)-greedy policy in multiple episodes. Specifically, in the \(i\)-th episode, it first computes current state \(s_{i}\), and then selects the greedy action that maximizes the value function with a probability \(1-\epsilon\), and a random action with probability \(\epsilon\). Given the selected action \(a_{i}\) (_i.e._, operation), the algorithm updates the program from \(P^{(i)}\) to \(P^{(i+1)}\), and executes \(P^{(i+1)}\) to obtain reward \(r_{i}\). Then, the parameters of DQN can be updated using stochastic gradient descent.
For inference, given a new program \(P^{(t)}\) and dataset \(D^{(t)}\), we extract their features to represent the state, and then utilize the trained model to compute values for the operations in our candidate set. Finally, we maintain the operations as a ranked list, and feed them to the _Program Generation_ module introduced below.
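The epsilon-greedy selection used during training and the ranking of candidate operations at inference time can be sketched as below, building on the illustrative network above; `operations` is a placeholder list of candidate operations.

```python
# Sketch of epsilon-greedy action selection (training) and ranked inference.
# `state` is assumed to be a torch tensor of the concatenated state features.
import random
import torch

def select_action(q_net, state, operations, epsilon=0.1):
    if random.random() < epsilon:                 # explore with probability epsilon
        return random.randrange(len(operations))
    with torch.no_grad():                         # otherwise take the greedy action
        return int(q_net(state).argmax())

def rank_operations(q_net, state, operations):
    with torch.no_grad():
        values = q_net(state)
    order = torch.argsort(values, descending=True)
    return [operations[i] for i in order.tolist()]   # ranked list of operations
```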
### Program Generation
_Program Generation_ interacts with ChatGPT to generate data prep programs through natural language _prompts_. The challenge here is how to guide ChatGPT to generate an error-free data prep program that effectively implements the operations. To address the challenge, we develop three components, _Operation Augmentation_, _Program Refinement_, and _Program Checking_, as described below.
**Operation Augmentation.** A straightforward method is to directly "translate" the best operation returned by _Operation Recommendation_. For example, for the StandardScaler operation, one prompt could be "_Standardize features by removing the mean and scaling to unit variance_". However, as the operation is from a constrained set of candidate operations (see Section 2.1), in some cases, the improvement in the operation may not be significant.
Fortunately, we have an interesting observation that ChatGPT may be helpful in these cases. Specifically, when we guide ChatGPT with a _coarse-grained_ prompt, _e.g._, "_Generate interaction features_", it may generate a customized program, such as df['AB'] = df['A'] * df['B'], instead of simply using predefined operations. This inspires us to organize operations into a two-level hierarchy, where the bottom layer is fine-grained operations in our pre-defined operations set (_e.g._, sklearn.preprocessing.MinMaxScaler), and the upper layer is the type of operators (_e.g._, Scaler). Based on the two-level hierarchy, we generate the prompts as follows. When the _confidence_ of the recommended operations is high, _e.g._, the output score of DQN is greater than a threshold, we select the fine-grained operations directly. Otherwise, we select the best operation type, which has the highest average score of its fine-grained operations, to generate a coarse-grained prompt.
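The two-level hierarchy and the confidence-based choice between a fine-grained and a coarse-grained prompt can be sketched as follows; the concrete hierarchy entries, the threshold value, and the prompt texts are illustrative assumptions rather than ChatPipe's actual configuration.

```python
# Sketch: fine-grained operations grouped under coarse operation types, and a
# prompt builder that falls back to a coarse-grained prompt at low confidence.
HIERARCHY = {
    "Scaler": {
        "sklearn.preprocessing.MinMaxScaler": "Scale each feature to the [0, 1] range.",
        "sklearn.preprocessing.StandardScaler":
            "Standardize features by removing the mean and scaling to unit variance.",
    },
    "FeatureInteraction": {
        "sklearn.preprocessing.PolynomialFeatures": "Generate interaction features.",
    },
}

def build_prompt(op_type, op_scores, threshold=0.5):
    best_op, best_score = max(op_scores.items(), key=lambda kv: kv[1])
    if best_score >= threshold:                      # confident: fine-grained prompt
        return HIERARCHY[op_type][best_op]
    return f"Apply a {op_type} step to the dataframe."   # coarse-grained prompt
```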
**Program Refinement.** This component asks the users to refine the generated program by injecting their domain knowledge or data prep experiences. For example, a user can utilize domain knowledge to determine how to perform discretization on BMI, _e.g._, "The BMI should be cut into \([0,18.5,25,30,100]\) with ['underweight', 'normal', 'overweight', 'obese']". Moreover, the users can also customize parameters of the recommended operations, either by refining the prompts or by updating the program.
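The refined prompt above corresponds, roughly, to the following pandas code; the dataframe and the new column name are assumed for this sketch.

```python
# Discretize BMI into the user-specified bins and labels.
import pandas as pd

df = pd.read_csv("diabetes.csv")   # hypothetical input dataframe
df["BMI_group"] = pd.cut(
    df["BMI"],
    bins=[0, 18.5, 25, 30, 100],
    labels=["underweight", "normal", "overweight", "obese"],
)
```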
**Program Checking.** This component checks whether the generated program has run-time errors, and fixes the errors if there is any. To this end, before sending the program to users, ChatPipe first runs the generated program in a local run-time environment. If some errors occur, _e.g._, "_NameError: name 'PolynomialFeatures' is not defined_.", ChatPipe sends the error log to ChatGPT, and ChatGPT can fix the error to provide an updated program. Consider the above example again. ChatGPT will add one line in the head of program: "_from sklearn.preprocessing import PolynomialFeatures_". It iteratively runs this process until the program is error-free.
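A simplified version of this check-and-fix loop might look as follows, where `ask_chatgpt` is a placeholder for the actual API call used by ChatPipe.

```python
# Run the generated script; if it fails, send the error log back to the model
# and retry with the corrected program, up to a fixed number of rounds.
import subprocess, sys

def check_and_fix(program: str, ask_chatgpt, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        proc = subprocess.run([sys.executable, "-c", program],
                              capture_output=True, text=True)
        if proc.returncode == 0:          # program is error-free
            return program
        error_log = proc.stderr           # e.g. a NameError traceback
        program = ask_chatgpt(
            f"The following program failed with this error:\n{error_log}\n"
            f"Please fix it.\n\n{program}"
        )
    return program
```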
### Version Management
Human-ChatGPT interactions would generate multiple versions of the program, including the produced datasets and the corresponding prompts. To enable users to roll back to any previous versions, _Version Management_ is designed to visualize the program versions and support fast version switching. This module has two components: (1) Program Versioning, which maintains versions between programs, and (2) Data Caching, which accelerates the execution by caching intermediate data variables across versions.
**Program Versioning.** We use a relational database to store version-related information for each ML task, where each version contains the corresponding program, execution results, and the prompts. We also maintain the relationship between versions, which can be used to visualize the versions as a tree structure (see Figure 2). Moreover, users can add new versions by interacting with ChatGPT, delete useless versions, and can choose any two versions to compare their differences in program, data and prompts.
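A minimal relational layout for such version records could look as follows; the table design is an illustrative sketch, not ChatPipe's actual schema.

```python
# Store each version's program, prompt and result, plus a parent pointer that
# encodes the version tree.
import sqlite3

conn = sqlite3.connect("chatpipe_versions.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS versions (
    version_id INTEGER PRIMARY KEY AUTOINCREMENT,
    task_id    TEXT NOT NULL,           -- ML task this version belongs to
    parent_id  INTEGER,                 -- parent version, NULL for the root
    prompt     TEXT,                    -- prompt that produced this version
    program    TEXT NOT NULL,           -- generated data prep program
    f1_score   REAL,                    -- executed result
    FOREIGN KEY (parent_id) REFERENCES versions(version_id)
);
""")
conn.commit()
```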
**Data Caching across Versions.** There could be slight differences among program versions, as the versions may have a large proportion of code-blocks in common. Thus, it is natural to cache some intermediate variables, which are likely to be reused across versions, so as to enable fast version switching.
We have studied two technical challenges in data caching across versions. Firstly, some studies have shown that it is unrealistic to store all the intermediate variables (Brock et al., 2018). Thus, it is necessary to select the most "beneficial" variables for caching. To address the challenge, we formulate the caching decision as a binary classification problem, which, given a data variable, estimates the probability that the variable is beneficial to cache. We devise a deep neural network to predict the caching probability, taking as features the variable's storage size, calculation time, previous operations and the accuracy of the downstream ML task. The second challenge is that we should determine when a cached variable can be reused. For example, if the code-block that computes the variable is changed, the cached data is then outdated, and thus cannot be reused. To address this, we use an Abstract Syntax Tree (AST) to analyze the trace by which each variable is computed, and use an optimized maximum flow algorithm (Brock et al., 2018) to judge whether a variable should be reused or recalculated.
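The trace-comparison idea can be illustrated with the simplified sketch below; ChatPipe's actual decision additionally relies on the maximum flow formulation, which is not reproduced here.

```python
# Simplified staleness check: fingerprint the AST of the code block that
# computed a variable and reuse the cached value only if it is unchanged.
import ast, hashlib

def trace_fingerprint(code_block: str) -> str:
    tree = ast.parse(code_block)               # raises SyntaxError on broken code
    return hashlib.sha256(ast.dump(tree).encode()).hexdigest()

def can_reuse(cached_fingerprint: str, current_code_block: str) -> bool:
    return cached_fingerprint == trace_fingerprint(current_code_block)
```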
## 3. Demonstration Scenarios
For our upcoming demonstration, we have curated a selection of 62 machine learning tasks from over 10 diverse domains, including education, medicine and finance, sourced from Kaggle. Attendees at VLDB are welcome to choose a dataset or domain that they are most comfortable working with.
Next, we will provide a concrete example of how to use ChatPipe, highlighting two key features. Firstly, ChatPipe can assist users in their interaction with ChatGPT by suggesting the next operations.
Secondly, ChatPipe can manage user history, allowing them to roll back to a previous version and compare the differences.
**Machine learning tasks.** Our sample machine learning task is to predict whether a patient has diabetes [2]. It contains 769 tuples and eight numeric diagnostic measurements as features. We use sklearn.model_selection.train_test_split to split the train set and the test set, with random_state = 0 by default.
**Setup.** The user uploads the diabetes dataset to ChatPipe and specifies the column name or index to be used for classification. She can then generate the initial version of the code by either writing it by hand in the Code Box or uploading an existing code. Alternatively, she can use ChatPipe to generate the code for them. Once the code is generated, it can be executed and saved. Subsequently, the user iteratively interacts with ChatPipe to improve the F1 score for the classification task.
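For reference, the basic task setup described above corresponds, roughly, to the following snippet; the classifier choice and the label column name Outcome are assumptions made only for illustration.

```python
# Split the diabetes data with random_state=0 and evaluate a classifier via F1.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

df = pd.read_csv("diabetes.csv")
X, y = df.drop(columns=["Outcome"]), df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f1_score(y_test, clf.predict(X_test)))
```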
Figure 3 illustrates an example where the user has interacted with ChatPipe four times and is about to initiate their fifth interaction, as described below.
_(1) Next Operation Recommendation._ In Figure 3, the user confirms the operation (or prompt) recommended by ChatPipe, _"Remove the outlier value 0 in columns Glucose \(\cdots\)"_, which will be sent to ChatGPT to update the code, as shown in the Code Box of Figure 3. The light-blue section highlights the added outlier handling operation.
Afterwards, the user executes the new code and observes an improvement in F1 (0.674 vs. 0.614). Then, the user enters the "/" symbol, which triggers ChatPipe to display several recommended operations (_e.g._, text bubbles above the input box). The operations are sorted and displayed by ChatPipe. After selecting an operation (in this case, "Discretization"), the corresponding prompt ("Discretize the features") is automatically filled into the input box.
The user can also apply domain knowledge and add an additional prompt, such as _"The BMI should be cut into \([0,18.5,25,30,100]\) with ['underweight', 'normal', 'overweight', 'obese']"_.
_(2) Program Version Management._ ChatPipe stores all updated and executed programs, which can be viewed in the code editor, and the executed results (_e.g._, F1 score) of different programs can be viewed as a statistical line chart. ChatPipe manages this information as versions, which are organized into a version tree. Users can change the current version by clicking on the relevant node. In Figure 3, the user starts with the initialization version of the code root, generated automatically by ChatPipe. She then adds a scaling operation via a prompt, followed by a feature augmentation operation. During the third interaction, she decides to explore different options and clicks on the root node to return to the original version. Then, she selects the outlier handling operation for further exploration.
|
Orchestrating a high-quality data preparation program is essential for successful machine learning, but it is known to require considerable time and effort. Large language models such as ChatGPT are impressive at generating programs from natural-language prompts with users, but limitations remain. In particular, users must provide specific prompts to guide ChatGPT toward an improved data preparation program, which requires a certain level of expertise in programming, the dataset in use, and the ML task. Moreover, once a program has been generated, it is not easy to revisit earlier versions or to make changes. This paper proposes ChatPipe, a novel system that facilitates seamless interaction between users and ChatGPT. ChatPipe provides users with effective suggestions for the next data preparation operations and guides ChatGPT to generate programs for those operations. Furthermore, |
2308.12798 | A preplanned multi-stage platform trial for discovering multiple
superior treatments with control of FWER and power | There is a growing interest in the implementation of platform trials, which
provide the flexibility to incorporate new treatment arms during the trial and
the ability to halt treatments early based on lack of benefit or observed
superiority. In such trials, it can be important to ensure that error rates are
controlled. This paper introduces a multi-stage design that enables the
addition of new treatment arms, at any point, in a pre-planned manner within a
platform trial, while still maintaining control over the family-wise error
rate. This paper focuses on finding the required sample size to achieve a
desired level of statistical power when treatments are continued to be tested
even after a superior treatment has already been found. This may be of interest
if there are other sponsors' treatments which are also superior to the current
control or multiple doses being tested. The calculations to determine the
expected sample size are given. A motivating trial is presented in which the
sample size of different configurations is studied. Additionally the approach
is compared to running multiple separate trials and it is shown that in many
scenarios if family wise error rate control is needed there may not be benefit
in using a platform trial when comparing the sample size of the trial. | Peter Greenstreet, Thomas Jaki, Alun Bedding, Pavel Mozgunov | 2023-08-24T13:57:06 | http://arxiv.org/abs/2308.12798v1 | A preplanned multi-stage platform trial for discovering multiple superior treatments with control of FWER and power
###### Abstract
There is a growing interest in the implementation of platform trials, which provide the flexibility to incorporate new treatment arms during the trial and the ability to halt treatments early based on lack of benefit or observed superiority. In such trials, it can be important to ensure that error rates are controlled. This paper introduces a multi-stage design that enables the addition of new treatment arms, at any point, in a pre-planned manner within a platform trial, while still maintaining control over the family-wise error rate. This paper focuses on finding the required sample size to achieve a desired level of statistical power when treatments are continued to be tested even after a superior treatment has already been found. This may be of interest if there are other sponsors treatments which are also superior to the current control or multiple doses being tested. The calculations to determine the expected sample size is given. A motivating trial is presented in which the sample size of different configurations is studied. Additionally the approach is compared to running multiple separate trials and it is shown that in many scenarios if family wise error rate control is needed there may not be benefit in using a platform trial when comparing the sample size of the trial.
## 1 Introduction
Platform trials are a type of trial design which can aim to reduce the amount of time and cost of clinical trials and in recent years, there has been an increase in the utilization of such trials, including during the COVID-19 pandemic (Stallard et al., 2020; Lee et al., 2021). Clinical trials take many years to run and can cost billions of dollars (Mullard, 2018). During this time it is not uncommon for new promising treatments to emerge and become ready to join the current phase later (Choodari-Oskooei et al., 2020). Therefore it may be advantageous to include these treatments into an ongoing trial. This can have multiple potential benefits including: shared trial infrastructure; the possibility to use a shared control group; less administrative
and logistical effort than setting up separate trials and enhance the recruitment (Burnett et al., 2020; Meurer et al., 2012). This results in useful therapies potentially being identified faster while reducing cost and time (Cohen et al., 2015).
There is an ongoing discussion about how to add new treatments to clinical trials (Cohen et al., 2015; Lee et al., 2021) in both a pre-planned and in an unplanned manor (Greenstreet et al., 2021; Burnett et al., 2020). In both Bennett and Mander (2020); Choodari-Oskooei et al. (2020) approaches are proposed which extend the Dunnett test (Dunnett, 1955) to allow for unplanned additional arms to be included into multi-arm trials while still controlling the family wise error rate (FWER). This methodology does not incorporate the possibility of interim analyses.
FWER is often considered to be one of the strongest types of type I error control in a multi-arm trial (Wason et al., 2016). There are other approaches one may wish to consider such as pairwise error rate (PWER) and the false discovery rate (FDR) (Robertson et al., 2022; Cui et al., 2023; Bratton et al., 2016; Choodari-Oskooei et al., 2020). However as discussed in Wason et al. (2014) there are scenarios where FWER is seen as the recommended error control, and it can be a regulatory requirement.
One may wish to include interim analyses as they allow for ineffective treatments to be dropped for futility earlier and allow treatments to stop early if they are found superior to the control. Therefore potentially improving the efficiency of design of a clinical trial by decreasing the expected sample sizes and costs of a trial (Pocock, 1977; Todd et al., 2001; Wason et al., 2016). Multi-arm multi-stage (MAMS) designs (Magir et al., 2012; Royston et al., 2003) allow interim analyses while still allowing several treatments to be evaluated within one study, but do not allow for additional arms to be added throughout the trial. Burnett et al. (2020) have developed an approach that builds on Hommel (2001) to incorporate unplanned additional treatment arms to be added to a trial already in progress using the conditional error principle (Proschan and Hunsberger, 1995). This allows for modifications during the course of a trial. However due to the unplanned nature of the adaptation, later treatments can be greatly underpowered compared to arms which begin the trial.
In a recent paper, Greenstreet et al. (2021) proposed a preplanned approach to adding additional arms in which interim analyses can be conducted and multiple arms can be evaluated, with some arms being added at later time points. In that work the trial was powered assuming that only one treatment may be taken forward. However, as discussed by Urach and Posch (2016) and Serra et al. (2022), this may not always be the case. For example, one may be interested in lower doses; in multiple treatments from different sponsors; or in whether another treatment has preferable secondary outcomes provided it also meets the primary outcome. Furthermore, in Greenstreet et al. (2021) treatment arms can only be added when an interim analysis happens, which can greatly restrict when arms can join the trial, resulting in potentially long periods during which a new treatment is available but cannot yet join the trial because no interim analysis has been conducted.
In this work, we provide an analytical method for adding treatments at any point to a multi-arm multi-stage trial in a pre-planned manner, while still controlling the statistical errors. This work focuses on trials in which one is interested in continuing to investigate the other treatments even after a superior treatment has been found. In addition, multiple types of power are considered, and we prove that the conjunctive power of the study is
at its lowest for a given sample size when all the active treatments have a clinically relevant effect, where the conjunctive power is the probability of finding all the active treatments with a clinically relevant effect. The methodology discussed here can be used to create multiple designs for each point the additional treatments may be added into the trial. This is due to the model flexibility, as the additional arms do not need to be added when a interim happens, resulting in new active arms being able to be added faster into the platform trial.
This work will focus predominantly on the case where one has an equal allocation ratio across all the active treatments and the same number of interim analyses per treatment with the same boundary shape. This helps mitigate issues with time trends (Altman and Royston, 1988; Getz and Campo, 2017) that can arise when changing the allocation ratio mid-trial (Proschan and Evans, 2020; Roig et al., 2023). The proposed methodology is nevertheless general and can be implemented when the allocation ratio is not equal across all the active treatments, although one then needs to be cautious of potential time trend effects.
We begin this work by analytically calculating the FWER and power of the study and use these to calculate both the stopping boundaries and the sample size. In Section 2.4 the equations for the sample size distribution and the expected sample size are given. In Section 3 the FLAIR trial (Howard et al., 2021) is used to motivate a hypothetical trial of interest. The sample size and stopping boundaries are found for multiple types of power control and the effect of different treatment effects is studied. The trial designs are then compared to running multiple separate trials. Finally, Section 4 provides a discussion of the paper and introduces areas for further research.
## 2 Methodology
### Setting
In the clinical trial design considered in this work, the effectiveness of \(K\) experimental arms is compared to a common control arm. The trial has \(K^{\star}\) treatments starting at the beginning of the trial, with the remaining \(K-K^{\star}\) treatments added at later points to the platform. The primary outcome measured for each patient is assumed to be independent, continuous, and to follow a normal distribution with known variance (\(\sigma^{2}\)).
The points at which each active treatment arm is added are predetermined, but can be set to any point within the trial. Each of the \(K\) treatments is potentially tested at a series of analyses indexed by \(j=1,\ldots,J_{k}\), where \(J_{k}\) is the maximum number of analyses for a given treatment \(k=1,\ldots,K\). Let \(n(k)\) denote the number of patients recruited to the control treatment before treatment \(k\) is added to the platform trial and define the vector of adding times by \(\mathbf{n}(\mathbf{K})=(n(1),\ldots,n(K))\). Therefore for treatments that start at the beginning of the trial \(n(k)=0\). We also denote \(n_{k,j}\) as the number of patients recruited to treatment \(k\) by the end of its \(j^{\text{th}}\) stage and define \(n_{0,k,j}\) as the total number of patients recruited to the control at the end of treatment \(k\)'s \(j^{\text{th}}\) stage. We define \(n_{k}=n_{k,1}\) as the number recruited to the first stage of treatment \(k\), \(k=1,\ldots,K\). Similarly we define \(r_{k,j}\) and \(r_{0,k,j}\) as the ratios of patients recruited to treatment \(k\) and to the control by treatment \(k\)'s \(j^{\text{th}}\) stage, respectively. Also \(r(k)\) denotes the ratio of patients recruited to the control before treatment \(k\) is added to the trial. For example, if a trial was planned to have an equal number of patients per stage and a treatment
(\(k^{\prime}\)) was added at the first interim then \(r(k^{\prime})=1\) and at the first stage for \(k^{\prime}\), \(r_{0,k^{\prime},1}=2\). The total sample size of a trial is denoted by \(N\). The maximum total planned sample size is \(\max(N)=\sum_{k=1}^{K}n_{k,J_{k}}+\max_{k\in 1,\ldots,K}(n_{0,k,J_{k}})\).
The control arm is recruited to throughout the entire duration of the trial. The comparisons between the control arm and the active treatment arms are based on concurrent controls, meaning that only participants recruited to the control arm at the same time as the corresponding active arm are used in the comparisons. Work on the use of non-concurrent controls includes Lee and Wason (2020); Marschner and Schou (2022).
The null hypotheses of interest are \(H_{01}:\mu_{1}\leq\mu_{0},H_{02}:\mu_{2}\leq\mu_{0},...,H_{0K}:\mu_{K}\leq\mu_{0}\), where \(\mu_{1},\ldots,\mu_{K}\) are the mean responses on the \(K\) experimental treatments and \(\mu_{0}\) is the mean response of the control group. The global null hypothesis, \(\mu_{0}=\mu_{1}=\mu_{2}=\ldots=\mu_{K}\), is denoted by \(H_{G}\). At analysis \(j\) for treatment \(k\), to test \(H_{0k}\) it is assumed that responses \(X_{k,i}\) from patients \(i=1,\ldots,n_{k,j}\) are observed, as well as the responses \(X_{0,i}\) from patients \(i=n(k)+1,\ldots,n_{0,k,j}\). The latter are the outcomes of the patients allocated to the control who were recruited after treatment \(k\) was added to the trial, up to the \(j^{\text{th}}\) analysis of treatment \(k\). The null hypotheses are tested using the test statistics
\[Z_{k,j}=\frac{n_{k,j}^{-1}\sum_{i=1}^{n_{k,j}}X_{k,i}-(n_{0,k,j}-n(k))^{-1} \sum_{i=n(k)+1}^{n_{0,k,j}}X_{0,i}}{\sigma\sqrt{(n_{k,j})^{-1}+(n_{0,k,j}-n(k) )^{-1}}}.\]
Decision-making in the trial is based on upper and lower stopping boundaries, denoted as \(U_{k}=(u_{k,1},\ldots,u_{k,J_{k}})\) and \(L_{k}=(l_{k,1},\ldots,l_{k,J_{k}})\). These boundaries are used to determine whether to continue or halt a treatment arm, or even the whole trial, at various stages. The decision-making process is as follows: if the test statistic for treatment \(k\) at stage \(j\) exceeds the upper boundary \(u_{k,j}\), the null hypothesis \(H_{0k}\) is rejected, and the treatment is stopped with the conclusion that it is superior to the control. Conversely, if \(Z_{k,j}\) falls below the lower boundary \(l_{k,j}\), treatment \(k\) is stopped for futility for all subsequent stages of the trial. If neither the superiority nor futility conditions are met, \(l_{k,j}\leq Z_{k,j}\leq u_{k,j}\), treatment \(k\) proceeds to its next stage \(j+1\). If all the active treatments are stopped then the trial stops. These bounds are chosen to control the desired type I error for the trial. In this work we focus on the familywise error rate (FWER) as the type I error control, as discussed in Section 2.2.
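To make this concrete, the short sketch below (our own R illustration, not code from the paper) computes a stage-\(j\) test statistic for one treatment from simulated responses and applies the stopping rule; the boundary values are the first-stage boundaries from Table 1, used purely as placeholders.

```r
# Minimal sketch, assuming known sigma and concurrent controls only.
set.seed(1)
sigma  <- 1
x_trt  <- rnorm(76, mean = 0.37, sd = sigma)  # responses on treatment k up to stage j
x_ctrl <- rnorm(76, mean = 0.00, sd = sigma)  # concurrent controls, i = n(k)+1, ..., n_{0,k,j}

# Test statistic Z_{k,j} as defined above
z <- (mean(x_trt) - mean(x_ctrl)) /
  (sigma * sqrt(1 / length(x_trt) + 1 / length(x_ctrl)))

u <- 2.501  # illustrative upper (efficacy) boundary u_{k,j}
l <- 0.834  # illustrative lower (futility) boundary l_{k,j}

if (z > u) "stop treatment k: superior" else
  if (z < l) "stop treatment k: futile" else "continue to stage j + 1"
```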
### Familywise error rate (FWER)
The FWER in the strong sense is a type I error criterion for a trial with multiple hypotheses and is defined as
\[P(\text{reject at least one true }H_{0k}\text{ under any null configuration},k=1,\ldots,K)\leq\alpha. \tag{2.1}\]
where \(\alpha\) is the desired level of control for the FWER. As proven in Greenstreet et al. (2021), which builds on Magirr et al. (2012), controlling the FWER under the global null controls it in the strong sense, as shown in the Supporting Information Section 1. The FWER under the global null hypothesis is equal to
\[1-\sum_{\begin{subarray}{c}j_{k}=1\\ k=1,2,\ldots,K\end{subarray}}^{J_{k}}\Phi(\mathbf{L_{j_{k}}(0)},\mathbf{U_{j_{k}} (0)},\Sigma_{\mathbf{j_{k}}}). \tag{2.2}\]
Here \(\Phi(\cdot)\) denotes the multivariate standard normal distribution function, and \(\mathbf{j_{k}}=(j_{1},\ldots,j_{K})\). With \(\mathbf{j_{k}}\) one can define the vectors of upper and lower limits for the multivariate standard normal distribution function as \(\mathbf{U_{j_{k}}(0)}=(U_{1,j_{1}}(0),\ldots,U_{K,j_{K}}(0))\) and \(\mathbf{L_{j_{k}}(0)}=(L_{1,j_{1}}(0),\ldots,L_{K,j_{K}}(0))\), where \(U_{k,j_{k}}(0)=(u_{k,1},\ldots,u_{k,j_{k}-1},l_{k,j_{k}})\) and \(L_{k,j_{k}}(0)=(l_{k,1},\ldots,l_{k,j_{k}-1},-\infty)\), respectively. \(U_{k,j_{k}}(0)\) and \(L_{k,j_{k}}(0)\) represent the upper and lower limits for treatment \(k\) given \(j_{k}\). The complete correlation structure of the correlation matrix \(\Sigma_{\mathbf{j_{k}}}\) is
\[\Sigma_{\mathbf{j_{k}}}=\begin{pmatrix}\rho_{(1,1),(1,1)}&\rho_{(1,1),(1,2)}& \ldots&\rho_{(1,1),(1,j_{1})}&\rho_{(1,1),(2,1)}&\ldots&\rho_{(1,1),(K,j_{k})} \\ \rho_{(1,2),(1,1)}&\rho_{(1,2),(1,2)}&\ldots&\rho_{(1,2),(1,j_{1})}&\rho_{(1, 2),(2,1)}&\ldots&\rho_{(1,2),(K,j_{k})}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ \rho_{(1,j_{1}),(1,1)}&\rho_{(1,j_{1}),(1,2)}&\ldots&\rho_{(1,j_{1}),(1,j_{1} )}&\rho_{(1,j_{1}),(2,1)}&\ldots&\rho_{(1,j_{1}),(K,j_{k})}\\ \rho_{(2,1),(1,1)}&\rho_{(2,1),(1,2)}&\ldots&\rho_{(2,1),(1,j_{1})}&\rho_{(2, 1),(2,1)}&\ldots&\rho_{(2,1),(K,j_{k})}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ \rho_{(K,j_{k}),(1,1)}&\rho_{(K,j_{k}),(1,2)}&\ldots&\rho_{(K,j_{k}),(1,j_{1} )}&\rho_{(K,j_{k}),(2,1)}&\ldots&\rho_{(K,j_{k}),(K,j_{k})}\end{pmatrix}. \tag{2.3}\]
where \(\rho_{(k,j),(k^{\star},j^{\star})}\) equals one of the following: If \(k=k^{\star}\) and \(j=j^{\star}\) then \(\rho_{(k,j),(k^{\star},j^{\star})}=1\); If \(k=k^{\star}\) and \(j<j^{\star}\) then
\[\rho_{(k,j),(k^{\star},j^{\star})}=\left(\sqrt{r_{k,j}^{-1}+(r_{0,k,j}-r(k))^{-1}}\,\sqrt{r_{k,j^{\star}}^{-1}+(r_{0,k,j^{\star}}-r(k))^{-1}}\right)^{-1}\left(\frac{1}{r_{k,j^{\star}}}+\frac{1}{r_{0,k,j^{\star}}-r(k)}\right);\]
and if \(k\neq k^{\star}\) where \(n(k)<n(k^{\star})\) then
\[\rho_{(k,j),(k^{\star},j^{\star})}=\max\left[0,\;\left(\sqrt{r_{k,j}^{-1}+(r_{0,k,j}-r(k))^{-1}}\,\sqrt{r_{k^{\star},j^{\star}}^{-1}+(r_{0,k^{\star},j^{\star}}-r(k^{\star}))^{-1}}\right)^{-1}\left(\frac{\min\!\big(r_{0,k,j}-r(k^{\star}),\,r_{0,k^{\star},j^{\star}}-r(k^{\star})\big)}{[r_{0,k,j}-r(k)][r_{0,k^{\star},j^{\star}}-r(k^{\star})]}\right)\right].\]
As seen here, if treatment \(k^{\star}\) is added to the platform trial after the \(j^{\text{th}}\) stage of treatment \(k\), then the correlation equals \(0\) as there are no shared controls. The proposed methodology allows different critical boundaries to be used for each treatment \(k\), as shown in Equation (2.2).
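These two correlation expressions can be transcribed directly into code. The sketch below is our own illustration (the function name `rho_pair` and its arguments are not from the paper) and assumes treatment \(k\) was added no later than treatment \(k^{\star}\), i.e. \(r(k)\leq r(k^{\star})\).

```r
# Sketch: correlation between Z_{k,j} and Z_{k*,j*}, written in terms of the
# allocation ratios r_{k,j}, r_{0,k,j} and r(k) defined in Section 2.1.
rho_pair <- function(r_kj, r0_kj, r_k, r_ksjs, r0_ksjs, r_ks, same_treatment) {
  sd1 <- sqrt(1 / r_kj   + 1 / (r0_kj   - r_k))
  sd2 <- sqrt(1 / r_ksjs + 1 / (r0_ksjs - r_ks))
  if (same_treatment) {
    # k = k*, j < j*: shared treatment and shared control patients
    (1 / r_ksjs + 1 / (r0_ksjs - r_ks)) / (sd1 * sd2)
  } else {
    # k != k*: only controls recruited while both arms were open are shared
    shared <- min(r0_kj - r_ks, r0_ksjs - r_ks)
    max(0, shared / ((r0_kj - r_k) * (r0_ksjs - r_ks)) / (sd1 * sd2))
  }
}

# Equal stage sizes with treatment 2 added at the first interim (r(2) = 1):
# correlation between Z_{1,2} and Z_{2,1} is 1/(2*sqrt(2)), about 0.354.
rho_pair(r_kj = 2, r0_kj = 2, r_k = 0, r_ksjs = 1, r0_ksjs = 2, r_ks = 1,
         same_treatment = FALSE)
```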
If it is assumed that there is an equal number of stages per treatment and equal allocation across all the active treatments then, if one uses the same stopping boundary shape for each treatment, the FWER can be calculated directly using a single set of boundaries. This is because each treatment then has the same pairwise error rate (PWER) (Bratton et al., 2016; Choodari-Oskooei et al., 2020; Greenstreet et al., 2021). This also removes the potential issue of time trends caused by changing allocation ratios.
Therefore, to find the boundaries one can use a single scalar parameter \(a\) with the functions \(L_{k}=f(a)\) and \(U_{k}=g(a)\), where \(f\) and \(g\) determine the shapes of the lower and upper boundaries, respectively. This is similar to the method presented in Magirr et al. (2012).
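The sketch below (an illustration under our own assumptions, not the paper's code) puts these pieces together for \(K=2\) treatments and \(J=2\) stages with equal allocation and the second arm added at the first interim: it evaluates Equation (2.2) under the global null with `mvtnorm::pmvnorm` and searches over the single scalar \(a\), using one common parameterisation of binding triangular boundary shapes (an assumption on our part); with these assumptions it should approximately reproduce the boundaries reported in Table 1.

```r
library(mvtnorm)

# Correlation matrix of (Z_{1,1}, Z_{1,2}, Z_{2,1}, Z_{2,2}) from Equation (2.3),
# for r_{k,.} = (1, 2), r_{0,1,.} = (1, 2), r_{0,2,.} = (2, 3), r(1) = 0, r(2) = 1.
Sigma <- matrix(c(1,          1/sqrt(2),     0,             0,
                  1/sqrt(2),  1,             1/(2*sqrt(2)), 1/4,
                  0,          1/(2*sqrt(2)), 1,             1/sqrt(2),
                  0,          1/4,           1/sqrt(2),     1), 4, 4, byrow = TRUE)

# Triangular boundary shapes g(a) and f(a), with information fractions 1/2 and 1
tfrac <- c(1, 2) / 2
g <- function(a) a * (1 + tfrac) / sqrt(tfrac)      # upper boundary (u_{k,1}, u_{k,2})
f <- function(a) a * (3 * tfrac - 1) / sqrt(tfrac)  # lower boundary (l_{k,1}, l_{k,2})

# FWER under the global null, Equation (2.2): one minus the probability that
# both treatments stop for futility (at stages j1 and j2) without ever rejecting.
fwer <- function(a) {
  u <- g(a); l <- f(a)
  no_reject <- 0
  for (j1 in 1:2) for (j2 in 1:2) {
    keep <- c(seq_len(j1), 2 + seq_len(j2))         # rows/columns of Sigma used
    lo <- c(if (j1 == 2) l[1], -Inf, if (j2 == 2) l[1], -Inf)
    hi <- c(if (j1 == 2) u[1], l[j1], if (j2 == 2) u[1], l[j2])
    no_reject <- no_reject +
      as.numeric(pmvnorm(lower = lo, upper = hi, sigma = Sigma[keep, keep],
                         algorithm = GenzBretz(abseps = 1e-5)))
  }
  1 - no_reject
}

# Search for a giving a one-sided FWER of 2.5%; under these assumptions the
# result should be close to u = (2.501, 2.358) and l = (0.834, 2.358).
a_star <- uniroot(function(a) fwer(a) - 0.025, lower = 0.5, upper = 2)$root
round(rbind(U = g(a_star), L = f(a_star)), 3)
```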
### Power
When designing a multi-arm trial in which every treatment is tested until it is stopped for futility or superiority, regardless of the other treatments, different definitions of power could be considered. The power of a study concerns the probability that the trial results in some or all of the treatments going forward. The sample size of the study is then found to ensure that the chosen power is greater than or equal to some chosen value \(1-\beta\).
One may be interested in ensuring that at least one treatment is taken forward from the study. This can be split into two types of power discussed in the literature. The first is the disjunctive power (Urach and Posch, 2016; Choodari-Oskooei et al., 2020; Hamasaki et al., 2021) which is the probability of taking at least one treatment forward. The second is the pairwise power which is the probability of taking forward a given treatment which has a clinically relevant effect (Choodari-Oskooei et al., 2020; Royston et al., 2011). In the Supporting Information Section 2 the equations needed to calculate the disjunctive power (\(P_{D}\)) are given.
Another way of thinking of powering a study is the probability of taking forward all the treatments which have a clinically relevant effect. This is known as the conjunctive power of a study (Urach and Posch, 2016; Choodari-Oskooei et al., 2020; Hamasaki et al., 2021; Serra et al., 2022). For the conjunctive power we prove that it is lowest when all the treatments have the clinically relevant effect.
#### 2.3.1 Pairwise power
The pairwise power of a treatment does not depend on the other active treatments, because it only involves the test statistics of the treatment of interest, whose distribution is unaffected by the effects of the other treatments. Therefore we only need to consider the probability that the treatment of interest, with a clinically relevant effect, is found superior to the control. The pairwise power for treatment \(k\) (\(P_{pw,k}\)) with the clinically relevant effect \(\theta^{\prime}\) is:
\[P_{pw,k}=\sum_{j=1}^{J_{k}}\Phi(L_{k,j}^{+}(\theta^{\prime}),U_{k,j}^{+}(\theta^{\prime}),\ddot{\Sigma}_{k,j}), \tag{2.4}\]
with
\[L_{k,j}^{+}(\theta_{k})=(l_{k,1}-\frac{\theta_{k}}{\sqrt{I_{k,1}}},\ldots,l_{k,j-1}-\frac{\theta_{k}}{\sqrt{I_{k,j-1}}},u_{k,j}-\frac{\theta_{k}}{\sqrt{I_{k,j}}}) \tag{2.5}\] \[U_{k,j}^{+}(\theta_{k})=(u_{k,1}-\frac{\theta_{k}}{\sqrt{I_{k,1}}},\ldots,u_{k,j-1}-\frac{\theta_{k}}{\sqrt{I_{k,j-1}}},\infty). \tag{2.6}\]
and \(I_{k,j}=\sigma^{2}(n_{k,j}^{-1}+(n_{0,k,j}-n(k))^{-1})\). The \((i,i^{\star})^{\text{th}}\) element (\(i\leq i^{\star}\)) of the covariance matrix \(\ddot{\Sigma}_{k,j}\) is
\[\Bigg{(}\sqrt{r_{k,i}^{-1}+(r_{0,k,i}-r(k))^{-1}}\sqrt{r_{k,i^{\star}}^{-1}+(r_ {0,k,i^{\star}}-r(k))^{-1}}\Bigg{)}^{-1}\Bigg{(}\frac{1}{r_{k,i^{\star}}}+ \frac{1}{r_{0,k,i^{\star}}-r(k)}\Bigg{)}.\]
One can then design the trial so that the pairwise power \(P_{pw,k}\) is greater than or equal to some chosen \(1-\beta\) for every treatment \(k\). If one has an equal number of stages per treatment and equal allocation across all the active treatments with the same stopping boundaries, so that \(n_{k}=n_{k^{\star}}\) for all \(k,k^{\star}\in 1,\ldots,K\), then the pairwise power is equal for each treatment. Therefore we define \(n=n_{k}\) for all \(k\in 1,\ldots,K\). To ensure the pairwise power is controlled, keep increasing \(n\) until \(P_{pw}\geq 1-\beta\), where \(P_{pw}=P_{pw,k}\) for all \(k\in 1,\ldots,K\).
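A minimal sketch of this sample size search is given below (our own illustration, not the paper's code), assuming two equally sized stages, 1:1 concurrent allocation (so that \(n_{0,k,j}-n(k)=n_{k,j}\)), \(\theta^{\prime}=-\log(0.69)\), \(\sigma=1\) and the Table 1 boundaries.

```r
library(mvtnorm)

u <- c(2.501, 2.358); l <- c(0.834, 2.358)          # Table 1 stopping boundaries
theta <- -log(0.69); sigma <- 1
rho <- matrix(c(1, 1/sqrt(2), 1/sqrt(2), 1), 2, 2)  # corr(Z_{k,1}, Z_{k,2})

pairwise_power <- function(n) {                     # n = patients per arm per stage
  I <- sigma^2 * 2 / (n * 1:2)                      # I_{k,j} = sigma^2 (1/n_{k,j} + 1/n_{k,j})
  drift <- theta / sqrt(I)
  p1 <- pnorm(u[1] - drift[1], lower.tail = FALSE)  # reject at stage 1
  p2 <- as.numeric(pmvnorm(lower = c(l[1] - drift[1], u[2] - drift[2]),
                           upper = c(u[1] - drift[1], Inf),
                           sigma = rho))            # continue, then reject at stage 2
  p1 + p2
}

n <- 2
while (pairwise_power(n) < 0.8) n <- n + 1
c(n_per_stage = n, power = round(pairwise_power(n), 3))  # close to the 76 reported in Table 1
```

The same search, with the conjunctive power of Section 2.3.2 in place of the pairwise power, gives the larger sample size of the conjunctive design.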
If one is designing a trial in which a set number of patients is recruited to the control before an active treatment \(k\) is added, so that \(n(k)\) is predefined before calculating the boundaries and sample size, one needs to use an approach such as Algorithm 1 below. This is because when the sample size increases there is no increase in \(n(k)\), which changes the allocation ratio between \(r(k)\) and \(r_{0,k,j}\) for each \(j\), and therefore requires the bounds to be recalculated for the given \(r(k)\). If instead the focus is on new arms being added after a set percentage of the way through the trial, this issue no longer arises, as the allocation ratios stay the same and the bounds only need to be calculated once.
```
0 Begin by assuming \(\mathbf{n}(\mathbf{K})=(0,0,\ldots,0)\) and find the stopping boundaries to control the FWER. Now calculate \(n\) such that the pairwise power is greater than or equal to a pre-specified \((1-\beta)\). Then repeat the following iterative steps until the pairwise power, given the true \(\mathbf{n}(\mathbf{K})\), is greater than or equal to \((1-\beta)\):
1 Find the stopping boundaries to control the FWER for the true predefined \(\mathbf{n}(\mathbf{K})\) given the current \(n\).
2 Calculate \(P_{pw}\) for the given boundaries.
3 If \(P_{pw}\geq 1-\beta\) then stop, else increase \(n\) by 1 and repeat steps 1-3.
```
**Algorithm 1** Iterative approach to compute the \(n\) for the pairwise power with predefined \(\mathbf{n}(\mathbf{K})\)
#### 2.3.2 Conjunctive power
The conjunctive power is defined as the probability of taking forward all the treatments which have a clinically relevant effect. We begin by proving when the conjunctive power is at its lowest. We define the events
\[B_{k,j}(\theta_{k})= [l_{k,j}+(\mu_{k}-\mu_{0}-\theta_{k})I_{k,j}^{-1/2}<Z_{k,j}<u_{k,j}+(\mu_{k}-\mu_{0}-\theta_{k})I_{k,j}^{-1/2}],\] \[C_{k,j}(\theta_{k})= [Z_{k,j}>u_{k,j}+(\mu_{k}-\mu_{0}-\theta_{k})I_{k,j}^{-1/2}],\]
where \(B_{k,j}(\theta_{k})\) defines the event that treatment \(k\) continues to the next stage and \(C_{k,j}(\theta_{k})\) defines the event that treatment \(k\) is found superior to the control at stage \(j\). If \(\mu_{k}-\mu_{0}=\theta_{k}\) for \(k=1,\ldots,K\), the event that \(H_{01},\ldots,H_{0K}\) are all rejected (\(\bar{W}_{K}(\Theta)\)) is equivalent to
\[\bar{W}_{K}(\Theta)=\bigcap_{k\in\{m_{1},\ldots,m_{K}\}}\Bigg{(}\bigcup_{j=1}^{ J_{k}}\Bigg{[}\Bigg{(}\bigcap_{i=1}^{j-1}B_{k,i}(\theta_{k})\Bigg{)}\cap C_{k,j}( \theta_{k})\Bigg{]}\Bigg{)},\]
where \(\Theta=\{\theta_{1},\theta_{2},\ldots,\theta_{K}\}\).
**Theorem 2.1**.: _For any \(\Theta\), \(P(\text{reject all }H_{0k}\) for which \(\theta_{k}\geq\theta^{\prime}|\Theta)\geq P(\text{reject all }H_{0k}\) for which \(\theta_{k}\geq\theta^{\prime}|(\mu_{1}=\mu_{2}=\ldots=\mu_{K}=\mu_{0}+\theta^{ \prime}))\)._
Proof.: For any \(\epsilon_{k}<0\),
\[\bigcup_{j=1}^{J_{k}}\Bigg{[}\bigg{(}\bigcap_{i=1}^{j-1}B_{k,i}(\theta_{k}+ \epsilon_{k})\bigg{)}\cap C_{k,j}(\theta_{k}+\epsilon_{k})\Bigg{]}\subseteq \bigcup_{j=1}^{J_{k}}\Bigg{[}\bigg{(}\bigcap_{i=1}^{j-1}B_{k,i}(\theta_{k}) \bigg{)}\cap C_{k,j}(\theta_{k})\Bigg{]}.\]
Take any
\[w=(Z_{k,1},\ldots,Z_{k,J})\in\bigcup_{j=1}^{J_{k}}\Bigg{[}\bigg{(}\bigcap_{i= 1}^{j-1}B_{k,i}(\theta_{k}+\epsilon_{k})\bigg{)}\cap C_{k,j}(\theta_{k}+ \epsilon_{k})\Bigg{]}.\]
For some \(q\in\{1,\ldots,J_{k}\}\), \(Z_{k,q}\in C_{k,q}(\theta_{k}+\epsilon_{k})\) and \(Z_{k,j}\in B_{k,j}(\theta_{k}+\epsilon_{k})\) for \(j=1,\ldots,q-1\). Now \(Z_{k,q}\in C_{k,q}(\theta_{k}+\epsilon_{k})\) implies that \(Z_{k,q}\in C_{k,q}(\theta_{k})\). Furthermore \(Z_{k,j}\in B_{k,j}(\theta_{k}+\epsilon_{k})\) implies that \(Z_{k,j}\in B_{k,j}(\theta_{k})\cup C_{k,j}(\theta_{k})\) for \(j=1,\ldots,q-1\). Therefore,
\[w\in\bigcup_{j=1}^{J_{k}}\Bigg{[}\bigg{(}\bigcap_{i=1}^{j-1}B_{k,i}(\theta_{k} )\bigg{)}\cap C_{k,j}(\theta_{k})\Bigg{]}.\]
Next, consider any \(m_{1},\ldots,m_{K}\) where \(m_{1}\in\{1,\ldots,K\}\) and \(m_{k}\in\{1,\ldots,K\}\backslash\{m_{1},\ldots,m_{k-1}\}\), such that \(\theta_{m_{1}},\ldots,\theta_{m_{l}}\geq\theta^{\prime}\) and \(\theta_{m_{l+1}},\ldots,\theta_{m_{K}}<\theta^{\prime}\). Let \(\Theta_{l}=(\theta_{m_{1}},\ldots,\theta_{m_{l}})\). Then
\[P(\text{reject all }H_{0k}\text{ for which }\theta_{k}\geq\theta^{\prime}|\Theta) =P(\bar{W}_{l}(\Theta_{l}))\] \[\geq P(\bar{W}_{l}(\Theta^{\prime}))\] \[\geq P(\bar{W}_{K}(\Theta^{\prime}))\] \[=P(\text{reject all }H_{0k}\text{ for which }\theta_{k}\geq\theta^{\prime}|\mu_{1}=\mu_{2}=\ldots=\mu_{K}=\mu_{0}+\theta^{\prime}).\]
where \(\Theta^{\prime}=(\theta^{\prime},\ldots,\theta^{\prime})\).
It follows from Theorem 2.1 that the conjunctive power (\(P_{C}\)) is minimised when all treatments have the smallest interesting treatment effect. In order to ensure the conjunctive power is greater than or equal to level \(1-\beta\), we rearrange the events \(B_{k,j}(\theta_{k})\) and \(C_{k,j}(\theta_{k})\) to find
\[P_{C}=P(\bar{W}_{l}(\Theta^{\prime}))=\sum_{\begin{subarray}{c}j_{k}=1\\ k=1,2,\ldots,K\end{subarray}}^{J_{k}}\Phi(\mathbf{L}_{\mathbf{jk}}^{+}(\Theta ^{\prime}),\mathbf{U}_{\mathbf{jk}}^{+}(\Theta^{\prime}),\Sigma_{\mathbf{jk}}), \tag{2.7}\]
where \(\mathbf{U}_{\mathbf{jk}}^{+}(\Theta^{\prime})=(U_{1,j_{1}}^{+}(\theta^{\prime} ),\ldots,U_{K,j_{K}}^{+}(\theta^{\prime}))\) and \(\mathbf{L}_{\mathbf{jk}}^{+}(\Theta^{\prime})=(L_{1,j_{1}}^{+}(\theta^{\prime} ),\ldots,L_{K,j_{K}}^{+}(\theta^{\prime}))\) with \(U_{k,j_{k}}^{+}(\theta_{k})\) and \(L_{k,j_{k}}^{+}(\theta_{k})\) defined in Equation (2.6) and Equation (2.5), respectively. The correlation matrix \(\Sigma_{\mathbf{jk}}\) is the same as that given for FWER in Equation (2.3).
When one has an equal number of stages and equal allocation, to find the sample size one keeps increasing \(n\) until \(P_{C}\geq 1-\beta\). In the case of fixed \(\mathbf{n}(\mathbf{K})\) one can use Algorithm 1, now replacing the pairwise power with the conjunctive power.
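The sketch below (our own illustration) evaluates Equation (2.7) for the Table 1 conjunctive-power design, i.e. \(K=2\), \(J=2\), \(n=96\) patients per arm per stage, \(\theta^{\prime}=-\log(0.69)\) and \(\sigma=1\), with the correlation matrix derived as in Section 2.2; the result should be close to the 0.801 reported in Table 2.

```r
library(mvtnorm)

u <- c(2.501, 2.358); l <- c(0.834, 2.358)
theta <- -log(0.69); sigma <- 1; n <- 96
drift <- theta / sqrt(sigma^2 * 2 / (n * 1:2))      # theta' / sqrt(I_{k,j}), j = 1, 2

Sigma <- matrix(c(1,          1/sqrt(2),     0,             0,
                  1/sqrt(2),  1,             1/(2*sqrt(2)), 1/4,
                  0,          1/(2*sqrt(2)), 1,             1/sqrt(2),
                  0,          1/4,           1/sqrt(2),     1), 4, 4, byrow = TRUE)

# Shifted limits L^+_{k,j}(theta') and U^+_{k,j}(theta') for rejection at stage j
lims <- function(j) {
  if (j == 1) list(lo = u[1] - drift[1], hi = Inf)
  else list(lo = c(l[1] - drift[1], u[2] - drift[2]),
            hi = c(u[1] - drift[1], Inf))
}

p_conj <- 0
for (j1 in 1:2) for (j2 in 1:2) {
  keep <- c(seq_len(j1), 2 + seq_len(j2))
  a <- lims(j1); b <- lims(j2)
  p_conj <- p_conj + as.numeric(pmvnorm(lower = c(a$lo, b$lo),
                                        upper = c(a$hi, b$hi),
                                        sigma = Sigma[keep, keep]))
}
round(p_conj, 3)
```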
### Sample size distribution and expected sample size
The determination of sample size distribution and expected sample size involves calculating the probability for each outcome of the trial, denoted as \(Q_{\mathbf{j_{k}},\mathbf{q_{k}}}\). Here, \(\mathbf{q_{k}}=(q_{1},\ldots,q_{K})\) is defined, where \(q_{k}=0\) indicates that treatment \(k\) falls below the lower stopping boundary at point \(j_{k}\), and \(q_{k}=1\) indicates that treatment \(k\) exceeds the upper stopping boundary at point \(j_{k}\). We find
\[Q_{\mathbf{j_{k}},\mathbf{q_{k}}}= \Phi(\tilde{\mathbf{L}}_{\mathbf{j_{k}},\mathbf{q_{k}}}(\Theta), \tilde{\mathbf{U}}_{\mathbf{j_{k}},\mathbf{q_{k}}}(\Theta),\Sigma_{\mathbf{j_ {k}}}),\]
with \(\mathbf{j_{k}}\) one can define \(\tilde{\mathbf{L}}_{\mathbf{j_{k}},\mathbf{q_{k}}}(\Theta)=(\tilde{L}_{1,j_{1},q_{1}}(\theta_{1}),\ldots,\tilde{L}_{K,j_{K},q_{K}}(\theta_{K}))\) and \(\tilde{\mathbf{U}}_{\mathbf{j_{k}},\mathbf{q_{k}}}(\Theta)=(\tilde{U}_{1,j_{1},q_{1}}(\theta_{1}),\)\(\ldots,\tilde{U}_{K,j_{K},q_{K}}(\theta_{K}))\) where
\[\tilde{L}_{k,j,q_{k}}(\theta_{k})= (l_{k,1}-\frac{\theta_{k}}{\sqrt{I_{k,1}}},\ldots,l_{k,j-1}- \frac{\theta_{k}}{\sqrt{I_{k,j-1}}},[\mathbb{1}\left(q_{k}=0\right)(-\infty)+ u_{k,j_{k}}]-\frac{\theta_{k}}{\sqrt{I_{k,j}}}),\] \[\tilde{U}_{k,j,q_{k}}(\theta_{k})= (u_{k,1}-\frac{\theta_{k}}{\sqrt{I_{k,1}}},\ldots,u_{k,j-1}- \frac{\theta_{k}}{\sqrt{I_{k,j-1}}},[\mathbb{1}\left(q_{k}=1\right)(\infty)+l_ {k,j_{k}}]-\frac{\theta_{k}}{\sqrt{I_{k,j}}}),\]
respectively. Each \(Q_{\mathbf{j_{k}},\mathbf{q_{k}}}\) is associated with its total sample size \(N_{\mathbf{j_{k}},\mathbf{q_{k}}}\) for that given \(\mathbf{j_{k}}\) and \(\mathbf{q_{k}}\):
\[N_{\mathbf{j_{k}},\mathbf{q_{k}}}= \bigg{(}\sum_{k=1}^{K}n_{k,j_{k}}\bigg{)}+\max_{k\in 1,\ldots K}(n_{0,k,j_{k}}),\]
This shows that recruitment to the control treatment continues until every active treatment has stopped; at the earliest this is when the last active treatment to be added has had at least one analysis. To obtain the sample size distribution, as similarly done in Greenstreet et al. (2021), we group all the values of \(\mathbf{j_{k}}\) and \(\mathbf{q_{k}}\) that give the same value of \(N_{\mathbf{j_{k}},\mathbf{q_{k}}}\) with their corresponding \(Q_{\mathbf{j_{k}},\mathbf{q_{k}}}\). This set of \(Q_{\mathbf{j_{k}},\mathbf{q_{k}}}\) is then summed to give the probability of the realisation of this sample size. To calculate the sample size distribution for each active arm, group \(n_{k,j_{k}}\) with its corresponding \(Q_{\mathbf{j_{k}},\mathbf{q_{k}}}\); this can similarly be done for the control treatment. The expected sample size for a given \(\Theta\), denoted as \(E(N|\Theta)\), is obtained by summing over all possible combinations of \(\mathbf{j_{k}}\) and \(\mathbf{q_{k}}\),
\[E(N|\Theta)=\sum_{\begin{subarray}{c}j_{k}=1\\ k=1,2,\ldots,K\end{subarray}}^{J_{k}}\sum_{\begin{subarray}{c}q_{k}\in\{0,1\}\\ k=1,2,\ldots,K\end{subarray}}Q_{\mathbf{j_{k}},\mathbf{q_{k}}}N_{\mathbf{j_{k}},\mathbf{q_{k}}}. \tag{2.8}\]
The expected sample size for multiple different treatment effects (\(\Theta=\{\theta_{1},\ldots,\theta_{K}\}\)) can then be found using Equation (2.8).
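As a simple check on these expressions, the sketch below (our own Monte Carlo illustration, not the paper's code) simulates the joint z-statistics of the Table 1 pairwise-power design (\(n=76\) per arm per stage, \(n(2)=76\)) and estimates the expected sample size and the three powers when \(\theta_{1}=\theta_{2}=\theta^{\prime}\); Table 2 reports \(E(N)=420.6\), pairwise power 0.800, conjunctive power 0.660 and disjunctive power 0.941 for this case.

```r
library(mvtnorm)
set.seed(2023)

u <- c(2.501, 2.358); l <- c(0.834, 2.358)
theta <- -log(0.69); sigma <- 1; n <- 76
drift <- theta / sqrt(sigma^2 * 2 / (n * 1:2))        # mean of (Z_{k,1}, Z_{k,2})

Sigma <- matrix(c(1,          1/sqrt(2),     0,             0,
                  1/sqrt(2),  1,             1/(2*sqrt(2)), 1/4,
                  0,          1/(2*sqrt(2)), 1,             1/sqrt(2),
                  0,          1/4,           1/sqrt(2),     1), 4, 4, byrow = TRUE)

Z <- rmvnorm(1e5, mean = rep(drift, 2), sigma = Sigma)  # columns: Z11, Z12, Z21, Z22

# Stage at which each arm stops (binding boundaries)
stage <- function(z1) ifelse(z1 > u[1] | z1 < l[1], 1, 2)
j1 <- stage(Z[, 1]); j2 <- stage(Z[, 3])

# Total sample size: both active arms plus the control, which is recruited
# until the later-added arm (added after n(2) = 76 controls) has stopped.
N <- n * j1 + n * j2 + pmax(n * j1, n + n * j2)
mean(N)                                               # Monte Carlo estimate of E(N)

# The same draws give estimates of the powers reported in Table 2
rej <- function(z1, z2, j) ifelse(j == 1, z1 > u[1], z2 > u[2])
r1 <- rej(Z[, 1], Z[, 2], j1); r2 <- rej(Z[, 3], Z[, 4], j2)
round(c(pairwise = mean(r1), conjunctive = mean(r1 & r2), disjunctive = mean(r1 | r2)), 3)
```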
## 3 Motivating trial example
### Setting
One example of a platform trial is FLAIR, which focused on chronic lymphocytic leukaemia (Howard et al., 2021). FLAIR initially planned to incorporate an additional active treatment arm
and conduct an interim analysis midway through the intended sample size for each treatment. During the actual trial, two extra arms were introduced, including an additional control arm. The original trial design primarily addressed the pairwise type I error due to the inclusion of both additional experimental and control arms.
Following Greenstreet et al. (2021), a hypothetical trial that mirrors some aspects of FLAIR will be studied. In this hypothetical trial the familywise error rate (FWER) in the strong sense will be controlled. Controlling the FWER may be seen as crucial in this scenario, as the trial aims to assess various combinations of treatments involving a common compound for all active treatments (Wason et al., 2014). There is an initial active treatment arm, a control arm, and a planned addition of one more active treatment arm during the trial. We apply the proposed methodology to ensure FWER control and consider the conjunctive power and pairwise power.
The pairwise power, rather than the disjunctive power, is the main focus of this study, as a potential drawback of the disjunctive power is that it is highly dependent on the treatment effects of all the treatments in the study, even those without a clinically relevant effect. For example, assume one treatment has a clinically relevant effect and the rest have effect equal to the control treatment; then the disjunctive power will keep increasing the more treatments are added, if one keeps the same bounds, even though the probability of taking the correct treatment forward does not increase. Equally, the minimum value the disjunctive power can take is equal to the pairwise power, which occurs when only one treatment has a clinically relevant effect and the rest have an extreme negative effect. A further advantage of the pairwise power is that it gives the probability of the treatment with the greatest treatment effect being found, assuming that this treatment has effect equal to the clinically relevant effect.
Considering the planned effect size from FLAIR, we assume an interesting treatment difference of \(\theta^{\prime}=-\log(0.69)\) and a standard deviation of \(\sigma=1\). It should be noted that while FLAIR used a time-to-event endpoint, with 0.69 representing the clinically relevant hazard ratio between the experimental and control groups, our hypothetical trial will focus on continuous endpoints using a normal approximation of time-to-event endpoints as discussed in Jaki and Magirr (2013). The desired power is 80%. We will maintain the same power level as FLAIR while targeting a one-sided FWER of 2.5%. Each active treatment arm's interim analysis will be conducted midway through its recruitment, and 1:1 allocation will be used between the control and the active treatments, as done in FLAIR (Hillmen et al., 2023).
The difference between a design which controls the pairwise power and one which controls the conjunctive power will be studied in Section 3.2. Additionally, for both the pairwise power and the conjunctive power, the number of patients per arm per stage, the maximum sample size, the expected sample size and the disjunctive power will be studied. In Section 3.3 the effect of different numbers of patients recruited to the control before the second treatment is added (\(n(2)\)) will be studied, with the focus being on the expected sample size and the maximum sample size of the trial. The designs will be compared to running two completely separate independent trials, one for each of the 2 active treatments. When running two trials there would be less expectation to control the FWER across the two trials. Therefore, along with the fair comparison of 2.5% type I error control across the multiple separate studies, the setting in which the pairwise error rate of each trial is controlled at 2.5% will also be shown. In Section 3.4 the effect of using a more liberal FWER control compared to the type I error control for the separate trials is studied for trials with 3 and 4 active arms.
### Comparing the two types of power
We will consider the effect of adding the second treatment halfway through recruitment of the first active treatment, both for ensuring the pairwise power and the conjunctive power are at 80%. Binding triangular stopping boundaries will be used (Whitehead, 1997; Wason and Jaki, 2012; Li et al., 2020). The stopping boundaries, given in Table 1, are the same regardless of whether one is controlling the pairwise or the conjunctive power, as \(r(2)=r_{1,1}\) in both cases.
In Table 1 the sample size when ensuring that the pairwise power is greater than 80% is given. Both active treatments will have up to 152 patients recruited to them and the control treatment can have up to 228 patients. This is due to 76 patients already being recruited to the control before the second treatment is added. The maximum sample size for the pairwise power design is therefore \(\max(N)=152+152+228=532\). Additionally in Table 1 the sample size when ensuring that the conjunctive power is greater than 80% is given. The maximum sample size now is \(\max(N)=192+192+288=672\). The calculations were carried out using R (R Core Team, 2021) with the method given here having the multivariate normal probabilities being calculated using the packages mvtnorm(Genz et al., 2021) and gtools(Warnes et al., 2021). Code is available at [https://github.com/pgreenstreet/Platform_trial_multiple_superior](https://github.com/pgreenstreet/Platform_trial_multiple_superior).
Based on the two designs in Table 1, Table 2 gives the conjunctive power, pairwise power and disjunctive power for different values of \(\theta_{1}\) and \(\theta_{2}\), along with the expected sample size. The values of \(\theta_{1}\) and \(\theta_{2}\) are chosen to study the effects under the global null, when treatments have a clinically relevant effect, and when one of the active treatments performs considerably worse than the rest. Table 2 shows that when \(\theta_{1}\) and \(\theta_{2}\) both equal the clinically relevant effect \(\theta^{\prime}\), under the design for pairwise power the pairwise power of both treatments is 80.0%; the conjunctive power is 66.0%; the disjunctive power is 94.1%; and the expected sample size is 420.6. This highlights that when controlling the pairwise power, if both treatments have a clinically relevant effect there is a sizeable chance (34%) that one may miss at least one of the two treatments.
When studying the design in which the conjunctive power is controlled, one can see that the pairwise power and disjunctive power are now much greater compared to the pairwise power design. This comes with a large increase in both the expected and maximum sample size; for example, the maximum sample size has increased by 140 patients.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} Design & \(U_{k}\) & \(L_{k}\) & \((n_{k,1},n_{k,2})\) & \((n_{0,1,1},n_{0,1,2})\) & \((n_{0,2,1},n_{0,2,2})\) & \((n(1),n(2))\) & \(\max(N)\) \\ \hline Pairwise power & (2.501, 2.358) & (0.834, 2.358) & (76, 152) & (76, 152) & (152, 228) & (0, 76) & 532 \\ Conjunctive power & (2.501, 2.358) & (0.834, 2.358) & (96, 192) & (96, 192) & (192, 288) & (0, 96) & 672 \\ \end{tabular}
\end{table}
Table 1: The stopping boundaries and sample size of the proposed designs, for both control of pairwise power and of conjunctive power.
As seen in the conjunctive power section of Table 2, the disjunctive power when treatments 1 and 2 have effects \(\theta^{\prime},0\), respectively, does not equal the disjunctive power when the effects are \(0,\theta^{\prime}\). This is because the outcome of treatment 1's test statistic has a larger effect on treatment 2 than the other way round. For example, treatment 1's first stage is always independent of treatment 2. However, treatment 2's first stage is only independent of treatment 1 if treatment 1 stops after its first stage. Therefore \(\Sigma_{(1,2)}\neq\Sigma_{(2,1)}\). However, as can be seen, this difference in the cases studied is very small.
Table 2 also shows that when there is only one treatment with a clinically relevant effect, the conjunctive power equals the pairwise power of that treatment. When neither treatment has a clinically relevant effect, the conjunctive power equals 100%, as there are no treatments with a clinically relevant effect that need to be found; the trial has therefore already identified all the clinically relevant treatments, i.e. zero treatments.
The expected sample size is greatly dependent on which treatment has the clinically relevant effect and which does not. For example, when studying the design with pairwise power control, the expected sample size when the treatment effects are \(\theta^{\prime},0\) is 372.7, compared to 396.6 when the treatment effects are \(0,\theta^{\prime}\) for treatments 1 and 2, respectively. This difference arises because the probability of treatment \(k\) stopping after the first stage is higher when \(\theta_{k}=0\) than when \(\theta_{k}=\theta^{\prime}\). Therefore, when the second treatment has effect 0 the trial as a whole is more likely to stop earlier, which reduces the number of patients on average recruited to the control treatment.
In Table 2 it can be seen that the pairwise power for the treatment with a clinically relevant effect is equal to the disjunctive power when the other treatment has an extremely negative treatment effect compared to the control. This is because there is no longer any chance that the other treatment can be taken forward. Therefore \(\theta_{1}=-\infty,\ \theta_{2}=\theta^{\prime}\) or \(\theta_{1}=\theta^{\prime},\ \theta_{2}=-\infty\) is the point at which the pairwise, disjunctive and conjunctive power are all equal. When one treatment has effect \(\theta^{\prime}\) and the other has effect equal to the control, the disjunctive power is greater than the pairwise power, as there is still a chance that the other treatment may be taken forward. Table 2 also shows that when both treatments have effect 0 the disjunctive power is equal to the FWER for the trial. In addition, when a treatment has effect 0 the pairwise power for that treatment equals the PWER.
In the Supporting Information Section 3, results for using both O'Brien and Fleming (O'Brien and Fleming, 1979) and Pocock boundaries (Pocock, 1977) are shown, with the futility boundary equal to 0 (Magirr et al., 2012). Additionally, the results for using non-binding triangular stopping boundaries are shown in the Supporting Information Section 4. Overall, Table 1 and Table 2 have shown that the choice of which type of power to control may be highly dependent on the sample size available: a design that ensures conjunctive power of level \(1-\beta\) will also ensure pairwise power of at least \(1-\beta\), but the opposite does not hold. Correspondingly, the sample size for a trial designed for pairwise power will be less than that of a design for conjunctive power.
### Comparison with running separate trials
This section studies the effect on maximum and expected sample size depending on when the additional treatment arm is added to the platform trial. The examples for both conjunctive power and pairwise power are compared to running two separate trials. There are two settings
for separate trials which are considered. In Setting 1 the type I error across both trials is set to 2.5%; therefore, the type I error for each is \(1-\sqrt{1-0.025}=1.26\%\). In Setting 2 the type I error of each trial is controlled at 2.5%. For the separate trials which are compared to the pairwise power design, the power level for each is set to \(80\%\). This results in the following sample size and stopping boundaries for the two trials under Setting 1:
\[U_{1}=\begin{pmatrix}2.508&2.364\end{pmatrix},\quad L_{1}=\begin{pmatrix}0.836&2.364\end{pmatrix},\quad\begin{pmatrix}n_{1,1}&n_{1,2}\end{pmatrix}=\begin{pmatrix}77&154\end{pmatrix},\]
with \(n_{0,1,1}=n_{1,1}\), \(n_{0,1,2}=n_{1,2}\) and \(n(1)=0\). Setting 2 gives:
\[U_{1}=\begin{pmatrix}2.222&2.095\end{pmatrix},\quad L_{1}=\begin{pmatrix}0.741&2.095\end{pmatrix},\quad\begin{pmatrix}n_{1,1}&n_{1,2}\end{pmatrix}=\begin{pmatrix}65&130\end{pmatrix},\]
with \(n_{0,1,1}=n_{1,1}\), \(n_{0,1,2}=n_{1,2}\) and \(n(1)=0\). For comparison with the conjunctive power designs the probability of finding both treatments across the two trials is set to \(80\%\). The required power for each trial is therefore \(\sqrt{1-\beta}=0.894\). The boundaries remain the same for both settings as the type I error remains the same. The new sample size for Setting 1 is \(\begin{pmatrix}n_{1,1}&n_{1,2}\end{pmatrix}=\begin{pmatrix}98&196\end{pmatrix}\) and for Setting 2 is \(\begin{pmatrix}n_{1,1}&n_{1,2}\end{pmatrix}=\begin{pmatrix}85&170\end{pmatrix}\).
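For reference, the per-trial error and power levels used for these separate-trial comparators follow from simple arithmetic, as in the short sketch below (variable names are ours).

```r
K <- 2; alpha_overall <- 0.025; beta <- 0.2
alpha_each_setting1 <- 1 - (1 - alpha_overall)^(1 / K)  # 1.26%: error shared across trials
alpha_each_setting2 <- alpha_overall                    # 2.5%: error per trial
power_each_conjunctive <- (1 - beta)^(1 / K)            # 89.4%: to find both treatments
round(c(alpha_each_setting1, alpha_each_setting2, power_each_conjunctive), 4)
```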
Figure 1 gives the maximum sample size and the expected sample size under different \(\theta_{1},\theta_{2}\) depending on when the second treatment is added, for the pairwise power control of \(80\%\). Figure 2 gives similar results however the focus now is on control of the conjunctive power at \(80\%\).
As indicated in Figure 1, when controlling the pairwise power, if the second active treatment is introduced at the beginning of the trial, the total sample size required is \(456\), whereas if it is added at the end of recruitment for treatment \(1\), the total sample size becomes \(616\). This increase in sample size is attributable to two factors. Firstly, there is a necessity to increase
\begin{table}
\begin{tabular}{c c|c c|c|c|c} \multicolumn{7}{c}{**Design for pairwise power**} \\ \hline \multicolumn{2}{c|}{Treatment effect} & \multicolumn{2}{c|}{Pairwise power} & Conjunctive power & Disjunctive power & Expected sample size \\ \(\theta_{1}\) & \(\theta_{2}\) & \(P_{PW,1}\) & \(P_{PW,2}\) & \(P_{C}\) & \(P_{D}\) & \(E(N|\theta_{1},\theta_{2})\) \\ \hline \(\theta^{\prime}\) & \(\theta^{\prime}\) & 0.800 & 0.800 & 0.660 & 0.941 & 420.6 \\ \(\theta^{\prime}\) & 0 & 0.800 & 0.013 & 0.800 & 0.802 & 372.7 \\ \(\theta^{\prime}\) & \(-\infty\) & 0.800 & 0 & 0.800 & 0.800 & 342.9 \\ \(0\) & \(\theta^{\prime}\) & 0.013 & 0.800 & 0.800 & 0.802 & 396.6 \\ \(0\) & 0 & 0.013 & 0.013 & 1 & 0.025 & 348.7 \\ \(-\infty\) & \(\theta^{\prime}\) & 0 & 0.800 & 0.800 & 0.800 & 381.7 \\ \hline \multicolumn{7}{c}{**Design for conjunctive power**} \\ \hline \multicolumn{2}{c|}{Treatment effect} & \multicolumn{2}{c|}{Pairwise power} & Conjunctive power & Disjunctive power & Expected sample size \\ \(\theta_{1}\) & \(\theta_{2}\) & \(P_{PW,1}\) & \(P_{PW,2}\) & \(P_{C}\) & \(P_{D}\) & \(E(N|\theta_{1},\theta_{2})\) \\ \hline \(\theta^{\prime}\) & \(\theta^{\prime}\) & 0.890 & 0.890 & 0.801 & 0.979 & 508.1 \\ \(\theta^{\prime}\) & 0 & 0.890 & 0.013 & 0.890 & 0.890 & 463.0 \\ \(\theta^{\prime}\) & \(-\infty\) & 0.890 & 0 & 0.890 & 0.890 & 425.4 \\ \(0\) & \(\theta^{\prime}\) & 0.013 & 0.890 & 0.890 & 0.891 & 485.6 \\ \(0\) & 0 & 0.013 & 0.013 & 1 & 0.025 & 440.5 \\ \(-\infty\) & \(\theta^{\prime}\) & 0 & 0.890 & 0.890 & 0.890 & 466.7 \\ \end{tabular}
\end{table}
Table 2: Operating characteristics of the proposed designs under different values of \(\theta_{1}\) and \(\theta_{2}\), for both control of pairwise power and of conjunctive power.
the number of patients recruited to the control group until treatment 2 has completed the trial. Secondly, the decrease in correlation between the two treatments results in an enlargement of the boundaries to maintain control of the familywise error rate. It is this second factor which causes the small jumps in maximum sample size seen in Figures 1 and 2.
In Figure 1, when comparing the platform designs with pairwise power control to running two separate trials, it can be seen that, for the case where the pairwise error for each trial is \(2.5\%\), once the second treatment is added after \(64\) patients have been recruited to the control (\(n(2)\geq 64\)) the maximum sample size of running the platform design is greater than or equal to that of running two separate trials, which is \(520\) patients. However, when controlling the error across both separate trials, the maximum sample size is now the same as when adding the second treatment at the end of recruitment for the first treatment in the platform design, so \(616\). For Setting 1 it can be seen that the expected sample size for separate trials can be better than that of the platform design. In the case of \(\theta_{1}=-\infty\) and \(\theta_{2}=\theta^{\prime}\), once \(n(2)\geq 81\) the expected sample size of running the platform design is greater than that of running two separate trials. This is because in the platform approach the control cannot stop until each treatment has finished testing, whereas in the separate trial case each control group will stop as soon as its corresponding treatment is dropped. For Setting 1 there are some cases studied which cannot be seen in Figure 1: these are \(\theta_{1}=\theta^{\prime}\), \(\theta_{2}=\theta^{\prime}\) and \(\theta_{1}=\theta^{\prime}\), \(\theta_{2}=0\), as both are at the point \(n(2)\geq 117\), which matches that of \(\theta_{1}=\theta^{\prime}\), \(\theta_{2}=-\infty\). When studying the expected sample size of Setting 2 compared to the platform designs, it can be seen that if \(\theta_{1}=-\infty\) and \(\theta_{2}=\theta^{\prime}\), then once \(n(2)\geq 15\) the expected sample size of running the platform design is greater than that of running two separate trials. The expected sample size for two separate trials when \(\theta_{1}=-\infty\) and \(\theta_{2}=\theta^{\prime}\) is \(319.5\).
When controlling the conjunctive power, as in Figure 2, if the second active treatment is introduced at the beginning of the trial the total sample size required is \(558\), whereas if it is added at the end of recruitment for treatment \(1\) the total sample size becomes \(784\). Once again the maximum sample size for Setting 1 equals that of adding treatment \(2\) after treatment \(1\) has finished recruitment, so \(784\) patients. In Figure 2, when \(n(2)\geq 104\) the maximum sample size of running the platform design is greater than or equal to that of running two separate trials under Setting 2, which is \(680\) patients. Similarly to Figure 1, some lines overlap for Setting 1 in Figure 2, as \(n(2)=143\) is the point for both \(\theta_{1}=\theta^{\prime}\), \(\theta_{2}=\theta^{\prime}\) and \(\theta_{1}=\theta^{\prime}\), \(\theta_{2}=-\infty\), and \(n(2)=121\) is the point for both \(\theta_{1}=0\), \(\theta_{2}=\theta^{\prime}\) and \(\theta_{1}=0\), \(\theta_{2}=0\). When \(n(2)\geq 104\) for Setting 1, and \(n(2)\geq 39\) for Setting 2, the expected sample size of running the platform design is greater than that of running two separate trials when \(\theta_{1}=-\infty\) and \(\theta_{2}=\theta^{\prime}\). The expected sample size for running two separate trials when \(\theta_{1}=-\infty\) and \(\theta_{2}=\theta^{\prime}\) is \(475.3\) under Setting 1 and \(403.8\) under Setting 2.
Overall, Figures 1 and 2 have shown that there may be times when there is no benefit to running a platform trial with regards to sample size, depending on when the later treatment is added to the trial. This issue is further emphasised when there is no expectation to control the type I error across all the individual trials, as seen in Setting 2.
Figure 1: Both panels give the maximum sample size and the expected sample size under different \(\theta_{1},\theta_{2}\) depending on the value \(n(2)\), for the pairwise power control of 80%. Left panel: dashed vertical lines correspond to the points where the maximum/expected sample size of the trial becomes greater than running two separate trials with type I error control across both trials set to 2.5%. Right panel: dashed vertical lines correspond to the points where the maximum/expected sample size of the trial becomes greater than running two separate trials with type I error control for each trial set to 2.5%.
Figure 2: The maximum sample size and the expected sample size under different \(\theta_{1},\theta_{2}\) depending on the value \(n(2)\), for the conjunctive power control of 80%. Left panel: dashed vertical lines correspond to the points where the maximum/expected sample size of the trial becomes greater than running two separate trials under Setting 1. Right panel: dashed vertical lines correspond to the points where the maximum/expected sample size of the trial becomes greater than running two separate trials under Setting 2.
### Comparison with running separate trials under different controls of type I error
When designing a multi-arm trial one may find that the required control of the FWER is more liberal than the type I error control used for an individual trial, as seen in the TAILoR trial for example (Pushpakom et al., 2015, 2020). Therefore, in Table 3 we consider the effect of allowing one-sided FWER control of 5% compared to 2.5% type I error for the individual trials. In this table the same design parameters were used as above; however, the number of active arms in the hypothetical trial is now increased to 3 or 4, and the number of stages is either 1, 2 or 3. In Table 3 the focus is on controlling the power at the desired 80% level, with the pairwise power being the focus in the top half and the conjunctive power in the bottom half. When controlling the conjunctive power the power for each separate trial is \((1-\beta)^{1/K}\). In these hypothetical trials it is assumed that each one of the arms is added sequentially, with an equal gap between each one. Therefore in the 3 active arm case, if the second arm is added after 20 patients have been recruited to the control, then the third arm will be added after a total of 40 patients have been recruited to the control.
In Table 3 the first 2 columns give the number of active arms and stages for the platform trial, respectively. The third and fourth columns then give the sample size per stage and the maximum sample size of the individual trials, respectively. These have been chosen as they remain constant throughout, being unaffected by the timing of when the next arm is ready, since each trial is completely separate from the others. The remaining columns give the point at which there is no benefit, with regards to the maximum and expected sample size, of conducting a platform trial compared to running separate trials, with respect to \(n(k)-n(k-1)\). The value of \(n(k)-n(k-1)=n(2)\) as the first treatment is added at the beginning of the trial. In the Supporting Information Section 5 the plots for the 2 stage and 3 stage example trials given in Table 3 are shown.
Using Table 3, for the 3 active arm, 2 stage example each separate trial has \(n_{1,1}=65\) and \(n_{1,2}=130\). The total maximum sample size of running these 3 separate trials is therefore 780. Once the second treatment is planned to be added after 105 patients have been recruited to the control (and therefore 210 recruited to the control before treatment 3), there is no benefit in using the platform design with respect to maximum sample size. For the expected sample size four different configurations of the treatment effects are studied. The first (\(\Theta_{1}\)) assumes all the treatments have the clinically relevant effect, so \(\theta_{k}=\theta^{\prime}\) for \(k=1,\ldots,K\). The second (\(\Theta_{2}\)) assumes only the first treatment has a clinically relevant effect and the rest have effect equal to that of the control treatment, so \(\theta_{1}=\theta^{\prime}\), \(\theta_{k}=0\) for \(k=2,\ldots,K\). The third (\(\Theta_{3}\)) assumes only the last treatment has a clinically relevant effect and the rest equal the control, so \(\theta_{K}=\theta^{\prime}\), \(\theta_{k}=0\) for \(k=1,\ldots,K-1\). The fourth configuration (\(\Theta_{4}\)) assumes all the treatments have effect equal to that of the control treatment, i.e. the global null, so \(\theta_{k}=0\) for \(k=1,\ldots,K\). With regards to expected sample size, for the 4 treatment effect configurations studied here there is no benefit in using a platform trial once \(n(2)\) exceeds just 62 patients if \(\Theta_{3}\) is true, rising to 73 if \(\Theta_{1}\) is true.
Table 3 shows that the maximum sample size of running separate trials increases with the number of stages or arms. This is also the case when running the proposed platform trial design. With respect to maximum sample size, the more stages the trial has, the later a treatment can be added before the maximum sample size becomes worse than running separate trials. For example, when pairwise power is controlled, for a 1 stage 3 arm trial one should use separate trials (with regards to maximum sample size) once \(n(2)\) exceeds 90 patients, compared to 114 patients for a 3 arm 3 stage trial.
If the focus is on the expected sample size, then for the examples studied here increasing the number of stages results in a decrease in the time before one would switch to separate trials. For example, when controlling the conjunctive power, for the 4 arm trial it can be seen that the expected sample size under the global null for running separate trials becomes less than that of running the platform trial when \(n(2)=140\) for the 1 stage case, compared to \(n(2)=99\) for the 3 stage version. This is because the ability to have interim analyses saves more patients for separate trials with respect to expected sample size: in separate trials, when a treatment is stopped earlier either for futility or superiority, its control arm also stops. Therefore in this 4 arm example there are 4 sets of control patients which can stop early, compared to only 1 set for the platform design. Additionally, for the platform trial the control can only stop once all the active treatments have stopped. This is why the expected sample size under \(\Theta_{2}\) is less than that under \(\Theta_{3}\): if the final treatment has a clinically relevant effect then it will on average have more stages than a treatment with effect equal to that of the control, for the configuration studied here.
This section has therefore shown that there are periods in which using a platform trial can be beneficial with regards to sample size if one can use a more liberal type I error control compared to that used for individual trials. However, it has also shown that if treatments are added late into the trial there may be no benefit, highlighting the importance of considering which trial design should be used.
## 4 Discussion
This paper has built on the work of Greenstreet et al. (2021) to show how one can control the FWER for a trial in which treatments can be pre-planned to be added at any point. This work has then studied the different approaches for powering the trial when the trial will continue even if a superior treatment is found. The paper shows how the expected sample size and sample size distribution can be found. Finally, a hypothetical trial motivated by FLAIR (Howard et al., 2021) is discussed. That section evaluates the pairwise and conjunctive power when the second active treatment is added halfway through recruitment for the first active treatment, and investigates the operating characteristics for multiple values of \(\theta_{1}\) and \(\theta_{2}\). It then goes on to study the effect of adding the later treatments at different points in the platform design and compares these trial designs to running separate trials.
The design's flexibility to incorporate the addition of treatments at any point during a trial allows for the creation of multiple designs that depend on when the treatments are introduced. This approach works effectively until the completion of the initial stage for the treatments that initiated the trial. Up to this point, a treatment can be added when it becomes available, and the boundaries can be set accordingly. However, if the treatments are not ready until after the first analysis, two options can be pursued to avoid bias resulting from knowledge of the first stage results. Firstly, one can choose not to plan for the addition of the treatments and conduct separate trials. As demonstrated in Section 3, this approach may require fewer patients
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \multicolumn{9}{c}{**Design for pairwise power**} \\ \hline Active arms & Stages & \multicolumn{2}{c|}{Separate trial} & \(\min_{n(2)}(\max(N_{s})\) & \multicolumn{4}{c}{\(\min_{n(2)}(E(N_{s}|\Theta)\leq E(N|\Theta))\)} \\ \(K\) & \(J\) & \((n_{1,1},\ldots,n_{1,J})\) & \(\max(N_{s})\) & \(\leq\max(N))\) & \(\Theta_{1}\) & \(\Theta_{2}\) & \(\Theta_{3}\) & \(\Theta_{4}\) \\ \hline
3 & 1 & 115 & 690 & 90 & 90 & 90 & 90 & 90 \\
3 & 2 & (65, 130) & 780 & 105 & 73 & 72 & 62 & 66 \\
3 & 3 & (46, 92, 138) & 828 & 114 & 68 & 67 & 55 & 60 \\
4 & 1 & 115 & 920 & 79 & 79 & 79 & 79 & 79 \\
4 & 2 & (65, 130) & 1040 & 94 & 61 & 62 & 54 & 59 \\
4 & 3 & (46, 92, 138) & 1104 & 103 & 59 & 58 & 49 & 55 \\ \hline \multicolumn{9}{c}{**Design for conjunctive power**} \\ \hline Active arms & Stages & \multicolumn{2}{c|}{Separate trial} & \(\min_{n(2)}(\max(N_{s})\) & \multicolumn{4}{c}{\(\min_{n(2)}(E(N_{s}|\Theta)\leq E(N|\Theta))\)} \\ \(K\) & \(J\) & \((n_{1,1},\ldots,n_{1,J})\) & \(\max(N_{s})\) & \(\leq\max(N))\) & \(\Theta_{1}\) & \(\Theta_{2}\) & \(\Theta_{3}\) & \(\Theta_{4}\) \\ \hline
3 & 1 & (171) & 1026 & 143 & 143 & 143 & 143 & 143 \\
3 & 2 & (97, 194) & 1164 & 166 & 107 & 109 & 101 & 106 \\
3 & 3 & (68, 136, 204) & 1224 & 174 & 98 & 99 & 92 & 98 \\
4 & 1 & (185) & 1480 & 140 & 140 & 140 & 140 & 140 \\
4 & 2 & (105, 210) & 1680 & 167 & 102 & 109 & 103 & 109 \\
4 & 3 & (74, 148, 222) & 1776 & 182 & 93 & 99 & 93 & 99 \\ \end{tabular} Key: \(N_{s}\) is the sample size of running K separate trials, \(\Theta_{1}\): \(\theta_{k}=\theta^{\prime}\) for \(k=1,\ldots,K\); \(\Theta_{2}\): \(\theta_{1}=\theta^{\prime}\), \(\theta_{k}=0\) for \(k=2,\ldots,K\) ; \(\Theta_{3}\): \(\theta_{K}=\theta^{\prime}\), \(\theta_{k}=0\) for \(k=1,\ldots,K-1\); \(\Theta_{4}\): \(\theta_{k}=0\) for \(k=1,\ldots,K\).
\end{table}
Table 3: The comparison of using the proposed platform design with FWER of 5% one sided against running separate trials with type I error control of each at 2.5% one sided, for different numbers of arms and stages.
overall. Alternatively, one can predefine the times at which the treatments will be added and utilise the corresponding bounds. A drawback here is that if the treatments are not ready by the predefined points, they cannot be added. Nevertheless, for the remaining treatments, control of the familywise error rate will be maintained. Because the bounds are designed to control the FWER across all the hypotheses, not adding a treatment, and so removing a hypothesis, reduces the maximum value of the FWER.
This paper has highlighted a potential issue of increased expected and maximum sample size when requiring strong control of the familywise error rate for a platform trial in which an arm is added later. If one were to run two completely separate trials, control of the FWER across the trials would likely not be expected. As a result, there is a lot of time where there is no benefit to the platform trial design with regards to maximum or expected sample size, as was shown in Figure 1 and Figure 2 for Setting 2. This point has been further emphasised in Table 3, which shows that even with a more liberal FWER control compared to the type I error control of each individual trial there are still many points where one may be better off running separate trials with respect to sample size. This work therefore reiterates the importance of the discussions around type I error control in platform trials (Molloy et al., 2022; Wason et al., 2014, 2016; Howard et al., 2018; Proschan and Waclawiw, 2000; Proschan and Follmann, 1995; Nguyen et al., 2023).
If one instead wants to control the pairwise error rate, as done for example in STAMPEDE (Sydes et al., 2009), one can use Equation (2.4), now replacing \(\theta^{\prime}\) with 0. An additional advantage of using the PWER, if controlling the pairwise power, is that the stopping boundaries and the sample size required for each active arm are independent of when the arm is added. Therefore the only change will be how many patients need to be recruited to the control. However, one may find the PWER in a platform trial insufficient for error control (Wason et al., 2014; Molloy et al., 2022) and it may not meet the regulators' requirements.
Building upon this research, a study could be conducted to investigate the impact of having different numbers of stages and stopping boundaries while maintaining equal power and type I error for each treatment, utilizing the approach described in Section 2. However, such an investigation would likely require multiple changes in the allocation ratio, resulting in potential issues with time trends. One could therefore examine methods to handle these time trends, as explored in Lee and Wason (2020); Marschner and Schou (2022); Roig et al. (2023); Greenstreet et al. (2021). Furthermore, a change in allocation ratio between treatments can result in a different PWER and pairwise power for each treatment if the same boundaries are used for every treatment; one could therefore use an iterative approach such as that discussed in Greenstreet et al. (2021). Equally, one could study the effect of using non-concurrent controls, but once again this can face a large issue with time trends. The main issue with these time trends is that they are unknown. However, one could look into incorporating approaches to reduce the bias they potentially cause (Lee and Wason, 2020; Marschner and Schou, 2022; Wang et al., 2022; Saville et al., 2022).
In Section 3.4 it was assumed for the multi-arm trials that each treatment was added after an equal number of patients had been recruited to the control, so \(n(k)-n(k-1)=n(2)\) for \(k=2,\ldots,K\). This may, however, not be the case. One may therefore wish to consider the effect of having multiple treatments at the beginning of the study and then adding additional treatments later. The methodology presented in Section 2 allows for these changes. However, when it comes to the comparison designs there are now multiple options that can be chosen. As done in Section 3.4,
one could use separate trials for each comparison; however, one could also consider using multiple MAMS trials in which all treatments begin at once, or a mix of the two. Further points to be considered here are how one can evenly share the power across all these trial types, especially if the focus is on conjunctive power, and how the type I error should be defined for each comparison trial.
Furthermore, this work could be expanded to incorporate adaptive boundaries that adjust once a treatment is deemed effective, as discussed in Urach and Posch (2016) for the case of multi-arm multi-stage (MAMS) trials. However, such an adaptation would result in a less preplanned design and therefore potential further complications in understanding for the clinicians, the trial statisticians and the patients. Additionally, determining the point at which the conjunctive power is at its lowest may no longer be feasible, as dropping each arm would lead to lower bounds for the remaining treatments, thus affecting the conjunctive power assessment. This adaptive approach will also likely result in an uneven distribution of errors across the treatments added at different points. If one were then to adjust for this, one may encounter issues with time trends, as the allocation ratio may need to change mid-trial.
This paper has given a general formulation for designing a preplanned platform trial with a normal continuous endpoint; using the work of Jaki and Magirr (2013), one could apply this methodology to other endpoints such as the time-to-event endpoint used in FLAIR (Howard et al., 2021). When using this approach one should be aware of computational issues arising from calculating high-dimensional multivariate normal distributions if the trial design has a large number of arms and stages. If this is an issue, one can restrict to adding arms only at the interims, so that the method of Dunnett (1955) can be utilised, as discussed in Magirr et al. (2012); Greenstreet et al. (2021).
### Acknowledgements
This report is independent research supported by the National Institute for Health Research (NIHR300576). The views expressed in this publication are those of the authors and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health and Social Care (DHSC). TJ and PM also received funding from UK Medical Research Council (MC_UU_0002/14 and MC_UU_0002/19, respectively). This paper is based on work completed while PG was part of the EPSRC funded STOR-i centre for doctoral training (EP/S022252/1). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
### Conflict of Interest
The authors declare no potential conflict of interests. Alun Bedding is a shareholder of Roche Products Ltd.
There is growing interest in platform trials because of the flexibility they offer: new treatment arms can be incorporated into an ongoing trial and treatments can be stopped early. Controlling errors in such trials is important, and this paper introduces a preplanned, staged design that controls the family-wise error rate in a platform trial while allowing new treatment arms to be added at any point. The paper focuses on finding the sample size required to achieve a desired level of power when treatments continue to be tested, which is useful when further treatments are still to be tested even after a superior treatment has been found, in particular when other sponsors have treatments showing their own advantages or when multiple doses are being tested. A formula for determining the sample size is provided. Based on trials with multiple configurations, a motiv |
2303.07781 | Non-Concentration of Primes in $Γ\backslash PSL_2(\mathbb{R})$ | This paper generalizes the result of Sarnak and Ubis \cite{sarnak-ubis} about
non-concentration of primes in horocycle orbits on $PSL_2(\mathbb{Z})
\backslash PSL_2(\mathbb{R})$ to any lattice in $PSL_2(\mathbb{R})$. The proof
combines the asymptotic result of Str\"ombergsson \parencite{strombergsson} and
Venkatesh's method \parencite{venkatesh} with the approach of Sarnak and Ubis
of approximating horocycle pieces with periodic horocycles. The key step is to
establish a dichotomy between $\{\xi h(t), t \in [0, T] \}$ having good
equidistribution in $\Gamma \backslash PSL_2(\mathbb{R})$ and it being
approximable by closed horocycle pieces with small period. In a follow-up
paper, a similar approach will be used to show equidistribution of $\xi
h(n^{1+\gamma})$ for small $\gamma>0$, generalizing Venkatesh's result
\parencite{venkatesh} to non-compact $\Gamma$. | Lauritz Streck | 2023-03-14T10:45:56 | http://arxiv.org/abs/2303.07781v1 | # Non-concentration of primes in \(\Gamma\backslash PSL_{2}(\mathbb{R})\)
###### Abstract.
This paper generalizes the result of Sarnak and Ubis [9] about non-concentration of primes in horocycle orbits on \(PSL_{2}(\mathbb{Z})\backslash PSL_{2}(\mathbb{R})\) to any lattice in \(PSL_{2}(\mathbb{R})\). The proof combines the asymptotic result of Strombergsson [11] and Venkatesh's method [12] with the approach of Sarnak and Ubis of approximating horocycle pieces with periodic horocycles. The key step is to establish a dichotomy between \(\{\xi h(t),t\in[0,T]\}\) having good equidistribution in \(\Gamma\backslash PSL_{2}(\mathbb{R})\) and it being approximable by closed horocycle pieces with small period. In a follow-up paper, a similar approach will be used to show equidistribution of \(\xi h(n^{1+\gamma})\) for small \(\gamma>0\), generalizing Venkatesh's result [12] to non-compact \(\Gamma\).
## 1. Introduction
_General Introduction._ Let \(G=PSL_{2}(\mathbb{R})\) and \(\mu_{G}\) be the Haar measure on \(G\). Let \(\Gamma\) be a lattice in \(G\), that is, a discrete subgroup such that \(\mu_{X}\), the projection of the Haar measure to \(X=\Gamma\backslash G\), is finite (and assumed to fulfill \(\mu_{X}(X)=1\)). The dynamics of the space \(X\) with respect to \(\mu_{X}\) have been studied extensively in recent years, in part because of the strong connection to Diophantine approximation in the case of \(\Gamma=PSL_{2}(\mathbb{Z})\).
The group \(G\) can be parametrized in terms of
\[h(x):=\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\quad a(y):=\begin{pmatrix}y^{\frac{1}{2}}&0\\ 0&y^{-\frac{1}{2}}\end{pmatrix}\quad k(\theta):=\begin{pmatrix}\cos\theta& \sin\theta\\ -\sin\theta&\cos\theta\end{pmatrix},\]
which induces a natural left-invariant metric \(d_{G}\) on \(G\). This metric descends to \(X\) via \(d_{X}(\Gamma g,\Gamma h)=\inf_{\gamma\in\Gamma}d_{G}(g,\gamma h)\).
The geodesic flow
\[g_{t}(g):=ga(e^{t})=\begin{pmatrix}ae^{\frac{t}{2}}&be^{-\frac{t}{2}}\\ ce^{\frac{t}{2}}&de^{-\frac{t}{2}}\end{pmatrix}\]
and the horocycle flow
\[h_{t}(g):=gh(t)=\begin{pmatrix}a&b+at\\ c&d+ct\end{pmatrix}\]
act ergodically on \((X,\mu_{X})\). The horocycle orbits were found to exhibit a very rigid behaviour. Furstenberg showed that \(\mu_{X}\) is uniquely ergodic
under \(h_{t}\) when \(X\) is compact [3]. For a general \(\Gamma\), there are periodic orbits under \(h_{t}\), but these carry all other invariant measures. Precisely, Dani and Smillie showed that both \(h_{t}(\xi),t\in\mathbb{R}\) and \(h_{n}(\xi),n\in\mathbb{N}\) equidistribute with respect to \(\mu_{X}\) unless \(\xi\) is periodic, in the sense that \(t\mapsto h_{t}(\xi)\) is periodic [1]. As all periodic orbits are isomorphic to the torus, questions are reduced to questions on tori if the point \(\xi\) is periodic.
With these questions settled, questions about the equidistribution of other orbits were raised. Shah conjectured that \(\xi h(n^{\alpha}),n\in\mathbb{N}\) equidistributes with respect to \(\mu_{X}\) for all \(\alpha\geq 1\) and \(\xi\) non-periodic [10]. Margulis conjectured that \(\xi h(p)\) would equidistribute with respect to \(\mu_{X}\), where \(p\) is running over the primes and \(\xi\) is non-periodic [7]. This paper provides partial progress in the latter question by proving that primes do not concentrate anywhere.
The way to showing non-concentration is through controlling averages of the form \(\frac{s}{T}\sum_{sn\leq T}f(ph(sn))\) and applying sieve methods. One natural way to do this is to use a smooth approximation of the primes and show equidistribution of this object, which we will do in this paper. We will take the pseudo random measure \(\nu\), which gets introduced below. Hopefully, the reader will find this presentation more coherent and easier to generalize. Using the Selberg sieve instead like Sarnak and Ubis do in [9] would also be possible.
Green and Tao used their pseudo random measure \(\nu\) as a smooth approximation of the primes to prove their celebrated theorem of the prime numbers containing arbitrarily long arithmetic progressions [5]. Like in the case of proving properties in function spaces through approximation by smooth functions, introducing \(\nu\) allowed them to split the proof of some properties of the primes into two parts: First, showing that \(\nu\) has certain properties (like the pseudo randomness condition of the \(k\)-linear forms condition in [5]) and second, that one can recover properties of the primes through these properties of \(\nu\) (the relative Szemeredi theorem in the case of arithmetic progressions). We will show in this paper that horocycle orbits along \(\nu\) equidistribute.
Goldston and Yildirim defined
\[\Lambda_{R}(n):=\sum_{k<R,\ k|n}\mu(k)\log\left(\frac{R}{k}\right),\]
modeled after the standard convolution identity for the von Mangoldt-function [4]. Green and Tao [5] defined the pseudo random measure \(\nu\) by
\[\nu(n):=\frac{1}{\log R}\Lambda_{R}^{2}(n).\]
Due to the particularities of finding arithmetic sequences, they restricted \(\nu\) to a coprime residue class of some big integer \(W\), which
is commonly called the \(W\)-trick. In our setting, it will not be necessary.
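For readers who want to experiment with these weights, the following is a minimal, self-contained Python sketch of \(\Lambda_{R}\) and \(\nu\) (without the \(W\)-trick, as in the text). The parameters \(N\), \(\theta\) and the test prime are illustrative choices only and do not satisfy the size restrictions imposed on \(\theta\) in Theorem 1.1 below.

```python
# A minimal numerical sketch of the Goldston-Yildirim divisor sum Lambda_R and
# of the weight nu(n) = Lambda_R(n)^2 / log R.  Pure Python, no dependencies;
# all sizes below are illustrative only.
from math import log

def mobius(n):
    # Moebius function by trial division: 0 if n has a squared prime factor,
    # otherwise (-1)^(number of distinct prime factors).
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:          # a squared prime factor
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def lambda_R(n, R):
    # Lambda_R(n) = sum over divisors k < R of n of mu(k) * log(R / k).
    return sum(mobius(k) * log(R / k)
               for k in range(1, int(R) + 1) if k < R and n % k == 0)

def nu(n, R):
    return lambda_R(n, R) ** 2 / log(R)

N, theta = 10_000, 0.25
R = N ** theta
# For a prime p > R the only divisor of p below R is 1, so Lambda_R(p) = log R
# and nu(p) = log R; this is the fact used in the proof of Corollary 1.2.
print(nu(9973, R), log(R))
# The average of nu over [1, N] is roughly 1 (cf. Theorem 2.4 below).
print(sum(nu(n, R) for n in range(1, N + 1)) / N)
```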
Define furthermore \(\operatorname{dist}(\Gamma g):=d_{X}(\Gamma g,p_{0})\) where \(p_{0}\in X\) could be any point, for example \(p_{0}=\Gamma\) (for the definition of \(d_{X}\), see Section 3). For a function \(f\in C^{4}(X)\) let \(\|f\|_{W^{4}}\) be its Sobolev norm in the Hilbert space \(W^{4,2}\) involving the fourth derivative and let \(\|f\|_{\infty,j}\) be the supremum norm of the \(j\)-th derivatives. Define
\[\|f\|:=\|f\|_{W^{4}}+\|f\|_{\infty,1}+\|f\|_{\infty,0}.\]
The main result of this paper is:
**Theorem 1.1**.: _Let \(\Gamma\subset PSL_{2}(\mathbb{R})\) be a lattice. Let \(R=T^{\theta}\), where \(0<\theta\leq\frac{\beta}{40}\) is fixed. Here, \(\beta\) is the constant depending only on \(\Gamma\) appearing in Theorem 1.4. For a non-periodic \(\xi\in X\) and a function \(f\in C^{4}(X)\) with \(\|f\|=1\),_
\[\left|\frac{1}{T}\sum_{n\leq T}f(\xi h(n))\nu(n)-\int f\;d\mu_{X}\right|\ll r ^{-\theta}+\frac{\log\log R}{\log R},\]
_where \(r=T\exp(-\operatorname{dist}(g_{\log T}(\xi)))\) and the implied constant depends only on \(\Gamma\). Because \(r\to\infty\) as \(T\to\infty\), the sequence \(\xi h(n)\) equidistributes along \(\nu\)._
The biggest possible size for \(\theta\) is \(\frac{\beta}{40}\), with \(\beta\) the constant from Theorem 1.4. The second error term is due to the normalization of \(\nu\), while \(r\) rules the equidistribution on the horocycle segment up to time \(T\).
Theorem 1.1 recovers some of the properties of the primes. An immediate corollary is the generalization of the result proved by Sarnak and Ubis in [9] for \(PSL_{2}(\mathbb{Z})\).
**Corollary 1.2**.: _Take any non-periodic point \(\xi\in X\). For any non-negative function \(f\in C^{4}(X)\) with \(\|f\|=1\),_
\[\frac{1}{\pi(T)}\sum_{p\leq T}f(\xi h(p))\leq\frac{1}{\theta}\int f\;d\mu_{X} +O\left(r^{-\theta}+\frac{\log\log R}{\log R}\right)\]
_where the sum is over primes, and \(r\) and \(\theta\) are as in Theorem 1.1. In particular, any limit measure of the primes is absolutely continuous with respect to \(\mu_{X}\) and the primes are dense in a set of positive measure._
Corollary 1.2 could be proved without using Theorem 1.1 by adapting the proof of Theorem 1.1. One would use sieve methods instead of the normalization of \(\nu\) and the Siegel-Walfisz theorem instead of the Siegel-Walfisz type Theorem 2.4 for \(\nu\).
Unfortunately, these are all the properties of the orbit along the primes that are obtainable through Theorem 1.1. It falls short of showing both density and equidistribution of primes. The reason is that to recover properties of the primes, one needs stronger properties of \(\nu\) (like the \(k\)-linear forms
condition in [5], which is stronger than containing arithmetic progressions of length \(k\)). In the case of equidistribution of primes, one would probably need good equidistribution of \(\nu\) not along simple sums, but along sums of the type
\[\sum_{n\leq\frac{N}{s_{1}s_{2}}}f(\xi h(ns_{1}))f(\xi h(ns_{2}))\nu(n)\]
for natural numbers \(s_{1},s_{2}\). As sums of these types are not well understood at all, even when replacing \(\nu\) by \(1\), showing equidistribution of primes using \(\nu\) seems to require significant new input.
The methods employed may be of interest beyond the question in this paper; in particular, the following result may have other applications. It will be instrumental in the proof of equidistribution of \(\xi h(n^{1+\gamma})\) for small \(\gamma\) performed in a follow-up paper.
**Lemma 1.3**.: _Let \(p\in X\) and \(T\geq 0\). Let \(\delta>0\) and \(K\leq T\). There is an interval \(I_{0}\subset[0,T]\) of size \(|I_{0}|\leq\delta^{-1}K^{2}\) such that: For all \(t_{0}\in[0,T]\backslash I_{0}\), there is a segment \(\{\xi h(t),t\leq K\}\) of a closed horocycle approximating \(\{ph(t_{0}+t),0\leq t\leq K\}\) of order \(\delta\), in the sense that_
\[\forall 0\leq t\leq K:\quad d_{X}\left(ph(t_{0}+t),\xi h(t)\right)\leq\delta.\]
_The period \(P=P(t_{0},p)\) of this closed horocycle is at most \(P\ll r\), where \(r=T\exp(-\mathrm{dist}(g_{\log T}(p)))\) is as in Theorem 1.1. Moreover, one can assure \(P\gg\eta^{2}r\) for some \(\eta>0\) by weakening the bound on \(I_{0}\) to \(|I_{0}|\leq\max\left(\delta^{-1}K^{2},\eta T\right)\)._
This result is useful because it bridges the gap between the compact case with good asymptotics and the periodic case in some sense.
_Relation to Previous Papers._ Venkatesh showed that for cocompact lattices, \(\xi h(ns),1\leq n\leq T\), is equidistributed with error \(T^{-\epsilon}\) as long as \(s\ll T^{\epsilon}\)[12]. He deduced that \(\xi h(n^{1+\gamma})\) equidistributes for sufficiently small \(\gamma>0\) (where \(\gamma\) and \(\epsilon\) depend only on the lattice). This is the result that will be generalized to all lattices \(\Gamma\subset PSL_{2}(\mathbb{R})\) in the aforementioned follow-up paper.
Venkatesh's proof combined the quantitative equidistribution result
\[\left|\frac{1}{T}\int_{0}^{T}f(\xi h(t))\;dt-\int f\;d\mu_{X}\right|\ll_{f}T^ {-2\epsilon}\]
for \(f\in C^{\infty}(X)\) (see Lemma 9.4 in [12]; the ideas go back to Ratner) with a trick to bound the Fourier coefficients. With an argument as in the proof of Proposition 5.1, the theorem of Venkatesh immediately implies Theorem 1.1 for cocompact \(\Gamma\).
Strombergsson proved an effective equidistribution theorem for the
horocycle flow in the non-compact case [11], showing that
\[\left|\frac{1}{T}\int_{0}^{T}f(\xi h(t))\ dt-\int f\ d\mu_{X}\right|\ll_{f}r^{- \alpha}, \tag{1}\]
where \(\alpha\) only depends on \(\Gamma\) and \(r\) is as in Theorem 1.1.
This relates the asymptotics of the horocycle flow at time \(T\) to the location of \(g_{\log T}(\xi)\); the further \(g_{\log T}(\xi)\) is up in some cusp, the worse the asymptotics. Strombergsson's result can be combined with the method of Venkatesh to prove the theorem below, as done for example by Zheng ([13], Theorem 1.2).
**Theorem 1.4**.: _Let \(\Gamma\) be a non-compact lattice in \(G\). Let \(f\in C^{4}(X)\) with \(\|f\|<\infty\) and \(1\leq s<T\). Then_
\[\left|\frac{s}{T}\sum_{1\leq j\leq T/s}f(\xi h(sj))-\int f\ d\mu_{X}\right|\ll s^{\frac{1}{2}}r^{-\frac{\beta}{2}}\|f\|\]
_for any initial point \(\xi\in X\), where \(r=T\exp(-\mathrm{dist}(g_{\log T}(\xi)))\). The parameter \(\frac{1}{6}>\beta>0\) and the implied constant depend only on \(\Gamma\)._
If \(r\gg T^{\epsilon}\) for some \(\epsilon>0\), the situation is very similar to the compact case. The set of points \(\xi\) with \(r\gg T^{\epsilon}\) for all \(T\) has full measure (by geodesic excursion rates, compare the introduction of [11]). Thus, if one restricts the analysis to a subset of initial points of full measure, as done by Zheng in [13], Theorem 1.4 is all that is needed to show equidistribution of \(n^{1+\gamma}\). Similarly, McAdam shows density of almost primes in \(SL_{n}(\mathbb{Z})\backslash SL_{n}(\mathbb{R})\) under a Diophantine condition which is equivalent to \(r\gg T^{\epsilon}\) in two dimensions [8]; compare Remark 4.3.
Statements about density are not hard to get from Theorem 1.4, because any non-periodic \(\xi\) has a sequence \(T_{i}\to\infty\) such that \(r(T_{i})\gg T_{i}\). This holds because \(g_{\log T}(\xi)\) returns to a compact set infinitely often (compare Lemma 4.2). Explicitly, one immediately gets density of \(\xi h(n^{1+\gamma})\) in \(X\), density of almost primes in \(X\) and density of primes in a set of positive measure (shown as in the proof of Proposition 5.1) from Theorem 1.4.
Sarnak and Ubis chose a different approach and analyzed the quantitative equidistribution of \(\xi h(sn)\), for _all_ \(\xi\) and _all_ times \(T\) [9]. They did this in the case \(\Gamma=PSL_{2}(\mathbb{Z})\), defining a fundamental period \(y_{T}\). This fundamental period is based on the imaginary part of the horocycle segment \(\xi h([0,T])\) and turns out to be closely related to \(r\). They then proceeded to show that the horocycle segment \(\xi h(t),0\leq t\leq T\) is approximable by a periodic horocycle with period at most \(y_{T}\). Analyzing the situation on the periodic horocycle separately, they deduced that
for non-negative \(f\in C^{4}\)
\[\frac{1}{\pi(T)}\sum_{p\leq T}f(\xi h(p))\leq 10\int f\;d\mu_{X}+o_{T}(1)\]
for all \(T\), which implies non-concentration of primes. They did not use Strombergsson's result and Theorem 1.4, but used estimates of automorphic forms to obtain similar asymptotics for \(r\gg T^{\epsilon}\).
_Strategy._ We will combine Theorem 1.4 with the approach of Sarnak and Ubis to generalize their result to all lattices \(\Gamma\subset G\). The main step is to generalize their fundamental period from \(\Gamma=PSL_{2}(\mathbb{Z})\) to arbitrary \(\Gamma\). This approach culminates in Lemma 1.3. This lemma will allow us to reduce the analysis to closed horocycles in the cases when \(r\ll T^{\frac{1}{20}}\) and the asymptotics are bad. On those, we will use the Siegel-Walfisz type Theorem 2.4 for \(\nu\) to finish the proof.
_Structure of this paper._ Chapter 2 contains the proof of the Siegel-Walfisz theorem for \(\nu\) and ends with a short proof of Corollary 1.2 assuming Theorem 1.1. Chapter 3 recalls basics of the dynamics on quotients of \(G\) and their relation to quotients of the hyperbolic plane. In Chapter 4, Lemma 1.3 is proven by generalizing the fundamental period \(y_{T}\) to all lattices \(\Gamma\), establishing \(r\sim y_{T}^{-1}\) and analyzing horocycle segments in \(PSL_{2}(\mathbb{R})\).
In Chapter 5, Theorem 1.4, Lemma 1.3 and Theorem 2.4 are combined to prove Theorem 1.1.
_Notation._ Elements and flows in \(G=PSL_{2}(\mathbb{R})\) are denoted by
\[h(x)=\begin{pmatrix}1&x\\ 0&1\end{pmatrix},\quad a(y)=\begin{pmatrix}y^{\frac{1}{2}}&0\\ 0&y^{-\frac{1}{2}}\end{pmatrix},\quad k(\theta)=\begin{pmatrix}\cos\theta& \sin\theta\\ -\sin\theta&\cos\theta\end{pmatrix},\]
\(g_{t}(g)=ga(e^{t})\) and \(h_{t}(g)=gh(t)\). We set \(e(t):=\exp(2\pi it)\).
A fundamental domain of \(X=\Gamma\backslash G\) is chosen to be the unit tangent bundle \(E=T_{1}F\) of a connected fundamental domain \(F\) of the action of \(\Gamma\) on the upper half plane \(\mathbb{H}\). The open interior is denoted by \(F^{\rm o}\) and the closure by \(\overline{F}\). For \(g\in G\), \(g.z=\frac{az+b}{cz+d}\), and \(g.i\) is the projection to \(\mathbb{H}\). When we write \(\operatorname{Im}(g)\), we mean \(\operatorname{Im}(g.i)\).
The objects \(r\), \(\operatorname{dist}(\xi)\) and the norm \(\|f\|\) of \(f\in C^{4}(X)\) are defined before Theorem 1.1. The definition of \(y_{T}\) can be found in Chapter 4 on Page 14.
The inequality \(f\ll g\) and \(f=O(g)\) mean that there is an absolute constant \(C\) such that \(|f(t)|\leq C|g(t)|\) for all \(t\). Write \(f\sim g\) if \(f\ll g\) and \(g\ll f\). Unless equipped with an index to indicate further dependence, the implicit constants only depend on \(\Gamma\).
The divisor function \(\tau(n)\) counts the divisors of \(n\) and the Euler totient
function is denoted by \(\phi(n)\). The Mobius function \(\mu\) is supported on square free numbers and is defined by
\[\mu(1)=1,\quad\mu(p_{1}\dots p_{n})=(-1)^{n}\]
for distinct prime numbers \(p_{1},\dots,p_{n}\). Its square \(\mu^{2}\) is the characteristic function of square free numbers. For integers \(e\) and \(d\), the least common multiple is denoted by \([e,d]\) and the greatest common divisor by \((e,d)\).
_Acknowledgements._ This text was written as my master's thesis at the Hebrew University in Jerusalem. I am very grateful for the warm welcome, the superb learning environment and the generous support I received at the Hebrew University, especially from Hillel Furstenberg, Elon Lindenstrauss, Shahar Mozes, Jane Turner and my advisor Tamar Ziegler.
Initially, I came to Jerusalem for an exchange year. After an amazing year, I decided to finish my entire degree in Jerusalem. Many thanks to the entire department and especially to Elon and Tamar, who organized support for me in my second year. I am thankful for the many discussions with Tamar about the project and for Elon's open ear for questions whenever I had any.
One of the most astonishing experiences of this year for me was a one-on-one reading course with Hillel about his proof of Szemeredi's theorem. I learned a lot from his explanations of the proof and his views on the life of a mathematician in general, patiently delivered in his small office amidst piles of books and theses, veiled in layers of chalk dust. I am very happy that I came to a university where even the most senior researchers (Hillel was 83 at the time) still come to their offices and gladly teach young students.
A big thank you to the anonymous referee, whose review improved this paper tremendously. The suggested tweaks to the proofs made the ideas much clearer and saved close to 10 pages of calculation. If the reader finds the proofs intuitive, the changes made after the review have a substantial part in that.
Moreover, the review did not only help the reader. It also made the ideas clearer to me and got me thinking about the material again. This led me to a proof of equidistribution of \(ph(n^{1+\gamma})\), which will be the subject of a follow-up paper. Without the very helpful review, this would in all likelihood not have happened.
Lastly, I want to thank Dan Goldston, Andreas Strombergsson, Peter Sarnak and especially Adrian Ubis for their helpful replies to questions I asked.
## 2. Properties of \(\nu\)
In this section, we are going to derive the Siegel-Walfisz type Theorem 2.4 for \(\nu\) and prove that Theorem 1.1 implies Corollary 1.2.
**Lemma 2.1**.: _(Lemma 2.1 in [4]) Let \(R>0,k\in\mathbb{N}\) such that \(\log(k)\ll\log(R)\). Then_
\[\sum_{d\leq R,(d,k)=1}\frac{\mu(d)}{d}\log\left(\frac{R}{d}\right)=\frac{k}{ \phi(k)}+O\left(\frac{1}{\log^{3}(R)}\right)\]
_and_
\[\sum_{d\leq R,(d,k)=1}\frac{\mu(d)}{\phi(d)}\log\left(\frac{R}{d}\right)= \mathfrak{S}_{2}(k)+O\left(\frac{1}{\log^{3}(R)}\right),\]
_where \(\mathfrak{S}_{2}\) is the singular series from the Goldbach conjecture, supported on positive even numbers and given by \(\mathfrak{S}_{2}(2n)=2C_{2}\prod_{p|n,p>2}\left(\frac{p-1}{p-2}\right)\) with \(C_{2}=\prod_{p>2}\left(1-\frac{1}{(p-1)^{2}}\right)\)._
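The first identity of Lemma 2.1 is easy to probe numerically. The sketch below (with an illustrative value of \(R\), not taken from the paper) compares the truncated divisor sum with \(k/\phi(k)\) for a few moduli \(k\).

```python
# Numerical sanity check of the first identity in Lemma 2.1:
#   sum_{d <= R, (d,k)=1} mu(d)/d * log(R/d)  is close to  k/phi(k).
from math import log, gcd

def factorize(n):
    # Prime factorization by trial division; returns {prime: exponent}.
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mobius(n):
    f = factorize(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def phi(n):
    result = n
    for p in factorize(n):
        result = result // p * (p - 1)
    return result

def truncated_sum(R, k):
    return sum(mobius(d) / d * log(R / d)
               for d in range(1, R + 1) if gcd(d, k) == 1)

R = 10_000
for k in (1, 2, 6, 30):
    print(k, round(truncated_sum(R, k), 4), k / phi(k))
```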
The next lemma from [4] is cited here only in the case \(j=1\) with simplified error terms. The validity of the simplifications can be seen from their remarks after Lemma 2.2 and the fact that \(\mathfrak{S}_{2}(k)\ll\tau(k)\).
**Lemma 2.2**.: _(Lemma 2.4 in [4]) Let \(R\geq 1\) and \(k\in\mathbb{N}\) such that \(\log k\ll\log R\). Then_
\[\sum_{d\leq R,(d,k)=1}\frac{\mu^{2}(d)}{\phi(d)}\mathfrak{S}_{2}(dk)=\log(R)+ O\left(\log\log 3k\right).\]
These lemmas can be combined in a similar way to the proof of Theorem 5.1 in [4] to yield the following proposition.
**Proposition 2.3**.: _Let \(I\) be an interval in \(\mathbb{N}\) and \(R>1\). Let \(q\in\mathbb{N}\) such that \(\log q\ll\log R\) and let \(j\leq q\) be coprime to \(q\). Then_
\[\frac{1}{|I|}\sum_{n\in I}\Lambda_{R}^{2}(qn+j)=\frac{q}{\phi(q)}\log R+\frac {q}{\phi(q)}O\left(\log\log R\right)+O\left(\frac{R^{2}}{|I|}\right).\]
Proof.: Note that for any integer \(k\),
\[\sum_{\begin{subarray}{c}n\in I\\ k|qn+j\end{subarray}}1=\begin{cases}\frac{|I|}{k}+O(1),\ \ (q,k)=1\\ 0,\ \ \text{else}\end{cases}.\]
To bound the sums of the appearing error terms, we record for later use that
\[\sum_{k\leq R}\frac{1}{k\log\left(R/k\right)}\leq\sum_{j\leq\log R}\ \sum_{\frac{R}{2^{j+1}}<k\leq\frac{R}{2^{j}}}\frac{1}{k\log\left(2^{j}\right)}=O(\log\log R) \tag{2}\]
and, similarly, using the well known bound \(\frac{1}{\phi(k)}\ll\frac{\log\log k}{k}\),
\[\sum_{k\leq R}\frac{1}{\phi(k)\log^{2}\left(R/k\right)}=O(\log\log R). \tag{3}\]
So let us bound the terms. Unboxing the definition, we see that
\[\sum_{n\in I}\Lambda_{R}^{2}(qn+j) =\sum_{d,e\leq R}\mu(d)\mu(e)\log\left(\frac{R}{d}\right)\log \left(\frac{R}{e}\right)\sum_{\begin{subarray}{c}n\in I\\ d,e|qn+j\end{subarray}}1\] \[=O(R^{2})+|I|\sum_{\begin{subarray}{c}d,e\leq R\\ (de,q)=1\end{subarray}}\frac{\mu(d)\mu(e)\log\left(\frac{R}{d}\right)\log \left(\frac{R}{e}\right)}{[d,e]}\]
where \([d,e]\) is the least common multiple. Denote by \(\sum^{\prime}\) a sum in which all summation variables are coprime to \(q\) and to each other. We find by using Lemma 2.1 that
\[\sum_{\begin{subarray}{c}d,e\leq R\\ (de,q)=1\end{subarray}}\frac{\mu(d)\mu(e)\log\left(\frac{R}{d}\right)\log \left(\frac{R}{e}\right)}{[d,e]}\] \[=\sum_{\begin{subarray}{c}m\leq R\\ md,me\leq R\end{subarray}}^{\prime}\frac{\mu^{2}(m)}{m}\frac{\mu(d)}{d}\frac{ \mu(e)}{e}\log\left(\frac{R}{md}\right)\log\left(\frac{R}{me}\right)\] \[=\sum_{md\leq R}^{\prime}\frac{\mu^{2}(m)}{m}\frac{\mu(d)}{d}\log \left(\frac{R}{md}\right)\sum_{\begin{subarray}{c}e\leq R/m\\ (e,mdq)=1\end{subarray}}\frac{\mu(e)}{e}\log\left(\frac{R/m}{e}\right)\] \[=\frac{q}{\phi(q)}\left(\sum_{md\leq R}^{\prime}\frac{\mu^{2}(m) }{\phi(m)}\frac{\mu(d)}{\phi(d)}\log\left(\frac{R}{md}\right)\right)+O(\log \log R),\]
where (2) was used to bound the error sum coming from Lemma 2.1. Applying Lemma 2.1, to the main term, we find
\[\sum_{md\leq R}^{\prime}\frac{\mu^{2}(m)}{\phi(m)}\frac{\mu(d)}{ \phi(d)}\log\left(\frac{R}{md}\right)=\sum_{\begin{subarray}{c}m\leq R\\ (m,q)=1\end{subarray}}\frac{\mu^{2}(m)}{\phi(m)}\sum_{\begin{subarray}{c}d \leq R/m\\ (d,mq)=1\end{subarray}}\frac{\mu(d)}{\phi(d)}\log\left(\frac{R/m}{d}\right)\] \[=\sum_{\begin{subarray}{c}m\leq R\\ (m,q)=1\end{subarray}}\frac{\mu^{2}(m)}{\phi(m)}\mathfrak{S}_{2}(mq)+O\left( \log\log R\right),\]
where the error term was bounded with the help of (3). Applying Lemma 2.2 finishes the proof.
**Theorem 2.4** (Siegel-Walfisz for \(\nu\)).: _Let \(N\in\mathbb{N}\) and let \(\nu(n)=\frac{\Lambda_{R}^{2}(n)}{\log R}\) with \(R\) of size \(N^{\epsilon}\). For any \(q\in\mathbb{N}\), any interval \(I\subset[0,N]\) of size
\(|I|\geq qR^{3}\) and any \(q\)-periodic function \(f\),_
\[\frac{1}{|I|}\sum_{n\in I}f(n)\nu(n)=\frac{q}{\phi(q)|I|}\sum_{ \begin{subarray}{c}n\in I\\ (n,q)=1\end{subarray}}f(n)+O\left(\frac{\|f\|_{\infty}\log\log R}{\log R}\right),\]
_where \(\|f\|_{\infty}=\max_{r\leq q}|f(r)|\). In particular,_
\[\frac{1}{N}\sum_{n\leq N}\nu(n)=1+O\left(\frac{\log\log R}{\log R}\right)\]
Proof.: Without loss of generality, assume that \(\|f\|_{\infty}=1\). Fix an interval \(I\). For \(q=1\), by Proposition 2.3,
\[\frac{1}{|I|}\sum_{n\in I}\nu(n)=1+O\left(\frac{\log\log R}{\log R}\right)+O \left(\frac{1}{R}\right).\]
For general \(q\leq N\) and some coprime \(j<q\), Proposition 2.3 applied to \(q^{-1}(I-j)\) (the assumption \(\log q\ll\log R\) is satisfied because \(R\) is of size \(N^{\varepsilon}\)) implies that
\[\frac{q}{|I|}\sum_{\begin{subarray}{c}n\in I\\ n\equiv j(q)\end{subarray}}\nu(n)=\frac{q}{\phi(q)}+\frac{q}{\phi(q)}O\left( \frac{\log\log R}{\log R}\right)+O\left(\frac{1}{R}\right).\]
In particular, all residue classes coprime to \(q\) contribute the same amount to the sum; it is thus only left to check that the residue classes which are not coprime have negligible contribution. To see this, average over all residue classes \(j\) coprime to \(q\), giving
\[\frac{1}{|I|}\sum_{\begin{subarray}{c}n\in I\\ (n,q)=1\end{subarray}}\nu(n)=1+O\left(\frac{\log\log R}{\log R}\right)+O\left( \frac{1}{R}\right),\]
and compare with the contribution of all residue classes above.
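As a small illustration of Theorem 2.4, one can average \(\nu\) over a single residue class \(j\) modulo \(q\) with \((j,q)=1\) and compare with \(q/\phi(q)\), as in the displayed identity of the proof. The sketch below uses illustrative sizes (they do not satisfy \(|I|\geq qR^{3}\) or \(R=N^{\epsilon}\) with the exponents of the paper); since the error in Theorem 2.4 decays only like \(\log\log R/\log R\), the agreement is rough rather than sharp.

```python
# Averaging nu over the progression n = j (mod q), (j, q) = 1, and comparing
# with q/phi(q); all sizes are illustrative only.
from math import log, gcd

def mobius(n):
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

N, R, q, j = 200_000, 50, 7, 3
MU = {k: mobius(k) for k in range(1, R)}   # Moebius values for k < R

def nu(n):
    lam = sum(MU[k] * log(R / k) for k in range(1, R) if n % k == 0)
    return lam * lam / log(R)

phi_q = sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)
avg = q / N * sum(nu(n) for n in range(j, N + 1, q))
print(avg, q / phi_q)
```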
To conclude, we will prove Corollary 1.2 assuming Theorem 1.1.
Proof.: By partial summation and the prime number theorem,
\[\frac{1}{\pi(T)}\sum_{p\leq T}f(\xi h(p))=\frac{1}{T}\sum_{n\leq T }f(\xi h(n))\ \tilde{\Lambda}(n)+O\left(\frac{\|f\|_{\infty}}{\log T}\right),\]
where \(\tilde{\Lambda}\) is given by
\[\tilde{\Lambda}(n)=\begin{cases}\log(p),\ n=p\ \text{prime}\\ 0,\ \text{else}\end{cases}.\]
By definition of \(\nu\), for all primes \(p>R\)
\[\nu(p)=\log R=\theta\log T.\]
In particular, \(\tilde{\Lambda}(n)\leq\frac{1}{\theta}\nu(n)\) on the interval \([T^{\theta},T]\). The corollary follows from Theorem 1.1.
## 3. Basics of the Dynamics on Quotients of the Hyperbolic Plane
In this section, we recall some basics of the hyperbolic plane. Any matrix \(g\in G\) can be uniquely written as
\[g=h(x)a(y)k(\theta)\]
for \(x\in\mathbb{R},y>0,\theta\in[-\nicefrac{{\pi}}{{2}},\nicefrac{{\pi}}{{2}})\); this is the Iwasawa parametrization. Moreover, \(G\) has a natural left-invariant metric given by
\[d_{G}t^{2}=\frac{dx^{2}+dy^{2}}{y^{2}}+d\theta^{2}.\]
The upper half plane \(\mathbb{H}\) carries the hyperbolic metric, which is invariant under the action of \(G\) via Mobius transformations \(z\mapsto g.z\). This action lifts to an action on the unit tangent bundle \(T_{1}\mathbb{H}\) given by
\[\begin{pmatrix}a&b\\ c&d\end{pmatrix}.(z,v)=\left(\frac{az+b}{cz+d},\frac{v}{(cz+d)^{2}}\right).\]
There is a natural bijection \(PSL_{2}(\mathbb{R})\to T_{1}\mathbb{H}\) given by
\[h(x)a(y)k(\theta)\mapsto\left(x+iy,e^{i2\theta}\right)\]
which is isometric with respect to the metric on \(T_{1}\mathbb{H}\) induced by the hyperbolic metric on \(\mathbb{H}\). The measure
\[\mu_{G}=\frac{dxdy}{y^{2}}\frac{d\theta}{\pi}\]
is a Haar measure on \(G\) (\(G\) is unimodular, so there is no distinction between a left and right Haar measure).
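The Iwasawa coordinates \((x,y,\theta)\) of a matrix are used constantly in what follows. The short sketch below computes them for a determinant-one matrix and checks the reconstruction \(g=h(x)a(y)k(\theta)\) up to sign (i.e. in \(PSL_{2}\)); numpy is used only for the matrix arithmetic, and the example matrix is arbitrary.

```python
# Iwasawa decomposition g = h(x) a(y) k(theta) of a real 2x2 matrix with
# determinant 1, viewed in PSL_2 (so the reconstruction may differ by a sign).
import numpy as np

def h(x): return np.array([[1.0, x], [0.0, 1.0]])
def a(y): return np.array([[y ** 0.5, 0.0], [0.0, y ** -0.5]])
def k(t): return np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

def iwasawa(g):
    # The bottom row (c, d) determines y and theta; the base point
    # g.i = (a i + b)/(c i + d) = x + i y gives x.
    (aa, bb), (cc, dd) = g
    y = 1.0 / (cc ** 2 + dd ** 2)
    x = (aa * cc + bb * dd) * y
    theta = np.arctan2(-cc, dd)
    return x, y, theta

g = np.array([[2.0, 1.0], [3.0, 2.0]])          # determinant 1
x, y, theta = iwasawa(g)
recon = h(x) @ a(y) @ k(theta)
print(np.allclose(recon, g) or np.allclose(recon, -g))   # True
```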
Fix a lattice \(\Gamma\) in \(G\) and set \(X:=\Gamma\backslash G\). This space carries a left-invariant metric
\[d_{X}(\Gamma g_{1},\Gamma g_{2})=\min_{\gamma\in\Gamma}d_{G}(g_{1},\gamma g_{2})\]
and a finite measure \(\mu:=\mu_{X}=\pi_{\#}\mu_{G}\), the push forward of \(\mu_{G}\) under the projection to \(X\). Write \(p.i\) for \(\Gamma g.i\in\Gamma\backslash\mathbb{H}\) with \(p=\Gamma g\in X\).
The fundamental domain \(E\) of \(X\) in \(G\) can be chosen to be the tangent bundle of a convex hyperbolic polygon \(F\subset\mathbb{H}\) (i. e. a polygon in which each edge is a geodesic) with finitely many edges and vertices; this \(F\) is a fundamental domain for \(\Gamma\backslash\mathbb{H}\). Explicitly, one can choose \(F\) to be a Dirichlet domain for any point \(z_{0}\in\mathbb{H}\); that is, the interior of \(F\) is given by
\[F^{\circ}=D(z_{0})=\{z\in\mathbb{H}|d_{G}(z,z_{0})<d_{G}(z,\gamma z_{0})\;\forall\gamma\in\Gamma\backslash\{\mathrm{id}\}\}.\]
The polygon \(F\) might or might not have boundary vertices, i. e. point of adjacency with \(\partial\mathbb{H}=\mathbb{R}\cup\infty\). \(X\) is compact if and only if there are none of those. If there are some, the equivalence class of each boundary vertex with respect to \(\Gamma\) is called a cusp of \(X\) (for example if \(\Gamma=PSL_{2}(\mathbb{Z})\), \(X\) has the single cusp \(\infty\) which is equivalent to every
rational number). For each representative \(r_{i}\in\partial\mathbb{H}\) of each cusp \(\Gamma r_{i}\) there is a fundamental domain \(F\) such that all other boundary vertices of \(F\) are inequivalent to \(r_{i}\); this is because we can take some point far up in the cusp as basic point for the Dirichlet domain. Proofs for all of these statements can be found in chapter 11 of [2].
There are two important flows on \(G\). The first one is the geodesic flow given by
\[g_{t}(g)=ga(e^{t})=\begin{pmatrix}ae^{\frac{t}{2}}&be^{-\frac{t}{2}}\\ ce^{\frac{t}{2}}&de^{-\frac{t}{2}}\end{pmatrix}\]
and the second one is the horocycle flow given by
\[h_{t}(g)=gh(t)=\begin{pmatrix}a&b+at\\ c&d+ct\end{pmatrix}\]
where \(g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\). The flows are well-defined on \(X\) because \(\Gamma\) acts from the left and the flows act from the right. On \(G\) the behaviour of these flows is not very interesting, but on \(X\) it is. As outlined in the introduction, the dynamics with respect to the horocycle flow exhibit a very rigid behaviour.
There are no periodic horocycle orbits in \(X\) if and only if \(X\) is compact. If there are periodic orbits, their structure is as follows:
**Lemma 3.1**.: _(Lemma 11.29 in [2]) Let \(\Gamma\) be a lattice such that \(X\) is non-compact. To every cusp of \(X\) corresponds exactly one one-parameter family of \(h\)-periodic orbits parametrized by \(g_{t}\); in explanation, if \(p\in X\) is such that \(t\mapsto ph(t)\) is periodic, then \(g_{t}(p)\) converges to some cusp of \(X\) as \(t\to\infty\) and all other periodic orbits associated to this cusp contain exactly one element \(g_{t}(p)\) for \(t\in\mathbb{R}\). Furthermore, the orbit \(t\mapsto ph(t)\) is periodic if and only if \(g_{t}(p)\to\infty\), in the sense that \(g_{t}(p)\) leaves any compact subset of \(X\) permanently._
It is shown in the proof (or alternatively, can be seen directly from the statement) that for any boundary vertex of \(F\) there is a \(\gamma\in\Gamma\) conjugated to \(\begin{pmatrix}1&1\\ &1\end{pmatrix}\) which fixes this boundary vertex and generates the subgroup of \(\Gamma\) fixing the vertex. This \(\gamma\) is precisely the one leading to the periodicity of the corresponding orbits. For example for \(\Gamma=PSL_{2}(\mathbb{Z})\) and the sole boundary vertex \(\infty\), this matrix is \(\gamma=\begin{pmatrix}1&1\\ &1\end{pmatrix}\).
If \(p\) is periodic with period \(y\), then \(g_{t}(p)\) is periodic with period \(ye^{-t}\) because of the equation \(g_{t}\circ h_{s}=h_{e^{-t}s}\circ g_{t}\).
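The renormalization identity \(g_{t}\circ h_{s}=h_{e^{-t}s}\circ g_{t}\) quoted above is simply the matrix identity \(h(s)a(e^{t})=a(e^{t})h(e^{-t}s)\); a short numerical check:

```python
# Check of the commutation relation h(s) a(e^t) = a(e^t) h(e^{-t} s),
# which underlies g_t o h_s = h_{e^{-t} s} o g_t.
import numpy as np

def h(x): return np.array([[1.0, x], [0.0, 1.0]])
def a(y): return np.array([[y ** 0.5, 0.0], [0.0, y ** -0.5]])

s, t = 3.7, 1.2
print(np.allclose(h(s) @ a(np.exp(t)), a(np.exp(t)) @ h(np.exp(-t) * s)))
```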
## 4. Approximation by closed horocycles
In this chapter, we will define the fundamental period of a horocycle piece (first defined for \(\Gamma=PSL_{2}(\mathbb{Z})\) in [9]) and explore the connection to effective equidistribution. The ultimate goal of this section is to prove Lemma 1.3.
Let \(n\) be the number of cusps of \(X\). Let \(r_{i}\in\partial\mathbb{H}\) be a representative of one cusp of \(X\) and let \(\gamma_{i}\in\Gamma\) be the corresponding unipotent element fixing \(r_{i}\) and inducing the periodicity of the corresponding horocycle, as in the discussion after Lemma 3.1. Let \(\sigma_{i}\in G\) such that \(\sigma_{i}\gamma_{i}\sigma_{i}^{-1}=h(1)\) and \(\sigma_{i}.r_{i}=\infty\). Explicitly, this \(\sigma_{i}\) consists of a rotation matrix, rotating \(r_{i}\) to \(\infty\) and \(\gamma_{i}\) to some \(h(t_{i})\), and some diagonal element to normalize \(t_{i}=1\).
Define \(y_{i}:G\to\mathbb{R}_{+}\) by
\[y_{i}(g)=\operatorname{Im}(\sigma_{i}g)\]
where we mean in slight abuse of notation the imaginary part of the corresponding basepoint \(\sigma_{i}g.i\) in \(\mathbb{H}\). Note that \(y_{i}\) is only well-defined on \(G\), not on \(X\), and depends on the representative of the cusp.
In the natural parametrization \(h(x)a(y).i\) for \(\mathbb{H}\), every level set with fixed \(y\) is a horizontal line, so a circle with point of tangency \(\infty\) which is invariant under \(h(1)\). We could also parametrize \(\mathbb{H}\) by \(\sigma_{i}^{-1}h(x)a(y).i\), in terms of which any level set is a circle with point of tangency \(r_{i}\) invariant under \(\gamma_{i}\). \(y_{i}(z)\) then is the \(y\)-component of \(z\) in terms of this new parametrization.
Let for \(T>0\)
\[Y_{i}^{T}(g)=\min\left\{y_{i}(gh(t))\middle|0\leq t\leq T\right\}=\min(y_{i}(g),y_{i}(gh(T)))\]
where the second equality follows from the fact that \(\{\sigma_{i}gh(t)|0\leq t\leq T\}\) is a piece of a horocycle orbit and thus a segment of a circle in \(\mathbb{H}\). Define \(y_{i}^{T}:X\to\mathbb{R}_{+}\) by
\[y_{i}^{T}(\Gamma g)=\sup_{\gamma\in\Gamma}Y_{i}^{T}(\gamma g).\]
The supremum is finite and attained for some \(\gamma\) because \(\Gamma\) is discrete and acts properly discontinuous on \(X\). It is independent of the representative of the cusp. Note that \(y_{i}^{0}:\Gamma\backslash\mathbb{H}\to\mathbb{R}_{+}\), i. e. \(y_{i}^{0}\) depends only on the base point \(\Gamma g.i\) of \(\Gamma g\).
Lastly, define
\[y_{T}(p)=\max_{1\leq i\leq n}[y_{i}^{T}(p)].\]
The horocycle piece up to time \(T\) starting in \(p\) will be close to a periodic horocycle with period \(y_{T}^{-1}\). This is called the _fundamental period_ at time \(T\), whose properties we will explore now.
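The quantities \(y_{i}\) and \(Y_{i}^{T}\) are easy to evaluate once a representative matrix is chosen. The sketch below does this in the simplest situation \(\sigma_{i}=\mathrm{id}\) (a cusp at \(\infty\), as for \(\Gamma=PSL_{2}(\mathbb{Z})\)); the supremum over \(\gamma\in\Gamma\), and hence \(y_{i}^{T}\) and \(y_{T}\), depends on the lattice and is not reproduced here.

```python
# y_i(g) and Y_i^T(g) for the cusp at infinity (sigma_i = id):
# y(g) = Im(g.i) = 1/(c^2 + d^2) for det g = 1, and
# Y^T(g) = min over 0 <= t <= T of y(g h(t)), attained at an endpoint
# because t -> g h(t).i traces a circle segment in the upper half plane.
import numpy as np

def h(x): return np.array([[1.0, x], [0.0, 1.0]])

def y_inf(g):
    c, d = g[1]
    return 1.0 / (c ** 2 + d ** 2)

def Y_inf_T(g, T):
    return min(y_inf(g), y_inf(g @ h(T)))

g = np.array([[2.0, 1.0], [3.0, 2.0]])
print(y_inf(g), Y_inf_T(g, 100.0))
```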
**Lemma 4.1** (Parametrization in the Cusps).:
_1. For some small \(\epsilon>0\), each \(C_{i}:=\{p|y_{i}^{0}(p)^{-1}<\epsilon\}\subset X\) is an open neighborhood of the cusp \(\Gamma r_{i}\) and all \(C_{i}\) are pairwise disjoint. The set \(K:=X\backslash\bigcup C_{i}\) is compact._
_2. Any point \(p\) is contained in \(C_{i}\) if and only if there is a \(q\) with the same base point (\(p.i=q.i\) in \(\Gamma\backslash\mathbb{H}\)) such that \(qh(t)\) is periodic with period smaller than \(\epsilon\). In this case, \(qh(t)\) has period \(y_{i}^{0}(p)^{-1}\)._
_3. Fix disjoint \(C_{i}\) and \(K\) as in 1. and take some \(p_{0}\in K\). Then_
\[\exp(d_{X}(p,p_{0}))\sim y_{0}(p)=\max_{1\leq i\leq n}[y_{i}^{0}(p)].\]
_More explicitly, if \(p\in C_{i}\), then \(\exp(d_{X}(p,p_{0}))\sim y_{i}^{0}(p)\) and if \(p\in K\), \(\exp(d_{X}(p,p_{0}))\sim 1\sim y_{0}(p)\)._
The implied constants depend on the choice of \(\epsilon\), but \(C_{i}\) and \(K\) will be fixed from here on. Parts of the lemma have appeared in the literature before; the function \(y_{0}(p)\) is known as the invariant height function, compare (11) in [11]. Part 3. of Lemma 4.1 was stated as Inequality (14) in the same paper, leaving the proof as an exercise. We will prove it here for completeness.
Proof.: As discussed in chapter 3, we can choose a fundamental domain \(E\) such that \(E=T_{1}F\), where the open interior \(F^{\rm o}\) is a Dirichlet domain with boundary vertex \(r_{i}\) such that all other boundary vertices of \(F\) are inequivalent to \(r_{i}\). Then \(\sigma_{i}F\) is a fundamental domain of \(\sigma_{i}\Gamma\sigma_{i}^{-1}\backslash\mathbb{H}\) with boundary vertex \(\infty\) which is inequivalent to all other boundary vertices.
\(\sigma_{i}F\) is a hyperbolic polygon with finitely many sides. Because \(\infty\) is a vertex, two of them must be straight vertical lines. Thus for some big \(B\in\mathbb{R}\), \(\sigma_{i}F\cap\{z|{\rm Im}(z)>B\}\) is a rectangle. Because there are no other
equivalent boundary vertices and \(h(1)\in\sigma_{i}\Gamma\sigma_{i}^{-1}\) by choice of \(\sigma_{i}\), the horizontal line of this rectangle has euclidean length \(1\) and
\[\{z|\text{Im}(z)>B\}=\bigcup_{k\in\mathbb{Z}}h(k)\left(\sigma_{i}F\cap\{z|\text{ Im}(z)>B\}\right).\]
1. Set \(D_{i}=\Gamma\sigma_{i}^{-1}\{z\in\sigma_{i}F\ |\text{Im}(z)>B\}\) and \(C_{i}=T_{1}D_{i}\subset X\). Let \(g\in E\). If \(\Gamma g\in C_{i}\), by definition \(y_{i}^{0}(\Gamma g)\geq y_{i}(g)=\text{Im}(\sigma_{i}g)>B\). If on the other hand \(\Gamma g\notin C_{i}\), because the left translates \(h(k)\{z\in\sigma_{i}F\ |\text{Im}(z)>B\}\) exactly tile the set \(\{z|\text{Im}(z)>B\}\), we must have \(y_{i}(\gamma g)\leq B\) for all \(\gamma\in\Gamma\) (else \(g\) would have two different representatives in the fundamental domain). Thus \(y_{i}^{0}(g)\leq B\). This shows that \(C_{i}=\{p|y_{i}^{0}(p)^{-1}<\epsilon_{i}\}\) with \(\epsilon_{i}=B^{-1}\).
2. Let \(p\) now be periodic with period \(b<\epsilon\) and let \(g\) be the representative of \(p\) in \(E\). Then \(\gamma_{i}g=gh(b)\) by our choice of \(\gamma_{i}\). The orbit of \(\sigma_{i}g\) is periodic with respect to infinity in \(\sigma\Gamma\sigma^{-1}\backslash G\) because
\[h(1)\sigma_{i}g=\sigma_{i}\gamma_{i}\sigma_{i}^{-1}\sigma_{i}g=\sigma_{i}gh(b),\]
But the orbit \(U=\{h(x)a(b^{-1})|0\leq x\leq 1\}\) is also in \(\sigma_{i}E\) and is periodic with period \(b\), so by Lemma 3.1 they have to agree. Because \(b<\epsilon\), we get \(\sigma_{i}p\in U\subset\sigma_{i}C_{i}\) and \(y_{i}^{0}(p)=b^{-1}\).
If on the other hand \(p=\Gamma g\in C_{i}\) such that \(g\in E\), then \(\sigma_{i}g=(z,v)\) for some \(z\) with \(\text{Im}(z)>\epsilon^{-1}\). Set \(q=\sigma_{i}^{-1}(z,i)\) and note that \(qh(t)\) is periodic with period \(\text{Im}(z)^{-1}=y_{i}^{0}(p)^{-1}\) because \(t\mapsto\sigma_{i}qh(t)\) is. There cannot be a \(q^{\prime}\) with \(q^{\prime}.i=p.i\) and a smaller period, because \(\sigma_{i}q^{\prime}.i\) would then have to be at a different level in the fundamental domain \(\sigma_{i}F\).
3. If \(p\in K\), \(\exp(d_{X}(p,p_{0}))\sim 1\) because \(K\) is compact. On the other
hand, \(K.i\subset\Gamma\backslash\mathbb{H}\) is also compact and thus \(\overline{K.i}\subset\overline{F}\) as well. Consequently, the continuous functions \(y_{i}\) have to be bounded from above and below on this set, showing \(y_{0}(p)\sim 1\).
If \(p\in C_{i}\), \(y_{i}^{0}(p)>\epsilon^{-1}\) and \(y_{j}^{0}(p)\leq\epsilon^{-1}\) for \(j\neq i\), so \(y_{0}(p)=y_{i}^{0}(p)\). We can find a point \(\Gamma h\in K\) which is close enough to the cusp \(\Gamma r_{i}\) so that \(r_{i}\) has no equivalent boundary vertices in the Dirichlet domain \(D(h.i)\). Consider the fundamental domain \(E=T_{1}F\) with \(F^{\rm o}=D(h.i)\). Let \(g\) be the representative of \(p\) in \(E\). Note that because \(\sigma_{i}F\) has width at most \(1\), \(\sigma_{i}g\) and \(\sigma_{i}h\) essentially only differ in their \(a\) component in the Iwasawa decomposition, that is, their imaginary part. Furthermore, \(\operatorname{Im}(\sigma_{i}h)\sim 1\) because \(\Gamma h\in K\).
Thus by definition of the Dirichlet domain, the left invariance of the metric and the choice of the fundamental domain,
\[\exp(d_{X}(p,p_{0})) \sim\exp(d_{X}(p,\Gamma h))=\exp(d_{G}(g,h))\] \[=\exp(d_{G}(\sigma_{i}g,\sigma_{i}h))\sim\operatorname{Im}(\sigma _{i}g)=y_{i}(g)=y_{i}^{0}(p).\]
Now we are in the position to establish the connection between the fundamental period and \(r\).
**Proposition 4.2**.: _Let \(p\in X\) and \(T\geq 3\). Let \(C_{i}\), \(K\) as in Lemma 4.1 and fix some \(p_{0}\in K\). Let \(r=Te^{-\operatorname{dist}(g_{\log T}(p))}\) be as in Theorem 1.1. With \(y_{T}\) defined in the beginning of the chapter,_
\[r^{-1}\sim y_{T}.\]
_More explicitly, if \(g_{\log T}(p)\in C_{i}\), then \(r^{-1}(p)\sim y_{i}^{T}(p)\) and if \(g_{\log T}(p)\in K\), \(r^{-1}(p)\sim T^{-1}\sim y_{T}(p)\). All implied constants depend only on the choice of \(C_{i}\) and \(p_{0}\), so ultimately only on \(\Gamma\)._
Proof.: In light of Lemma 4.1, it suffices to show
\[y_{i}^{0}(g_{\log T}(p))T^{-1}\sim y_{i}^{T}(p).\]
Fix a representative \(g\) of \(p\). Pick some \(\gamma\in\Gamma\) and set
\[\begin{pmatrix}a&b\\ c&d\end{pmatrix}:=\sigma_{i}\gamma g\in PSL_{2}(\mathbb{R}),\]
where we choose \(c\) to be non-negative. Then
\[Y_{i}^{T}(\gamma g)=\min\left(\frac{1}{c^{2}+d^{2}},\frac{1}{c^{2}+(Tc+d)^{2}}\right)\]
by definition; this can be simplified because
\[\min\left(\frac{1}{c^{2}+d^{2}},\frac{1}{c^{2}+(Tc+d)^{2}}\right)\sim\min\left( \frac{1}{T^{2}c^{2}},\frac{1}{d^{2}}\right), \tag{4}\]
which is left to the reader as an exercise. The observation that for any \(r,s>0\),
\[\frac{1}{r+s}\sim\min\left(\frac{1}{r},\frac{1}{s}\right)\]
may come in handy showing this.
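For the reader who prefers a quick empirical confirmation over the exercise, the sketch below samples the two sides of (4) for random real values of \(c\geq 0\), \(d\) and \(T\geq 3\) (the values are arbitrary and not tied to any lattice element) and reports the extreme ratios, which stay bounded away from \(0\) and \(\infty\).

```python
# Empirical check of the asymptotic equivalence (4):
# min(1/(c^2+d^2), 1/(c^2+(Tc+d)^2))  vs  min(1/(T^2 c^2), 1/d^2).
import random

random.seed(0)
ratios = []
for _ in range(100_000):
    T = random.uniform(3, 1e6)
    c = random.uniform(1e-9, 1e3)                      # c >= 0 as in the proof
    d = random.choice([-1, 1]) * random.uniform(1e-9, 1e6)
    lhs = min(1 / (c * c + d * d), 1 / (c * c + (T * c + d) ** 2))
    rhs = min(1 / (T * T * c * c), 1 / (d * d))
    ratios.append(lhs / rhs)
print(min(ratios), max(ratios))   # bounded away from 0 and infinity
```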
With (4) in hand,
\[T^{-1}Y_{i}^{0}(\gamma g_{\log T}(g))=\frac{1}{T^{2}c^{2}+d^{2}}\sim\min\left( \frac{1}{T^{2}c^{2}},\frac{1}{d^{2}}\right)\sim Y_{i}^{T}(\gamma g).\]
Taking the supremum over \(\gamma\) finishes the proof.
At this point, let us relate two other papers building on the results of Venkatesh and let us translate the respective conditions on the points into our notation.
**Remark 4.3**.: In two dimensions the Diophantine condition for \(\Gamma g\) (Equation (3.1.c) on page 11 in [8]) of McAdam is
\[\min_{\omega\in\mathbb{Z}^{2}\backslash\{0\}}\max_{0\leq t\leq T}\|\omega gh( t)\|_{\infty}\gg T^{\epsilon}\]
which translated in the notation of the proof is equivalent to
\[\min_{\gamma\in SL_{2}(\mathbb{Z})}\max(|c_{\gamma}|,|d_{\gamma}|,|d_{\gamma} +Tc_{\gamma}|)\gg T^{\epsilon}\]
with \(\begin{pmatrix}a_{\gamma}&b_{\gamma}\\ c_{\gamma}&d_{\gamma}\end{pmatrix}=\gamma g\). As in Proposition 4.2, the left hand side is asymptotically equal to \(y_{T}^{-\frac{1}{2}}\).
Zheng proves equidistribution of \(n^{1+\eta}\) for points fulfilling a \(\kappa\)-Diophantine condition, where \(\eta\) depends on \(\kappa\), [13]. In our notation, this \(\kappa=(\kappa_{1},\ldots,\kappa_{n})\)-condition for \(\kappa_{i}>0\) says that for any cusp there exist \(a_{i},b_{i}>0\) such that either \(|c_{\gamma}|>a_{i}\) or \(|d_{\gamma}|^{\kappa_{i}}|c_{\gamma}|>b_{i}\) for all \(\gamma\). It's easy to check that this condition implies \(Y_{i}^{T}(\gamma g)\ll T^{-\frac{2}{1+\kappa_{i}}}\), so by Proposition 4.2 again \(r\gg T^{\epsilon}\) for \(\epsilon<\min_{i}\frac{2}{1+\kappa_{i}}\).
In both cases Theorem 1.4 thus immediately implies the respective results.
To finish off this section, we prove Lemma 1.3, giving means to approximate the horocycle segment \(ph([0,T])\) with closed horocycles segments of period at most \(y_{T}^{-1}\sim r\). We restate Lemma 1.3 for the convenience of the reader.
**Lemma 1.3**.: _Let \(p\in X\) and \(T\geq 0\). Let \(\delta>0\) and \(K\leq T\). There is an interval \(I_{0}\subset[0,T]\) of size \(|I_{0}|\leq\delta^{-1}K^{2}\) such that: For all \(t_{0}\in[0,T]\backslash I_{0}\), there is a segment \(\{\xi h(t),t\leq K\}\) of a closed horocycle approximating \(\{ph(t_{0}+t),0\leq t\leq K\}\) of order \(\delta\), in the sense that_
\[\forall 0\leq t\leq K:\quad d_{X}\left(ph(t_{0}+t),\xi h(t)\right)\leq\delta.\]
_The period \(P=P(t_{0},p)\) of this closed horocycle is at most \(P\ll r\), where \(r=T\exp(-\mathrm{dist}(g_{\log T}(p)))\) is as in Theorem 1.1._
_Moreover, one can assure \(P\gg\eta^{2}r\) for some \(\eta>0\) by weakening the bound on \(I_{0}\) to \(|I_{0}|\leq\max\left(\delta^{-1}K^{2},\eta T\right)\)._
Proof.: The quantity \(r\) will play no role in the proof; we show that the period of the closed horocycles is bounded by \(P\ll y_{T}^{-1}\) and use Proposition 4.2. Recall that \(y_{T}\) is a maximum over the different cusps and the elements in \(\Gamma\). Let \(\sigma_{i}\) be the rotation with \(\sigma_{i}.r_{i}=\infty\) corresponding to the cusp \(r_{i}\) maximizing \(y_{T}\), that is such that \(y_{T}=y_{i}^{T}\). The rest of the approximation has nothing to do with \(\Gamma\), but is just an observation about approximating horocycle pieces by horizontal lines in \(PSL_{2}(\mathbb{R})\). The period only comes in because in the coordinate system induced by \(\sigma_{i}\), the height of the horizontal line is the same as the period of the horocycle in \(\Gamma\backslash G\).
Let \(g\) be a representative of \(p\) attaining the supremum in the definition of \(y_{i}^{T}\). The horocycle segment \(\{ph(t),0\leq t\leq T\}\) is then a circle segment in the modular plane as sketched in Figure 4. Write
\[\sigma_{i}g=:\begin{pmatrix}a&b\\ c&d\end{pmatrix}\]
and express points on the circle in terms of its peak
\[l:=\sigma_{i}gh\left(-\frac{d}{c}\right)=:(\alpha+iR,-i).\]
In the Iwasawa decomposition, this is (see (2.3) in [9])
\[lh(s)=h\left(\alpha-\frac{Rs}{s^{2}+1}\right)a\left(\frac{R}{s^{2}+1}\right)k( -\mathrm{arccot}\ s).\]
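The base-point part of this parametrization can be verified directly: writing \(\sigma_{i}g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\) with determinant \(1\) and \(c\neq 0\), the peak \(l=\sigma_{i}gh(-d/c)\) lies over \(\alpha+iR\) with \(\alpha=a/c\) and \(R=1/c^{2}\), and \(lh(s)\) lies over the point with the \(x\)- and \(y\)-coordinates displayed above. The following sketch checks this numerically for an arbitrary example matrix (the rotation component \(k(-\operatorname{arccot}s)\) is not checked here).

```python
# Numerical check of the base point of l = sigma_i g h(-d/c) and of the
# x- and y-coordinates of lh(s) claimed in the display above.
import numpy as np

def h(x): return np.array([[1.0, x], [0.0, 1.0]])

def base_point(g):
    # g.i = (a i + b)/(c i + d) in the upper half plane.
    (a, b), (c, d) = g
    z = (a * 1j + b) / (c * 1j + d)
    return z.real, z.imag

a, b, c, d = 2.0, 1.0, 3.0, 2.0                 # det = 1, c != 0
sg = np.array([[a, b], [c, d]])
l = sg @ h(-d / c)
alpha, R = base_point(l)
print(np.isclose(alpha, a / c), np.isclose(R, 1 / c ** 2))

s = 5.0
x, y = base_point(l @ h(s))
print(np.isclose(x, alpha - R * s / (s ** 2 + 1)),
      np.isclose(y, R / (s ** 2 + 1)))
```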
Figure 3. An overview of the definitions, to the left when \(\sigma_{i}g\) and to the right when \(\sigma_{i}gh(T)\) minimizes the imaginary part.
Given some \(s\), we will approximate the horocycle segment \(\{lh(s+t),t\leq K\}\) with the periodic horocycle segment \(\{g_{0}h(t),t\leq K\}\), where \(g_{0}=:h(x_{0})a(y_{0})\) lies over the same point in the modular plane as \(lh(s)\) and the vector of \(g_{0}\) points straight up. The horocycle starting in \(g_{0}\) is then a horizontal line moving right.
It is thus only left to show that for all but a few exceptional \(s\), which will be the ones in an interval around \(0\), this approximation is good. To see this, fix some \(0\leq t\leq K\) and compare
\[lh(s+t)=h\left(\alpha-\frac{R(s+t)}{(s+t)^{2}+1}\right)a\left(\frac{R}{(s+t)^ {2}+1}\right)k(-\text{arccot }(s+t))\]
with
\[g_{0}h(t)=h\left(\alpha-\frac{Rs}{s^{2}+1}\right)a\left(\frac{R}{s^{2}+1} \right)h(t)=h\left(\alpha-\frac{R(s-t)}{s^{2}+1}\right)a\left(\frac{R}{s^{2}+1 }\right).\]
Firstly, note that
\[|\text{arccot}(s+t)|\ll\left|\frac{1}{s+t}\right|\leq\delta\]
provided that \(|s|\geq\delta^{-1}K\). Secondly, note that
\[d_{G}\left(a\left(\frac{R}{(s+t)^{2}+1}\right),a\left(\frac{R}{s^{2}+1} \right)\right)=\left|\log\left(\frac{(s+t)^{2}+1}{s^{2}+1}\right)\right|\ll\delta,\]
for any \(t\leq K\) provided that \(|s|\geq\delta^{-1}K\) because
\[\frac{d}{dt}\log\left(\frac{(s+t)^{2}+1}{s^{2}+1}\right)=\frac{2(s+t)}{(t+s)^ {2}+1}\ll\frac{1}{|s|}\leq\delta K^{-1}.\]
Remembering the left-invariance of the metric and using the triangle inequality, this implies that
\[d_{G}\left(lh(s+t),h\left(\alpha-\frac{R(s+t)}{(s+t)^{2}+1}\right)a\left( \frac{R}{s^{2}+1}\right)\right)\ll\delta.\]
Finally,
\[d_{G}\left(h\left(\alpha-\frac{R(s+t)}{(s+t)^{2}+1}\right)a\left( \frac{R}{s^{2}+1}\right),g_{0}h(t)\right)\] \[=\frac{s^{2}+1}{R}d_{G}\left(h\left(\alpha-\frac{R(s+t)}{(s+t)^{2} +1}\right),h\left(\alpha-\frac{R(s-t)}{s^{2}+1}\right)\right)\] \[=\frac{s^{2}+1}{R}\left|\frac{R(s+t)}{(s+t)^{2}+1}-\frac{R(s-t)}{ s^{2}+1}\right|\] \[=\left|\frac{(s+t)(s^{2}+1)-(s-t)((s+t)^{2}+1)}{(s+t)^{2}+1}\right|\] \[=\left|\frac{st^{2}+t^{3}+2t}{(s+t)^{2}+1}\right|\ll\delta\]
where the last inequality holds provided that \(|s|\geq\delta^{-1}K^{2}\). Putting everything together, we deduce that
\[d_{G}(lh(s+t),g_{0}h(t))\ll\delta\]
provided that \(|s|\geq\delta^{-1}K^{2}\). We then set \(\xi:=\Gamma\sigma_{i}^{-1}g_{0}\), which is a periodic horocycle with period \(y_{0}^{-1}\) as
\[\sigma_{i}^{-1}g_{0}h(y_{0}^{-1})=\sigma_{i}^{-1}h(1)g_{0}=\sigma_{i}^{-1}h(1 )\sigma_{i}\sigma_{i}^{-1}g_{0}=\gamma_{i}\sigma_{i}^{-1}g_{0},\]
where we recall from the beginning of this section that \(\sigma_{i}\) was the element such that \(\sigma_{i}\gamma_{i}\sigma_{i}^{-1}=h(1)\) and \(\gamma_{i}\in\Gamma\) is the unipotent element inducing the periodicity of the closed horocycles corresponding to \(r_{i}\). For any point \(s\) such that \(lh(s)\in\sigma_{i}gh([0,T])\), by definition of the fundamental period, \(y_{0}\geq y_{T}\), so that the period of the horocycle \(\{\xi h(t),t\in\mathbb{R}\}\) is indeed bounded by \(y_{T}^{-1}\).
Recalling \(lh(s)=\sigma_{i}gh\left(\frac{d}{c}+s\right)\), we can then set the exceptional interval \(I_{0}\) to be
\[I_{0}:=\left\{t\in[0,T]:\;\left|\frac{d}{c}+t\right|\leq\delta^{-1}K^{2}\right\}.\]
This ensures that our estimates hold except for \(t\in I_{0}\), and clearly \(|I_{0}|\ll\delta^{-1}K^{2}\).
Regarding the second point, we want to make sure that for any \(s\) outside of an interval around \(0\) we have that
\[\mathrm{Im}(lh(s))=\frac{R}{s^{2}+1}\ll\eta^{-2}y_{T}.\]
Let \(s_{0}\) be such that either \(lh(s_{0})=\sigma_{i}g\) or \(lh(s_{0})=\sigma_{i}gh(T)\), depending on which of the two points minimizes the imaginary part as in the definition of \(y_{T}\); we then have that \(y_{T}=\frac{R}{s_{0}^{2}+1}\).
Now, the points \(s\) such that \(lh(s)\) lies on the horocycle orbit \(\sigma_{i}gh([0,T])\) lie either in the interval \([s_{0},s_{0}+T]\) or in \([s_{0}-T,s_{0}]\), again depending on
which of the two points \(\sigma_{i}g,\sigma_{i}gh(T)\) is minimizing. If \(|s_{0}|\geq 2T\), we have that for any such \(s\)
\[\mathrm{Im}(lh(s))\ll\frac{R}{(|s_{0}|-T)^{2}+1}\ll y_{T}.\]
If not, we can impose \(|s|>\eta T\) to ensure
\[\mathrm{Im}(lh(s))\ll\frac{R}{\eta^{2}T^{2}+1}\ll\frac{y_{T}T^{2}}{\eta^{2}T^{2}}\ll y_{T}\eta^{-2}.\]
We can set \(I_{0}\) as before, but this time with the condition \(|s_{0}+t|\leq\eta T\).
## 5. Equidistribution of \(\nu\)
In this section, we are going to prove Theorem 1.1. We will set \(\theta=\frac{\beta}{40}\), where \(\beta\) is the constant from Theorem 1.4 depending only on the smallest eigenvalue of the Laplacian on \(\Gamma\). In the case \(\Gamma=PSL_{2}(\mathbb{Z})\), this makes \(\frac{1}{2880}\) an admissible value for \(\theta\).
The proof is split into different cases depending on the time parameter \(T\). To start off, we will cover the case of good asymptotics, where \(r\gg T^{\frac{1}{20}}\) and \(g_{\log T}(\xi)\) is far away from all cusps. In this case, Theorem 1.4 is sufficient to prove good equidistribution of \(\sum_{n\leq T/s}f(\xi h(sn))\) for all \(s\leq R=T^{\theta}\). This immediately implies Theorem 1.1 in that case.
**Proposition 5.1**.: _Let \(p\in X\), \(f\) such that \(\|f\|=1\). Then, for all \(T\) such that \(r\gg T^{\frac{1}{20}}\),_
\[\left|\frac{1}{T}\sum_{n\leq T}f(ph(n))\nu(n)-\int f\ d\mu_{X}\right|\ll\frac{ \log\log R}{\log R}.\]
Proof.: Assume that \(\int f\ d\mu=0\), which picks up an error term of \(\frac{\log\log R}{\log R}\) from the normalization of \(\nu\) proven in Theorem 2.4. By Theorem 1.4,
\[\left|\frac{s}{T}\sum_{1\leq sj\leq T}f(ph(sj))\right|\leq s^{\frac{1}{2}}r^{ -\frac{\beta}{2}}.\]
Unpacking the definition of \(\nu\), we find
\[\left|\frac{1}{T}\sum_{n\leq T}f(ph(n))\nu(n)\right|=\left|\sum_{n\leq T}\sum_ {\begin{subarray}{c}e,d\leq R\\ e,d|n\end{subarray}}\frac{\mu(e)\mu(d)f(ph(n))}{T\log R}\log\left(\frac{R}{d} \right)\log\left(\frac{R}{e}\right)\right|\]
\[\leq\sum_{e,d\leq R}\frac{\log R}{[d,e]}\left|\frac{[d,e]}{T}\sum_{n\leq\frac {T}{[e,d]}}f(ph([e,d]n))\right|\leq\sum_{e,d\leq R}\frac{\log R}{\sqrt{[d,e]}} r^{-\frac{\beta}{2}}\]
\[\leq r^{-\frac{\beta}{2}}\sum_{m\leq R}\sum_{e,d\leq\frac{R}{m}}\frac{\log R }{\sqrt{edm}}\ll r^{-\frac{\beta}{2}}\sqrt{R}\log R,\]
where we ordered the terms according to their greatest common divisor \(m\).
In the case that \(r\ll T^{\frac{1}{20}}\), we will use Lemma 1.3 to reduce to closed horocycles of small period and use Theorem 1.4 together with the Siegel-Walfisz type Theorem 2.4 to conclude the proof.
Proof of Theorem 1.1.: Let \(p\in X\) and \(T\) be given such that \(r\ll T^{\frac{1}{20}}\). Assume that \(\|f\|=1\) and fix some \(\delta>0\) to be determined later. Set \(K:=T^{\frac{1}{3}}\). Apply Lemma 1.3 to split the interval \([0,T]\) into intervals \([t_{j},t_{j}+K]\) such that, for all but a \(\delta\) proportion of them, the piece \(\{ph(t_{j}+t),0\leq t\leq K\}\)
is at distance at most \(\delta\) from \(\{\xi_{j}h(t),0\leq t\leq K\}\), where the latter is a closed horocycle of period \(P_{j}\) with \(\delta^{2}r\ll P_{j}\ll r\). We know by Strombergsson's result ([11]) or from Theorem 1.4 that
\[\left|\frac{1}{T}\sum_{n\leq T}f(ph(n))\nu(n)-\int f\;d\mu_{X}\right|\] \[\leq\left|\frac{1}{T}\sum_{n\leq T}f(ph(n))\nu(n)-\frac{1}{T} \int_{0}^{T}f(ph(t))\;dt\right|+O(r^{-\beta})\] \[\ll O(r^{-\beta}+\delta)+\frac{K}{T}\sum_{j}\frac{1}{K}\left|\sum_ {n\leq K}f(\xi_{j}h(n))\nu(n)-\int_{0}^{K}f(\xi_{j}h(t))\;dt\right|.\]
Fix some \(j\) and set \(y:=P_{j}^{-1}\). Set \(F(t):=f(\xi_{j}h(ty^{-1}))\), which is a \(1\)-periodic function and is \(y^{-1}\)-Lipschitz by the bounds on \(f\). It thus only remains to show that
\[\frac{1}{K}\sum_{n\leq K}F(yn)\nu(n)\]
is close to
\[\int F:=\int_{0}^{1}F(t)dt.\]
We want to apply Theorem 2.4 and, to do so, we need to get from a function periodic on \([0,1]\) to a function periodic on the integers. To this end, we approximate \(y\) by a rational with denominator at most \(R^{3}y^{-3}\). That is, we use the Dirichlet box principle to find \(y^{-1}\leq q\leq R^{3}y^{-3}\) and \((a,q)=1\) such that
\[\left|y-\frac{a}{q}\right|<\frac{1}{qR^{3}y^{-3}}.\]
Pick some \(M\) and consider how much the function \(n\mapsto F(yn)\) can diverge from a truly \(q\)-periodic function on an interval \(\{m_{0},\ldots,m_{0}+qM\}\). Comparing \(F(y(m_{0}+qM))\) to \(F(ym_{0}+aM)=F(ym_{0})\) for some \(m_{0}\), we get that
\[|F(y(m_{0}+qM))-F(ym_{0})|\leq y^{-1}|yqM-aM|\leq\frac{My^{-1}}{R^{3}y^{-3}}.\]
This is \(O(y)\) provided that \(qM\leq qR^{3}y^{-1}\).
Truncate into intervals of length approximately \(qR^{3}\); as we have just seen, the function \(n\mapsto F(yn)\) is at distance \(O(y)\) from a \(q\)-periodic function on each one. We can thus apply Theorem 2.4 on each interval to deduce that
\[\frac{1}{K}\sum_{n\leq K}F(yn)\nu(n)=\frac{q}{\phi(q)K}\sum_{\begin{subarray}{ c}n\leq K\\ (n,q)=1\end{subarray}}F(yn)+O(y)+O\left(\frac{\log\log R}{\log R}\right).\]
To show that the sum on the right is close to \(\int F\), we need one more claim.
**Claim 5.2**.: _Let \(\epsilon=\frac{\beta}{12}\)._
\[\left|\frac{s}{K}\sum_{sn\leq K}F(ysn)-\int F\right|\leq q^{-\epsilon}\]
_for all \(s|q\) such that \(s\leq q^{\epsilon}\)._
Before we show the claim, let us see how it allows us to conclude the proof. We use the identity \(1_{m=1}=\sum_{d|m}\mu(d)\) to find
\[\sum_{\begin{subarray}{c}n\leq K\\ (n,q)=1\end{subarray}}F(yn) =\sum_{n\leq K}\sum_{d|(n,q)}\mu(d)F(yn)\] \[=\sum_{d|q}\mu(d)\sum_{\begin{subarray}{c}n\leq K\\ d|n\end{subarray}}F(yn)=\sum_{d|q}\mu(d)\sum_{nd\leq K}F(ydn).\]
Decomposing the sum and using Claim 5.2, we see that
\[\frac{q}{\phi(q)K}\sum_{\begin{subarray}{c}n\leq K\\ (n,q)=1\end{subarray}}F(yn) \leq\frac{q}{\phi(q)}\sum_{\begin{subarray}{c}d|q\\ d<q^{\epsilon}\end{subarray}}\frac{1}{d}\left|\frac{d}{K}\sum_{dn\leq K}F(ydn) \right|+\frac{q}{\phi(q)}\sum_{\begin{subarray}{c}d|q\\ d\geq q^{\epsilon}\end{subarray}}\frac{\|F\|_{\infty}}{d}\] \[\leq 2\frac{q^{1-\epsilon}\tau(q)}{\phi(q)}.\]
By standard asymptotics of \(\phi\) and \(\tau\) (see for example [6]), the right-hand side is
\[O(q^{-\frac{6\epsilon}{7}})=O(y^{\frac{\beta}{14}})=O\left(\delta^{-\frac{\beta}{7}}r^{-\frac{\beta}{14}}\right).\]
We can choose \(\delta=r^{-\frac{1}{5}}\) to get the desired conclusion. It thus only remains to show Claim 5.2.
To prove Claim 5.2, we divide into two cases. Firstly, in the case that \(q\leq y^{-3}\), we apply Strombergsson's result (or Theorem 1.4) to the periodic horocycle \(\xi_{j}h(t)\), for which \(r(\xi_{j},T)=y^{-1}\) for any \(T\), to see
\[\int F=y\int_{0\leq t\leq y^{-1}}f(\xi_{j}h(t))dt=\int\ f\ d\mu_{X}+O(y^{\beta }).\]
Using this and applying Theorem 1.4 to the same periodic horocycle piece, we see
\[\forall s\leq y^{-\frac{\beta}{4}}:\ \ \left|\frac{s}{K}\sum_{1\leq j\leq K/s}F(ysj)-\int F\right|\ll s^{\frac{1}{2}}y^{\frac{\beta}{2}}\ll y^{\frac{\beta}{4}}.\]
As \(q^{\epsilon}\leq y^{-\frac{\beta}{4}}\), we deduce Claim 5.2 in this case.
For the second case, assume that \(q\geq y^{-3}\). Roughly speaking, in this case there are so many distinct points of the form \(sn\frac{a}{q}\) in the interval \([0,1]\) that they are forced to be dense enough for the average of \(F\) over them to approximate \(\int F\).
We split \([0,K]\) into intervals \(I\) of length \(qR^{3}y^{-1}\). Fix \(s\leq q^{\epsilon}\) and set \(q^{\prime}:=q/s\). Fix an interval \(I\) and call its left endpoint \(st_{0}\). We note that for any \(n\) such that \(sn\in I\),
\[\left|F(ysn)-F\left(yst_{0}+\frac{sa}{q}(n-t_{0})\right)\right| \leq y^{-1}\left|y-\frac{a}{q}\right|\left|sn-st_{0}\right|\] \[\leq\frac{y^{-1}|I|}{qR^{3}y^{-3}}\leq y;\]
set \(x_{0}:=st_{0}\left(y-\frac{a}{q}\right)\) and note that then
\[\frac{s}{|I|}\sum_{sn\in I}F(ysn)=O(y)+\frac{s}{|I|}\sum_{sn\in I} F\left(x_{0}+n\frac{as}{q}\right)\] \[=O(y)+O\left(\frac{sq^{\prime}}{|I|}\right)+\frac{s}{q}\sum_{n \leq q^{\prime}}F\left(x_{0}+n\frac{a}{q^{\prime}}\right),\]
where in the second line we use that the function \(F(x_{0}+\frac{san}{q})\) is \(q^{\prime}\)-periodic in \(n\). The number \(a\) is coprime to \(q^{\prime}\), so it plays no role and can be dropped; we then only have to evaluate
\[\frac{1}{q^{\prime}}\sum_{n\leq q^{\prime}}F\left(x_{0}+\frac{n}{q^{\prime}} \right).\]
But for any \(t\in(0,1)\) and any \(n\),
\[\left|F\left(x_{0}+\frac{n}{q^{\prime}}\right)-F\left(x_{0}+\frac{n+t}{q^{ \prime}}\right)\right|\leq y^{-1}\frac{1}{q^{\prime}}\leq y^{-1}y^{3(1-\epsilon )}\leq y,\]
which implies that
\[\frac{1}{q^{\prime}}\sum_{n\leq q^{\prime}}F\left(x_{0}+\frac{n}{ q^{\prime}}\right)=O(y)+\frac{1}{q^{\prime}}\sum_{n\leq q^{\prime}}\int_{0}^{1} F\left(x_{0}+\frac{n+t}{q^{\prime}}\right)\ dt\] \[=O(y)+\frac{1}{q^{\prime}}\int_{0}^{q^{\prime}}F\left(\frac{t}{q^ {\prime}}\right)\ dt=O(y)+\int F\]
This shows Claim 5.2 also in the second case, which, as we have seen, concludes the proof of Theorem 1.1. | This paper generalizes the result of Sarnak and Ubis \cite{sarnak-ubis} on the non-concentration of horocycle orbits at prime times, proved there for $PSL_2(\mathbb{Z})\backslash PSL_2(\mathbb{R})$, to arbitrary lattices in $PSL_2(\mathbb{R})$. The proof combines the asymptotic result of Str\"ombergsson \cite{strombergsson} with the method of Venkatesh \cite{venkatesh}, following Sarnak and Ubis in approximating pieces of the horocycle by periodic horocycles. The key step is a dichotomy: either $\{\xi h(t), t \in [0, T] \}$ equidistributes well in $\Gamma \backslash PSL_2(\mathbb{R})$, or it can be approximated by pieces of closed horocycles of small period. Subsequently, the paper ... |
2305.03669 | Crossing Symmetric Dispersion Relations without Spurious Singularities | Recently, there has been renewed interest in a crossing-symmetric dispersion
relation from the 1970s due to its implications for both regular quantum field
theory and conformal field theory. However, this dispersion relation introduces
nonlocal spurious singularities and requires additional locality constraints
for their removal, a process that presents considerable technical challenges.
In this Letter, we address this issue by deriving a new crossing-symmetric
dispersion relation that is free of spurious singularities, resulting in a
compact form of the contact terms in crossing-symmetric blocks. Our results
establish a solid foundation for the Polyakov bootstrap in conformal field
theories and the crossing-symmetry S-matrix bootstrap in quantum field
theories. | Chaoming Song | 2023-05-05T16:38:44 | http://arxiv.org/abs/2305.03669v2 | # Crossing Symmetric Dispersion Relations without Spurious Singularities
###### Abstract
Recently, there has been renewed interest in a crossing-symmetric dispersion relation from the 1970s due to its implications for both regular quantum field theory and conformal field theory. However, this dispersion relation introduces nonlocal spurious singularities and requires additional locality constraints for their removal, a process that presents considerable technical challenges. In this Letter, we address this issue by deriving a new crossing-symmetric dispersion relation that is free of spurious singularities, resulting in a compact form of the contact terms in crossing-symmetric blocks. Our results establish a solid foundation for the Polyakov bootstrap in conformal field theories and the crossing-symmetry S-matrix bootstrap in quantum field theories.
**Introduction:** The recent revival in crossing-symmetric dispersion relations [1; 2] has sparked considerable interest in both quantum field theory (QFT) [3] and conformal field theory (CFT) [4; 5]. In contrast to traditional \(t\)-fixed dispersion relations, which display symmetry in only two channels [6; 7], crossing-symmetric dispersion relations impose no additional constraints and are in perfect accord with Feynman diagram expansions. Within the CFT domain, four-point correlation functions must adhere to crossing symmetry constraints. Numerical bootstrap typically enforces this crossing symmetry on the conformal block expansion. Alternately, Polyakov introduced a conformal bootstrap using crossing-symmetric blocks [8], an approach that has recently proven effective in Mellin space [9; 10; 11]. This method employs a basis connected to exchange Witten diagrams, although contact terms remain undetermined [12; 13]. Resolving these terms continues to pose a considerable challenge [14; 15; 16; 17; 18; 19; 20; 21].
Gopakumar et al.[4] recently observe that these contact term ambiguities are fully determined using a crossing-symmetric dispersion relation, initially developed by Auberson and Khuri (AK) [1] and later revisited by Sinha and Zahed [3]. However, the AK dispersion relation presents spurious singularities that violate locality. Therefore, additional locality constraints are manually imposed to remove these unphysical terms. In theory, after removing these singularities, crossing-symmetric dispersion relations allow for a Feynman/Witten diagram expansion and entirely fix the contact terms. In line with this approach, a closed form of the contact terms has been proposed [22]. Nevertheless, the complexity of analyzing singularities restricts its practical application to lower spins, thereby complicating the implementation of the Polyakov bootstrap.
In this paper, we propose a new dispersion relation that manifests both crossing symmetry and locality. We discover a novel approach to directly remove nonlocal singularities, resulting in a closed form of the singularity-free dispersion relation. Consequently, we present the Feynman/Witten expansion of crossing-symmetric blocks, providing explicit determinations of all contact terms. Furthermore, we develop the full dispersion relation without assuming crossing-symmetric amplitudes, enabling the application of our findings to a wide range of problems. For instance, our work establishes a solid foundation for the Polyakov bootstrap, where the only remaining non-trivial constraint is the Polyakov condition [8; 10]. Moreover, our approach yields a novel functional sum rule for the crossing-symmetric bootstrap, eliminating the need for power series expansions.
**Singularity-free dispersion relation:** We begin with the shifted Mandelstam variables \(s_{1}=s-\mu/3\), \(s_{2}=t-\mu/3\), and \(s_{3}=u-\mu/3\) satisfying the constraint \(s_{1}+s_{2}+s_{3}=0\), where \(s\), \(t\), and \(u\) are the usual Mandelstam variables. For regular QFT, we have \(\mu=4m^{2}\), while for CFT, we have \(\mu=2\Delta_{\phi}\). We consider hypersurfaces \((s_{1}-a)(s_{2}-a)(s_{3}-a)=-a^{3}\), and rewrite \(s_{k}(z,a)=a-a(z-z_{k})^{3}/(z^{3}-1)\), where \(z_{k}\) are cube roots of unity [1]. Note that we can express \(a=y/x\), where \(x\equiv-(s_{1}s_{2}+s_{2}s_{3}+s_{3}s_{1})\) and \(y\equiv-s_{1}s_{2}s_{3}\). Instead of a dispersion relation in \(s\) for fixed \(t\), we can write down a twice subtracted dispersion relation in the variable \(z\), for fixed \(a\). The full crossing-symmetric dispersion relation is quite involved, and we refer the readers to Ref. [1] for more details. A full singularity-free dispersion relation is set to be proposed in a subsequent section.
Our discussion below primarily focuses on the completely crossing-symmetric scattering amplitudes, such as pion-pion scattering in QFT or the Mellin amplitude for a four-point correlation of identical scalars in CFT [23; 24]. For a crossing-symmetric amplitude \(\mathcal{M}^{(s)}\), the dispersion relation simplifies dramatically in terms of \(\mathbf{s}\equiv\{s_{1},s_{2},s_{3}\}\), as
\[\mathcal{M}^{(s)}(\mathbf{s})=\alpha_{0}+\frac{1}{\pi}\int\frac{d\sigma}{\sigma}\,\mathcal{A}\left(\sigma,s_{\pm}\left(\sigma,\frac{a}{\sigma-a}\right)\right)H(\sigma;\mathbf{s}), \tag{1}\]
where \(\mathcal{A}(s_{1},s_{2})\) is the s-channel discontinuity, symmetric under the exchange of the \(t\) and \(u\) channels, i.e., \(\mathcal{A}(s_{1},s_{2})=\mathcal{A}(s_{1},s_{3})\). The constant \(\alpha_{0}\equiv\mathcal{M}^{(s)}(0,0)\), and the functions \(H(\sigma;\mathbf{s})\) and \(s_{\pm}(\sigma,\eta)\) are defined as:
\[H(\sigma;\mathbf{s}) \equiv\frac{s_{1}}{\sigma-s_{1}}+\frac{s_{2}}{\sigma-s_{2}}+\frac{ s_{3}}{\sigma-s_{3}},\] \[s_{\pm}(\sigma,\eta) \equiv\sigma\frac{-1\pm\sqrt{1+4\eta}}{2},\]
where \(s_{+}s_{-}=-\sigma^{2}\eta\), and \(s_{+}+s_{-}=-\sigma\). Setting \(\eta=a/(\sigma-a)\) and \(s_{1}=\sigma\) solves \(s_{2}=s_{\pm}\) and \(s_{3}=s_{\mp}\) from the definition above. Note that \(\mathcal{A}(\sigma,s_{+})=\mathcal{A}(\sigma,s_{-})\), and thus the validity of Eq. (1) is independent of the choice of \(s_{+}\) or \(s_{-}\).
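As a quick consistency check of this parametrization (our own illustration, not part of the original derivation), one can verify symbolically that setting \(s_{1}=\sigma\), \(s_{2}=s_{+}\), \(s_{3}=s_{-}\) with \(\eta=a/(\sigma-a)\) satisfies the constraint \(s_{1}+s_{2}+s_{3}=0\), lies on the hypersurface \((s_{1}-a)(s_{2}-a)(s_{3}-a)=-a^{3}\), and reproduces \(a=y/x\):

```python
import sympy as sp

sigma, a = sp.symbols('sigma a', positive=True)
eta = a / (sigma - a)
root = sp.sqrt(1 + 4 * eta)
s1 = sigma
s2 = sigma * (-1 + root) / 2   # s_+(sigma, eta)
s3 = sigma * (-1 - root) / 2   # s_-(sigma, eta)

x = sp.expand(-(s1 * s2 + s2 * s3 + s3 * s1))
y = sp.expand(-s1 * s2 * s3)

print(sp.simplify(s1 + s2 + s3))                                      # -> 0
print(sp.simplify(sp.expand((s1 - a) * (s2 - a) * (s3 - a)) + a**3))  # -> 0
print(sp.simplify(y / x - a))                                         # -> 0
```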
Equation (1) is manifestly crossing symmetric, allowing the scattering amplitude
\[\mathcal{M}^{(s)}(\mathbf{s})=\sum_{p,q}\mathcal{M}^{(s)}_{p,q}x^{p}y^{q}, \tag{2}\]
to be expanded in terms of crossing-symmetric variables \(x\) and \(y\). However, the AK dispersion relation (1) involves the variable \(a\) and, therefore, leads to negative powers of \(x\) in the expansion (2). These spurious singularities are known to violate locality [3]. To obtain the physical scattering amplitude, additional locality constraints must be imposed to enforce the vanishing of these non-physical terms in Eq. (2). Formally, a singularity-free dispersion relation requires computing the regular part
\[R\equiv\mathcal{R}\left\{\mathcal{A}\left(\sigma,s_{\pm}\left(\sigma,\frac{a}{ \sigma-a}\right)\right)H(\sigma;\mathbf{s})\right\}, \tag{3}\]
where \(\mathcal{R}\{\ldots\}\) denotes a formal regularization with the negative power of \(x\) terms being removed.
To obtain a closed form of the regular part \(R\), we first rewrite \(H(\sigma,\mathbf{s})=(2\sigma^{3}-y)H_{0}(\sigma,\mathbf{s})-2\), where
\[H_{0}(\sigma,\mathbf{s})\equiv\frac{1}{(\sigma-s_{1})(\sigma-s_{2})(\sigma-s _{3})}=\frac{1}{\sigma^{3}+y-\sigma x},\]
corresponds to the poles. Notice that multiplying the factor \(a\) with a regular function \(f(x,y)\),
\[\hat{a}f(x,y)\equiv\mathcal{R}\{af(x,y)\}\]
acts as a lowering operator \(\hat{a}|n\rangle=y|n-1\rangle\), with \(\hat{a}|0\rangle=0\), where \(|n\rangle\equiv x^{n}\) denotes the \(n\)-th power of \(x\). Specifically, we obtain
\[\hat{a}^{n}H_{0} =\frac{1}{\sigma^{3}+y}\sum_{m=0}^{\infty}\left(\frac{\sigma}{ \sigma^{3}+y}\right)^{m}\hat{a}^{n}x^{m}\] \[=\frac{1}{\sigma^{3}+y}\sum_{m=n}^{\infty}\left(\frac{\sigma}{ \sigma^{3}+y}\right)^{m}y^{n}x^{m-n}\] \[=\left(\frac{\sigma y}{\sigma^{3}+y}\right)^{n}H_{0},\]
which suggests that
\[F(\hat{a},y)H_{0}(\sigma,\mathbf{s})=F\left(\frac{\sigma y}{\sigma^{3}+y},y \right)H_{0}(\sigma,\mathbf{s}) \tag{4}\]
for any function \(F(a,y)\) admitting a Taylor expansion in terms of \(a\). Substituting Eq. (4) into Eq. (3) and noting \(F(\hat{a},y)f(y)=F(0,y)f(y)\) leads to
\[R=\mathcal{A}\left(\sigma,s_{\pm}\left(\sigma,y/\sigma^{3}\right)\right)(2 \sigma^{3}-y)H_{0}(\sigma,\mathbf{s})-2\mathcal{A}\left(\sigma,s_{\pm}\left( \sigma,0\right)\right).\]
Therefore, we obtain the singularity-free (SF) dispersion relation
\[\mathcal{M}^{(s)}(\mathbf{s})=\alpha_{0}+\frac{1}{\pi}\int\frac{d\sigma}{ \sigma}\left(\frac{(2\sigma^{3}+s_{1}s_{2}s_{3})\mathcal{A}\left(\sigma,s_{\pm }\left(\sigma,-s_{1}s_{2}s_{3}/\sigma^{3}\right)\right)}{(\sigma-s_{1})(\sigma -s_{2})(\sigma-s_{3})}-2\mathcal{A}(\sigma,0)\right), \tag{5}\]
where the locality constraints are automatically satisfied, as we will show explicitly in the next section.
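The key step above is the action of the lowering operator on \(H_{0}\). The following short script (a sketch of ours, assuming the regularization \(\mathcal{R}\{\cdot\}\) simply drops the negative powers of \(x\) in the Taylor expansion of \(H_{0}\)) verifies the identity \(\hat{a}H_{0}=\frac{\sigma y}{\sigma^{3}+y}H_{0}\) order by order in \(x\), which is the basic case underlying Eq. (4):

```python
import sympy as sp

x, y, s = sp.symbols('x y sigma')
H0 = 1 / (s**3 + y - s * x)
N = 6

# Taylor coefficients of H0 in x: H0 = sum_m c[m] * x**m
c = [sp.simplify(sp.diff(H0, x, m).subs(x, 0) / sp.factorial(m)) for m in range(N + 1)]

# Multiplying by a = y/x and dropping the x**(-1) term leaves y*c[m+1] as the
# coefficient of x**m; the claim is that this equals (sigma*y/(sigma^3+y)) * c[m].
for m in range(N):
    assert sp.simplify(y * c[m + 1] - s * y / (s**3 + y) * c[m]) == 0
print("hat(a) H0 = sigma*y/(sigma^3 + y) * H0, verified up to order", N)
```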
**Block expansion and contact terms:** To facilitate the analysis of the s-channel discontinuity, it is common practice to expand it in terms of the partial waves with _even_ spins, as
\[\mathcal{A}(s_{1},s_{2})=\sum_{\ell}\int d\lambda f_{\ell}(\sigma,\lambda)Q_{ \lambda,\ell}(s_{1},s_{2}),\]
where the partial wave \(Q_{\lambda,\ell}(s_{1},s_{2})=Q_{\lambda,\ell}(s_{1},s_{3})\) is a symmetric polynomial of order \(\ell\) that is invariant under the exchange of the _ut_ channels, and the spectrum \(f_{\ell}(\sigma,\lambda)\) encodes scattering data. For QFT, we express \(Q_{0,\ell}(s_{1},s_{2})\equiv(s_{1}-2\mu/3)^{\ell}C_{\ell}^{(\frac{d-3}{2})} \left(\frac{s_{2}-s_{3}}{s_{1}-2\mu/3}\right)\) in terms of Gegenbauer polynomials, and \(f_{\ell}(\sigma,\lambda)=(\sigma-2\mu/3)^{-\ell}\Phi(\sigma)(2\ell+2\alpha) \alpha_{\ell}(\sigma)\delta(\lambda)\) with \(\Phi(\sigma)\equiv\Psi((d-3)/2)(\sigma+\mu)^{1/2}/(\sigma-2\mu/3)^{(d-3)/2}\) with a real number \(\Psi((d-3)/2)\). For CFT, we express \(Q_{\lambda,\ell}(\mathbf{s})=P_{\Delta-d/2,\ell}(s_{1}+2\Delta_{\phi}/3,s_{2}- \Delta_{\phi}/3)\) in terms of Mack polynomials, and \(f_{\ell}(\sigma,\lambda)\equiv\sum_{\Delta,k}C_{\Delta,\ell}N_{\Delta,k}\delta (\sigma-s_{k})\delta(\lambda-\Delta)\) encodes the operator product expansion (OPE) data.
The scattering amplitude can also be expressed as
\[\mathcal{M}^{(s)}(\mathbf{s})=\alpha_{0}+\frac{1}{\pi}\sum_{\ell=0}^{\infty} \int d\sigma d\lambda f_{\ell}(\sigma,\lambda)M_{\lambda,\ell}(\sigma;\mathbf{s}),\]
where \(M_{\lambda,\ell}(\sigma;\mathbf{s})\) are scattering blocks. Comparing AK
dispersion relation (1), we obtain the Dyson block [3],
\[M_{\lambda,\ell}^{(D)}=\frac{1}{\sigma}Q_{\lambda,\ell}\left(\sigma,s_{\pm}\left( \sigma,\frac{a}{\sigma-a}\right)\right)H(\sigma;\mathbf{s}), \tag{6}\]
which contains spurious singularities. By contrast, our dispersion relation (5) leads to the singularity-free block
\[M_{\lambda,\ell}^{(SF)}=\frac{(2\sigma^{3}-y)Q_{\lambda,\ell}(\sigma,s_{\pm}( \sigma,y/\sigma^{3}))}{\sigma(\sigma-s_{1})(\sigma-s_{2})(\sigma-s_{3})}- \frac{2}{\sigma}Q_{\lambda,\ell}(\sigma,0). \tag{7}\]
To show explicitly that the SF block \(M_{\lambda,\ell}^{(SF)}\) removes the spurious singularities present in the Dyson block \(M_{\lambda,\ell}^{(D)}\), we take the QFT case as an example. We start with the Gegenbauer polynomials \(C_{\ell}^{(\frac{d-3}{2})}(\sqrt{\xi})\), where \(\xi=(s_{+}(\sigma,\eta)-s_{-}(\sigma,\eta))^{2}/(\sigma-2\mu/3)^{2}=\xi_{0}(1+4\eta)\) and \(\xi_{0}\equiv\sigma^{2}/(\sigma-2\mu/3)^{2}\). We set \(\eta=a/(\sigma-a)\) and expand the Gegenbauer polynomials around \(\xi_{0}\), giving [1; 3]
\[M_{\lambda,\ell}^{(D)}=\frac{1}{\sigma}\sum_{n,m=0}^{\infty}\mathcal{B}_{n,m} ^{(\ell)}x^{n}(y/x)^{m},\]
where \(p_{\ell}^{(k)}\equiv\partial_{\xi}^{k}C_{\ell}^{(\frac{d-3}{2})}(\sqrt{\xi})\big|_{\xi=\xi_{0}}\), and
\[\mathcal{B}_{n,m}^{(\ell)}=\sum_{k=0}^{m}\frac{p_{\ell}^{(k)}(4\xi_{0})^{k}(3 k-m-2n)(-n)_{m}}{\pi\sigma^{2n+m}k!(m-k)!(-n)_{k+1}}.\]
Similarly, expanding the Gegenbauer polynomials around \(\xi_{0}\) with \(\eta=y/\sigma^{3}\) leads to
\[M_{\lambda,\ell}^{(SF)}=\frac{1}{\sigma}\sum_{n,m=0}^{\infty}\mathcal{C}_{n,m }^{(\ell)}x^{n}y^{m},\]
where
\[\mathcal{C}_{n,m}^{(\ell)}=\sum_{k=0}^{m}\frac{p_{\ell}^{(k)}(4\xi_{0})^{k}(- 1)^{m-k}(2n+3(m-k))}{\pi\sigma^{2n+3m}n!k!(m-k)!}\]
It is easy to verify that \(\mathcal{C}_{n,m}^{(\ell)}=\mathcal{B}_{n+m,m}^{(\ell)}\) for \(n,m\geq 0\), indicating that the regular part of the Dyson blocks matches the SF blocks, as expected. However, the Dyson blocks contain spurious singularities with negative powers of \(x\) when \(n<m\), which are absent in our SF blocks. A similar derivation can be carried out for general partial waves \(Q_{\lambda,\ell}\).
The singularity-free (SF) block provides a block expansion for the amplitude that directly relates to the usual Feynman and Witten diagrammatic expansions for QFT and CFT, respectively. To see this, we will show below that the SF block can be written as a summation of exchange and contact terms, as follows:
\[M_{\lambda,\ell}^{(SF)}(\sigma;\mathbf{s})=\sum_{i=1}^{3}M_{\lambda,\ell}^{(i )}(\sigma;\mathbf{s})+M_{\lambda,\ell}^{(c)}(\sigma;\mathbf{s}), \tag{8}\]
where the exchange term of channel \(i\) is given by
\[M_{\lambda,\ell}^{(i)}(\sigma;\mathbf{s})=Q_{\lambda,\ell}(s_{i},s_{i+1}) \left(\frac{1}{\sigma-s_{i}}-\frac{1}{\sigma}\right),\]
for \(i=1,2,3\) with the cyclic condition \(i+1=1\) for \(i=3\). The contact terms \(M_{\lambda,\ell}^{(c)}(\sigma;\mathbf{s})\) involve polynomials of the \(s_{i}\), whose explicit form was previously known only for a few lower-order terms.
We substitute Eq. (7) into Eq. (8), obtaining
\[M_{\lambda,\ell}^{(c)}(\sigma;\mathbf{s})=\frac{1}{\sigma}\sum_{i=1}^{3}\frac {s_{i}\Delta Q_{\lambda,\ell}^{(i)}}{(\sigma-s_{i})}+\frac{2}{\sigma}\Delta Q_{ \lambda,\ell}^{(0)}, \tag{9}\]
where
\[\Delta Q_{\lambda,\ell}^{(i)} \equiv Q_{\lambda,\ell}(\sigma;s_{\pm}(\sigma,y/\sigma^{3}))-Q_{ \lambda,\ell}(s_{i};s_{\pm}(s_{i},y/s_{i}^{3})),\] \[\Delta Q_{\lambda,\ell}^{(0)} \equiv Q_{\lambda,\ell}(\sigma,s_{\pm}(\sigma,y/\sigma^{3}))-Q_{ \lambda,\ell}(\sigma,0),\]
are polynomials. To show that the contact terms \(M_{\lambda,\ell}^{(c)}\) are also polynomials, we notice that the symmetry of the \(ut\) channels allows us to expand \(Q_{\lambda,\ell}(s_{1};s_{2})=\sum_{n+2m\leq\ell}q_{nm}s_{1}^{n}(s_{2}s_{3})^{m}\), which implies
\[Q_{\lambda,\ell}(\sigma;s_{\pm}(\sigma,y/\sigma^{3}))=\sum_{n+2m\leq\ell}q_{nm}\sigma^{n}(s_{1}/\sigma)^{m}(s_{2}s_{3})^{m}.\]
Thus,
\[\Delta Q_{\lambda,\ell}^{(i)}=\sum_{n,m}q_{nm}\sigma^{n}\left((s_{i}/\sigma)^{ m}-(s_{i}/\sigma)^{n}\right)(s_{i+1}s_{i+2})^{m},\]
where the term \((s_{i}/\sigma)^{m}-(s_{i}/\sigma)^{n}\) is divisible by \(s_{i}/\sigma-1\) and thus cancels the poles in Eq. (9). More explicitly, we find that
\[\Delta Q_{\lambda,\ell}^{(i)}=(s_{i}-\sigma)\sum_{n,m}P_{n,m}(\sigma)s_{i}^{n}( s_{i+1}s_{i+2})^{m},\]
where
\[P_{n,m}(\sigma)=\begin{cases}\sum_{k=0}^{n}q_{km}\sigma^{k-n-1},&0\leq n<\min (m,\ell-2m+1)\\ \sum_{k=0}^{\ell-2m}q_{km}\sigma^{k-n-1},&\ell-2m\leq n\leq m-1,\\ -\sum_{k=n+1}^{\ell-2m}q_{km}\sigma^{k-n-1},&m\leq n\leq\ell-2m-1.\end{cases}\]
Substituting into Eq. (9), we obtain the contact term
\[M_{\lambda,\ell}^{(c)} =\frac{2}{\sigma}\left(Q_{\lambda,\ell}(\sigma,s_{\pm}(\sigma,y/ \sigma^{3}))-Q_{\lambda,\ell}(\sigma,0)\right)\] \[-\frac{1}{\sigma}\sum_{n,m}P_{n,m}(\sigma)\left(\sum_{i=1}^{3}s_{i }^{n+1}(s_{i+1}s_{i+2})^{m}\right), \tag{10}\]
which are manifestly crossing-symmetric polynomials. Note that the summation over indices \(n\) and \(m\) is across all non-zero \(P(n,m)\) terms, i.e., \(0\leq m\leq\ell/2\) and \(n+2m\leq 3\ell/2-1\).
**Singular block and sum rules:** Since the SF block corresponds to the regular part of the Dyson block, we can decompose
\[M_{\lambda,\ell}^{(D)}(\sigma;\mathbf{s})=M_{\lambda,\ell}^{(SF)}(\sigma; \mathbf{s})+M_{\lambda,\ell}^{(S)}(\sigma;\mathbf{s}),\]
where \(M_{\lambda,\ell}^{(S)}(\sigma;\mathbf{s})\) refers to the corresponding singular part, given by
\[M_{\lambda,\ell}^{(S)} =\frac{Q_{\lambda,\ell}(\sigma,s_{\pm}(\sigma,y/\sigma^{3})-Q_{ \lambda,\ell}(\sigma,s_{\pm}(\sigma,a/(\sigma-a)))}{y/\sigma^{3}-a/(\sigma-a)}\] \[\times\frac{a(2\sigma x-3y)}{\sigma^{4}(\sigma-a)}-\frac{2}{ \sigma}(Q_{\lambda,\ell}(\sigma,s_{\pm}(\sigma,y/\sigma^{3})-Q_{\lambda,\ell}( \sigma,0)).\]
Note that since \(Q_{\lambda,\ell}(\sigma,s_{\pm}(\sigma,\eta))\) is a polynomial in \(\eta\), the term in the first line is the difference operator acting on \(Q_{\lambda,\ell}\) between \(\eta=y/\sigma^{3}\) and \(\eta=a/(\sigma-a)\), and is thus also a polynomial in these two quantities. Therefore, the first term involves positive powers of \(y/x\) except for the zeroth-order term \((y/x)2\sigma x\), which cancels the last term in the above equation. Consequently, only terms with negative powers of \(x\) remain in \(M_{\lambda,\ell}^{(S)}\), as expected.
Since both Dyson and SF blocks lead to the same amplitude, the contribution from the singular part needs to be canceled:
\[\sum_{\ell}\int d\sigma d\lambda f_{\ell}(\sigma,\lambda)M_{\lambda,\ell}^{(S )}(\sigma;\mathbf{s})=0, \tag{11}\]
which imposes a constraint on the spectrum \(f_{\ell}(\sigma,\lambda)\). For instance, for QFT, Equation (11) requires the cancellation of power series contributions of \(\mathcal{B}_{n,m}^{(\ell)}x^{n-m}y^{m}\) with negative powers of \(x\), i.e., \(n<m\), generalizing the Froissart bound [3; 25]. For CFT, it appears to connect to the conformal dispersive sum rules [4; 19]. Unlike previous approaches, Eq. (11) provides a single functional sum rule without involving series expansion.
**Full dispersion relation:** Our approach extends to general scattering amplitudes without assuming the complete crossing symmetry. The corresponding full dispersion relation should link the scattering amplitude \(\mathcal{M}(\mathbf{s})\) to \(s\), \(u\), and \(t\)-channel discontinuities, denoted as \(\mathcal{A}_{i}(\mathbf{s})\) for \(i=1,2,3\). Furthermore, \(\mathcal{M}(\mathbf{s})\) is not merely a function of \(x\) and \(y\), but also of a linear combination of \(s_{i}\). In addition, an antisymmetric part exists [26], characterized in terms of \(w=-(s_{1}-s_{2})(s_{2}-s_{3})(s_{3}-s_{1})\). Note that the algebraic curve \(w^{2}=4x^{3}-27y^{2}\) suggests that any power of \(w\) higher than first order will be absorbed in a combination of \(x\) and \(y\). Using an approach similar to the one presented above, we derive the full SF dispersion relation:
\[\mathcal{M}(\mathbf{s})=\alpha_{0}+\sum_{i=1}^{3}\alpha_{i}s_{i} +\frac{1}{2\pi}\sum_{i=1}^{3}\int\frac{d\sigma}{\sigma}\frac{K_{i}^{+}( \mathbf{s},\sigma)\mathcal{A}_{i}\left(\sigma,\tilde{s}_{+}\right)+K_{i}^{-}( \mathbf{s},\sigma)\mathcal{A}_{i}\left(\sigma,\tilde{s}_{-}\right)}{(\sigma- s_{1})(\sigma-s_{2})(\sigma-s_{3})} \tag{12}\] \[-K_{i}^{0+}(\mathbf{s},\sigma)\mathcal{A}_{i}\left(\sigma,s_{+}( \sigma,0)\right)-K_{i}^{0-}(\mathbf{s},\sigma)\mathcal{A}_{i}\left(\sigma,s_{- }(\sigma,0)\right),\]
where \(\tilde{s}_{\pm}\equiv s_{\pm}(\sigma,y/\sigma^{3})\), \(K_{i}^{0\pm}(\mathbf{s},\sigma)=\frac{2}{3}+\frac{s_{i}}{\sigma}\pm\frac{s_{i +1}-s_{i+2}}{3\sigma}\), and
\[K_{i}^{\pm}(\mathbf{s},\sigma) =\left((2\sigma^{3}-y)\pm\frac{\sigma w}{\tilde{s}_{+}-\tilde{s}_{ -}}\right)\left(\frac{1}{3}+\frac{\sigma^{2}s_{i}}{2(y+\sigma^{3})}\right)\] \[+\left(\sigma^{2}w\pm\frac{(2\sigma^{3}-y)(4y+\sigma^{3})}{ \tilde{s}_{+}-\tilde{s}_{-}}\right)\frac{s_{i+1}-s_{i+2}}{6(y+\sigma^{3})}.\]
The constants \(\alpha_{i}\) correspond to the first-order coefficients of \(s_{i}\), with only two being free, enabling us to impose \(\sum_{i=1}^{3}\alpha_{i}=0\). The corresponding SF blocks can be found accordingly.
While the full dispersion relation (12) is considerably more involved, it simplifies remarkably in the crossing-symmetric and crossing-antisymmetric cases. In the former scenario, the discontinuities across all channels are identical, i.e., \(\mathcal{A}_{i}(\sigma,\tilde{s}_{\pm})=\mathcal{A}(\sigma,\tilde{s}_{\pm})\), and Equation (12) reduces to Eq. (5), since all terms cancel after summation except for \((2\sigma^{3}-y)/3\). Likewise, for the crossing antisymmetric case [26], Equation (12) simplifies to
\[\mathcal{M}^{(as)}(\mathbf{s})=\frac{w}{\pi}\int\frac{d\sigma}{(\sigma-s_{1})( \sigma-s_{2})(\sigma-s_{3})}\frac{\mathcal{A}(\sigma,\tilde{s}_{+})}{\tilde{s} _{+}-\tilde{s}_{-}},\]
which provides the crossing antisymmetric dispersion relation. It is noteworthy that in this case \(\mathcal{A}(\sigma,\tilde{s}_{+})=-\mathcal{A}(\sigma,\tilde{s}_{-})\), thus \(\frac{\mathcal{A}(\sigma,\tilde{s}_{+})}{\tilde{s}_{+}-\tilde{s}_{-}}=\frac{1}{ 2}\frac{\mathcal{A}(\sigma,\tilde{s}_{+})-\mathcal{A}(\sigma,\tilde{s}_{-})}{ \tilde{s}_{+}-\tilde{s}_{-}}\) is a polynomial in terms of \(\tilde{s}_{+}\) and \(\tilde{s}_{-}\), as expected for _odd_ spin contributions.
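As a side check of the algebraic relation \(w^{2}=4x^{3}-27y^{2}\) invoked in the construction of the full dispersion relation above, the following one-off sympy snippet (ours, not from the Letter) confirms the identity on the constraint surface \(s_{1}+s_{2}+s_{3}=0\):

```python
import sympy as sp

s1, s2 = sp.symbols('s1 s2')
s3 = -s1 - s2                           # constraint s1 + s2 + s3 = 0
x = -(s1 * s2 + s2 * s3 + s3 * s1)
y = -s1 * s2 * s3
w = -(s1 - s2) * (s2 - s3) * (s3 - s1)
print(sp.expand(w**2 - (4 * x**3 - 27 * y**2)))   # -> 0
```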
**Discussion:** The singularity-free, crossing-symmetric dispersion relation approach introduced in this paper addresses a long-standing challenge in the nonperturbative exploration of quantum field theories. The proposed crossing-symmetric blocks seamlessly link to Feynman/Witten diagrams, with contact terms being explicitly determined. The null contribution of the singular block leads to a simplified functional sum rule, enhancing existing methods. Furthermore, the full singularity-free dispersion relation lays the groundwork for the Polyakov
bootstrap beyond identity operators. This approach also provides remarkable opportunities for numerical S-matrix bootstrap within a broader setup. Undoubtedly, our advancements establish a robust foundation for crossing-symmetric bootstrap applicable to both QFTs and CFTs.
###### Acknowledgements.
We express our gratitude to Aninda Sinha and Ahmadullah Zahed for their invaluable discussions and constructive feedback on the manuscript.
| Recently, interest in the crossing-symmetric dispersion relation dating from the 1970s has been renewed, because of its implications for both ordinary quantum field theory and conformal field theory. However, this dispersion relation introduces nonlocal spurious singularities, and removing them requires additional locality constraints, which poses technical challenges. In this paper, to address this issue, we derive a new crossing-symmetric dispersion relation free of spurious singularities, yielding a compact form of the contact terms in crossing-symmetric blocks. Our results establish a foundation for the Polyakov bootstrap in conformal field theory and for the crossing-symmetric S-matrix bootstrap in quantum field theory. |
2307.15672 | Bayesian Time-Series Classifier for Decoding Simple Visual Stimuli from
Intracranial Neural Activity | Understanding how external stimuli are encoded in distributed neural activity
is of significant interest in clinical and basic neuroscience. To address this
need, it is essential to develop analytical tools capable of handling limited
data and the intrinsic stochasticity present in neural data. In this study, we
propose a straightforward Bayesian time series classifier (BTsC) model that
tackles these challenges whilst maintaining a high level of interpretability.
We demonstrate the classification capabilities of this approach by utilizing
neural data to decode colors in a visual task. The model exhibits consistent
and reliable average performance of 75.55% on 4 patients' dataset, improving
upon state-of-the-art machine learning techniques by about 3.0 percent. In
addition to its high classification accuracy, the proposed BTsC model provides
interpretable results, making the technique a valuable tool to study neural
activity in various tasks and categories. The proposed solution can be applied
to neural data recorded in various tasks, where there is a need for
interpretable results and accurate classification accuracy. | Navid Ziaei, Reza Saadatifard, Ali Yousefi, Behzad Nazari, Sydney S. Cash, Angelique C. Paulk | 2023-07-28T17:04:06 | http://arxiv.org/abs/2307.15672v1 | Bayesian Time-Series Classifier for Decoding Simple Visual Stimuli from Intracranial Neural Activity
###### Abstract
Understanding how external stimuli are encoded in distributed neural activity is of significant interest in clinical and basic neuroscience. To address this need, it is essential to develop analytical tools capable of handling limited data and the intrinsic stochasticity present in neural data. In this study, we propose a straightforward Bayesian time series classifier (BTsC) model that tackles these challenges whilst maintaining a high level of interpretability. We demonstrate the classification capabilities of this approach by utilizing neural data to decode colors in a visual task. The model exhibits consistent and reliable average performance of 75.55% on 4 patients' dataset, improving upon state-of-the-art machine learning techniques by about 3.0 percent. In addition to its high classification accuracy, the proposed BTsC model provides interpretable results, making the technique a valuable tool to study neural activity in various tasks and categories. The proposed solution can be applied to neural data recorded in various tasks, where there is a need for interpretable results and accurate classification accuracy.
Keywords:Bayesian analysis Neural decoding Interpretable modeling.
## 1 Introduction
Neuroscientists have long sought methods to decode neural activity in hopes of restoring movement and communication for individuals with neurological injuries [28], including stroke, spinal cord injury, brain trauma, and neurodegenerative diseases (e.g., Amyotrophic Lateral Sclerosis, ALS) using brain-computer interfaces (BCIs). Significant progress has been made in motor control [15], with advances also seen in the realms of speech [2], mood, and decoding neural activity corresponding to visual stimuli [11, 14, 16]. Despite these discoveries, BCIs remain primarily in research endeavors, facing hurdles in terms of costs, risks, and
technological challenges [36]. The critical components of a BCI system involve feature extraction and accurate classification of neural activity related to different tasks or sensory input. However, several challenges exist in achieving highly accurate classifiers, including selecting the most informative and well-reasoned neural features and developing an interpretable classifier capable of utilizing limited datasets [3]. An interpretable classifier elucidates key neural signal features, like specific frequency bands or time periods, crucial for task performance. This insight enhances our understanding of the neural mechanisms. Addressing these challenges is further complicated by the prohibitively large number of features used to decode neural activity. Features might encompass raw neural data, which include the measured voltage from single or multiple electrode contacts in non-invasive techniques such as electroencephalography (EEG), and invasive methods like intracranial EEG (iEEG) or blood-oxygen-level-dependent (BOLD) signal in functional MRI. Many researchers have suggested a variety of features derived from these neural recordings. These could range from simple statistical metrics of the signal in time or frequency domain, like mean, median, standard deviations, kurtosis, and skewness to more sophisticated features such as Common Spatial Patterns (CSPs) [7], Higher-Order Crossing (HOC) features [22], Hjorth features [20], and Auto-Regressive (AR) coefficients [33]. As the EEG signal is non-stationary [34], time-frequency domain methods such as wavelet transform (WT) have been used for feature extraction as well [23]. In addition to the types of features used, there can be multiple streams of the same data represented by different electrode channels. Many researchers employ data reduction techniques to handle this redundancy of information, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), or linear discriminant analysis (LDA) [32]. Another approach in data analysis focuses on selecting the most informative channels relative to the target classification task. This can be achieved through channel selection techniques, which can be categorized as either classifier-dependent (including wrapper and embedded-based methods) or classifier-independent (such as filter-based methods) [1]. In summary, different neural activity features can yield varied inferences, not always enhancing the understanding of encoding mechanisms. Thus, refining feature extraction and selection are essential for deriving relevant information that deepens our comprehension of brain processes. Once features are chosen, classification commences, with considerable recent work involving deep-learning-based classifier models. While deep learning frameworks such as Convolutional Neural Networks [9] and Recurrent Neural Networks [25] have been applied to the decoding and classification of EEG signals to identify stimuli [5], these methods might not be as applicable to invasive neural recordings like iEEG. This is primarily due to the limited availability of iEEG data compared to non-invasive EEG data. The drawbacks of these models include, 1) lack of interpretability, which corresponds to difficulty in identifying the necessary features for classification in many deep learning models, and 2) the requirement of vast amounts of data in the training phase of these models, which may not always be possible to collect when dealing with invasive recordings like iEEG [26]. 
Consequently, the limited data in inva
sive neural recordings necessitates the exploration of alternative methods that can effectively handle such constraints while maintaining accurate classification and interpretability. In this study, we propose the Bayesian Time-Series Classifier (BTsC) model, designed to address the above-mentioned challenges. The BTsC allows us to identify the minimum number of channels necessary for stimulus classification from neural data. Furthermore, it enables the selection of neural features from different electrodes that provide optimal classification power, as well as the determination of the precise time period following a stimulus required for accurate stimulus classification. The proposed model can be trained effectively with limited data by leveraging the dynamics of local field potential (LFP) signals in two frequency subbands. Our proposed BTsC model employs a wrapper-based technique and greedy search in selecting channels and features for optimal classification. The pipeline of feature selection and classification used in the model can be applied to other classifiers, including Support Vector Machine (SVM) [29], Long Short-Term Memory (LSTM) [10], Naive Bayes [24], etc. We applied this model to decode the presence of one or the other visual stimulus using LFP neural activity from multiple neural nodes' activity, where our model shows high accuracy in classifying simple stimuli. In the following, we first outline our method of feature extraction from LFP signals and the development of the BTsC for stimulus decoding. Next, we assess the BTsC model's performance and compare it with other machine learning models. Lastly, we discuss our findings' implications for brain information processing and the potential applications of our model.
## 2 Material and Methods
### Dataset and Behavioral Task
Four participants performed the visual stimulus task, while LFP was recorded from 278 sites via intracranial stereo EEG (Fig. 1-A). Intracranial EEG recordings were made over the course of clinical monitoring for spontaneous seizures. Participants were implanted with multi-lead depth electrodes (also known as stereotactic EEG, sEEG) [6]. All patients voluntarily participated after fully informed consent according to NIH guidelines, as monitored by the Massachusetts General Brigham (formerly Partners) Institutional Review Board (IRB). Participants were informed that participation in the tests would not alter their clinical treatment and that they could withdraw at any time without jeopardizing their clinical care. The Flicker task consisted of 100 trials where participants were presented with a red fixation cross for 0.5 seconds on a grey background (Fig. 1-B). This was followed by the appearance of a single black or white square on the same grey background. The duration of the square varied randomly between 2 to 4 seconds. The color of the square for each trial was randomly selected from a sequence of black or white with equal chance. Participants were instructed to focus on the red fixation cross and count the number of black or white squares presented to enhance engagement.
### Data Preprocessing: Intracranial LFP Data
Data analysis was performed using custom analysis code in MATLAB and Field-trip, an open-source software implemented in MATLAB [21]. All data were subsequently decimated to 1000Hz, demeaned relative to the entire recording, and line noise and its harmonics up to 200Hz were removed by subtracting the band-passed filtered signal from the raw signal. Channels with excessive line noise or which had no discernible signal were removed from the analysis. In addition, we removed pathological channels with interictal epileptiform discharges (IEDs) using an automatic IED detection algorithm [13] (version v21, default settings except -h at 60; [http://isarg.fel.cvut.cz](http://isarg.fel.cvut.cz)). We removed channels that had detected IEDs greater than 6.5 IEDs/minute The remaining electrodes were subsequently bipolar re-referenced relative to nearest neighbors to account for volume conduction.
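The preprocessing chain described above can be sketched as follows. This is an illustrative Python/SciPy outline written by us; the original analysis was done in MATLAB and FieldTrip, and the raw sampling rate, filter orders, and the use of zero-phase filtering are assumptions rather than details given in the text:

```python
import numpy as np
from scipy import signal

def preprocess(raw, fs_in=2000, fs_out=1000, line=60):
    """Decimate to 1000 Hz, demean, and subtract line noise and its harmonics up to 200 Hz."""
    x = signal.decimate(raw, int(fs_in // fs_out))
    x = x - x.mean()
    for f0 in range(line, 201, line):
        b, a = signal.butter(2, [f0 - 1, f0 + 1], btype='band', fs=fs_out)
        x = x - signal.filtfilt(b, a, x)   # subtract the band-passed (line-noise) component
    return x

def bipolar_rereference(depth_electrode):
    """Nearest-neighbour bipolar re-referencing along one depth electrode (contacts x samples)."""
    return depth_electrode[:-1] - depth_electrode[1:]
```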
### Extracting Neural Features for Decoding
We focus on two categories of features known to encode stimulus information, namely the low-frequency event-related potentials (ERPs) and the high gamma power (HGP) following image onset (see Fig. 1-C). For the ERP neural features, we filter the LFP in the theta (3-7 Hz) and delta (0-3 Hz) frequency bands [35]
Figure 1: Data Overview: (A) shows the electrode placement in four participants, (B) illustrates the Flicker Task paradigm performed in 100 trials, (C) outlines the steps for feature extraction, (D) displays the preprocessed single trial signal (top), ERP features (middle), and HGP features (bottom) extracted from the LTP02-LTP03 electrode for patient P04, and (E) presents a scatter plot of ERP (top) and HGP (bottom) features at times t1 and t2 for all trials recorded from the same electrode, indicating a Gaussian distribution.
using a low-pass filter with a cut-off frequency of 7 Hz. By applying the low-pass filter, we conform to the Nyquist-Shannon sampling theorem, which allows us to down-sample the filtered data to 15 Hz without losing crucial information. This results in 15 features (each feature being an individual time step) for ERP signals per electrode. Each sample of the ERP feature vector includes a weighted average of multiple samples of the original time series data in a specific time interval. Under the central limit theorem assumption, we can assume that each element of these vectors follows a normal distribution. As HGP has also been shown to encode visual stimuli [19, 35], we band-pass filter the LFP from 65 to 120 Hz. Power is then calculated in 67-millisecond nonoverlapping windows after stimulus onset, generating the same number of features per each channel as the ERP. A subsequent step involves using a log transformation. This transformation compresses the range of high values and expands the range of low values, resulting in a distribution that is more symmetric and closer to a normal distribution. This justifies its use as an input feature vector to our model [4]. In the procedure described, each LFP recording channel results in two feature vectors, one for ERP and one for HGP, each represented as a time series. We model these vectors as multivariate normal distributions for use in the BTsC model, which we'll discuss further in the next section.
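A schematic implementation of the two feature pipelines (ERP and HGP) is sketched below. The function names, filter orders, and the restriction to the first second after stimulus onset are our assumptions; the text specifies only the 7 Hz cut-off, the 15 Hz resampling, the 65-120 Hz band, the 67 ms non-overlapping windows, and the log transformation:

```python
import numpy as np
from scipy import signal

FS = 1000  # Hz, sampling rate after preprocessing

def erp_features(trial, fs=FS, n_feats=15):
    """Low-pass at 7 Hz, then represent the first second after onset by 15 samples (15 Hz)."""
    b, a = signal.butter(4, 7, btype='low', fs=fs)
    low = signal.filtfilt(b, a, trial)
    return signal.resample(low[:fs], n_feats)

def hgp_features(trial, fs=FS, win_s=0.067, n_feats=15):
    """Band-pass 65-120 Hz, power in non-overlapping 67 ms windows, then log-transform."""
    b, a = signal.butter(4, [65, 120], btype='band', fs=fs)
    hg = signal.filtfilt(b, a, trial)
    win = int(round(win_s * fs))
    power = [np.mean(hg[i * win:(i + 1) * win] ** 2) for i in range(n_feats)]
    return np.log(np.asarray(power))

# each trial of one channel yields two 15-dimensional feature vectors
trial = np.random.randn(2 * FS)          # two seconds of (synthetic) LFP
x_erp, x_hgp = erp_features(trial), hgp_features(trial)
```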
### Bayesian Time-series Classifier (BTsC)
In section 2.3, we described how ERP and HGP feature vectors are acquired from each LFP recording channel. To simplify the description, we assume that each channel is associated with a single feature vector, and we refer to the classifier trained on this feature vector as a single-channel classifier. This simplification does not compromise the broader applicability of the model. To present the BTsC model, we start by detailing the single-channel classifier's construction and the process of determining optimal time periods. We then discuss combining single-channel classifiers into a multi-channel classifier and selecting the minimum subset of classifiers needed for maximum accuracy. The BTsC aims to determine the fewest feature vectors necessary for effective stimulus classification. Single-channel classifier. Let us assume that the pre-processed neural recording for the \(c^{th}\) electrode is represented by the feature vector defined by \(\mathbf{x}^{c}=\{\mathbf{x}_{1}^{c},\mathbf{x}_{2}^{c},\ldots,\mathbf{x}_{d}^{ c}\}\), where \(d\) is the length of observation period, and \(\mathbf{x}_{i}{}^{c}\) is the \(i^{th}\) sample of the feature vector for the \(i^{th}\) time interval after the stimulus onset. As discussed in section 2.3, we assume that each element of \(\mathbf{x}^{c}\) follows a normal distribution. We further assume the joint distribution of \(\mathbf{x}^{c}\) follows a multivariate normal, where the dependency across time points allows us to characterize the temporal dynamics of observed neural features. For the model, we build the conditional distribution of \(\mathbf{x}^{c}\) given stimulus, \(I\in\{0,\ldots,k\}\equiv\{(stimulus1,...,stimulusK)\}\). The conditional distribution of \(\mathbf{x}^{c}\) is defined by:
\[\mathbf{x}^{c}|I\sim\mathcal{N}(\mu_{I}^{c},\mathbf{\Sigma}_{I}^{c}) \tag{1}\]
\[p(\mathbf{X}=\mathbf{x}^{c}|I)=\frac{1}{(2\pi)^{d/2}|\mathbf{\Sigma}_{I}^{c}| ^{1/2}}\exp\left(-\frac{1}{2}(\mathbf{x}^{c}-\mu_{I}^{c})^{T}\mathbf{\Sigma}_ {I}^{c\,-1}(\mathbf{x}^{c}-\mu_{I}^{c})\right) \tag{2}\]
where \(\mu_{I}^{c}\) and \(\mathbf{\Sigma}_{I}^{c}\) are the mean and covariance of \(\mathbf{x}^{c}\) given stimuli \(I\). Given \(\mu_{I}\) and \(\mathbf{\Sigma}_{I}\), we can construct a Bayesian classifier to compare the posterior probabilities of different stimuli. The assigned class is the one with the highest posterior probability, defined by:
\[\forall j\neq k:L(I=k\mid\mathbf{X}=\mathbf{x}^{c})\geq L(I=j\mid\mathbf{X}= \mathbf{x}^{c}) \tag{3}\]
\(L\left(I\mid\mathbf{X}=\mathbf{x}^{c}\right)\) corresponds to the posterior distribution of stimulus \(I\), given the observed neural features, defined by:
\[L\left(I\mid\mathbf{X}=\mathbf{x}^{c}\right)\propto p(\mathbf{X}=\mathbf{x}^{ c}\mid I)p(I) \tag{4}\]
where \(p(I)\) is the stimulus prior probability. To build our single-channel classifier, we require only the mean and covariance of each neural recording (\(\mathbf{x}^{c}\)) per stimulus. Using the training dataset \(\mathcal{D}\), we find the mean and covariance matrix for each time series. To obtain a robust estimation of the covariance matrix, the number of required samples must be on the order of \(d^{2}\). Estimating the covariance matrix can result in a biased estimation given the limited training data [18]. Our solution is to find the minimum length of the observation (\(d_{minimal}^{c}\)) starting from the onset, providing the highest accuracy with the cross-validated result. Using this approach, we can address the limited training dataset in our estimation of the covariance matrix. In the case of the multivariate normal distribution, we can obtain the marginal distribution of a subset of the neural data; any marginalized distribution remains a multivariate normal. With this in mind, we can examine the posterior of each class of data as a time-dependent function, identifying the stimulus from a subset of neural data \(\mathbf{x}^{c}\). We denote this posterior as \(L_{j}\), signifying a marginalized version of the overall posterior distribution \(L\), but only considering the first \(j\) features \(\{\mathbf{x}_{1}^{c},\mathbf{x}_{2}^{c},\ldots,\mathbf{x}_{j}^{c}\}\). We introduce the concept of \(C_{j}(\mathcal{D})\), representing the cross-validated classification accuracy of our model on the dataset \(\mathcal{D}\), using a marginalized posterior distribution with the first \(j\) features. For each classifier, the minimal time, denoted as \(d_{minimal}^{c}\), is defined as follows:
\[d_{minimal}^{c}=\arg\max_{j}C_{j}(\mathcal{D}) \tag{5}\]
This suggests that \(d_{minimal}^{c}\) represents the smallest set of features necessary to optimize our model's performance, in accordance with the constraints of the k-fold cross-validation method being used.
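A minimal sketch of the single-channel classifier of Eqs. (1)-(5) is given below. It fits class-conditional multivariate Gaussians on the first \(j\) feature samples, classifies by the larger posterior, and selects \(d_{minimal}^{c}\) by cross-validated accuracy. The small ridge term added to the covariance and the use of scikit-learn's stratified folds are our choices for numerical stability and convenience, not prescriptions from the paper:

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.model_selection import StratifiedKFold

def fit_gaussians(X, labels, j):
    """Class-conditional mean/covariance over the first j feature samples (Eqs. (1)-(2))."""
    params = {}
    for c in np.unique(labels):
        Xc = X[labels == c, :j]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False) + 1e-6 * np.eye(j))
    return params

def predict(X, params, priors):
    """Assign the class with the largest posterior (Eqs. (3)-(4))."""
    classes = list(params)
    scores = np.column_stack([
        multivariate_normal.logpdf(X[:, :len(params[c][0])], mean=params[c][0],
                                   cov=params[c][1]) + np.log(priors[c])
        for c in classes])
    return np.asarray(classes)[scores.argmax(axis=1)]

def minimal_length(X, labels, n_splits=5):
    """d_minimal: the number of leading features maximizing cross-validated accuracy (Eq. (5))."""
    accs = []
    for j in range(1, X.shape[1] + 1):
        fold_acc = []
        for tr, te in StratifiedKFold(n_splits).split(X, labels):
            params = fit_gaussians(X[tr], labels[tr], j)
            priors = {c: np.mean(labels[tr] == c) for c in params}
            fold_acc.append(np.mean(predict(X[te], params, priors) == labels[te]))
        accs.append(np.mean(fold_acc))
    return int(np.argmax(accs)) + 1, accs
```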
#### Multi-channel Classifier.
We construct our BTsC model initially based on a single-channel classifier, which turns to Quadratic Discriminant Analysis (QDA) for time series [31]. In practice, we have multiple channels, with each channel having two feature vectors. Classifiers trained on these feature vectors can be combined to achieve higher classification accuracy. Equation (4) defines the classifier model for the \(c^{th}\) feature vector. We found that the single-channel classifier accuracy is limited, and in the best case, it is about or less than 75%. To attain
higher classification accuracy, we expand our single-channel QDA classifier to account for multiple channels. We employ two solutions to adapt the classifier for multi-channel neural recordings, resulting in an ensemble-based classifier that enhances accuracy and robustness. The first solution expands the model directly to multiple channels. The second solution is based on the majority voting rule of \(C\) different classifiers, where \(C\) is the number of possible channels.
Multi-channel Likelihood Method. For a multi-channel classifier, we assume that different feature vectors are conditionally independent given the stimulus. The joint conditional distribution of all recordings is defined by:
\[p(\mathbf{X}_{1}=\mathbf{x}^{1},\ldots,\mathbf{X}_{C}=\mathbf{x}^{C}|I)=p( \mathbf{X}_{1}=\mathbf{x}^{1}|I)\ldots p(\mathbf{X}_{C}=\mathbf{x}^{C}|I) \tag{6}\]
where \(I\) represents the stimulus. We can construct each single-channel model similar to the one defined in Equation (2). The posterior distribution for each multi-channel neural recording is defined by:
\[L(I;\mathbf{X}_{1}=\mathbf{x}^{1},\ldots,\mathbf{X}_{C}=\mathbf{x}^{C})\propto p (\mathbf{X}_{1}=\mathbf{x}^{1}|I)\ldots p(\mathbf{X}_{C}=\mathbf{x}^{C}|I)p(I) \tag{7}\]
Utilizing all channels in the model is not practical, as some may lack informative features for classification. Moreover, multiplying likelihoods from all channels can complicate computation, since the product of many likelihoods becomes vanishingly small. Hence, identifying the most informative channels through channel subset selection is necessary for accurate classification. We will discuss this challenge in the classifier subset selection section.
Maximum Voting Method. The outcomes of single-channel classifiers can be combined using the voting method to achieve higher classification accuracy. In a poll of \(C\) single-channel classifiers, each classifier contributes a vote towards a class label based on its independent prediction. The class which receives the most votes, representing the majority opinion among the classifiers, is selected as the final output class [31].
#### Classifiers Subset Selection.
We can combine single-channel classifiers to construct a multi-channel classifier using either of these two methods. The process of selecting the optimal subset of feature vectors is based on an adaptive (or greedy) search. It begins with the single channel that gives the best performance under k-fold cross-validation and then examines which other channel can be added to it. All possible combinations are evaluated, and the one with the best performance is selected. Through this adaptive search, the minimal number of channels that reaches the highest classification accuracy with the highest confidence can be determined with cross-validation.
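The greedy search can be sketched as follows, here combining channels by summing class log-likelihoods as in Eq. (7); replacing the sum by a majority vote over per-channel predictions gives the voting variant. The data layout (one array of cross-validated class log-likelihoods per channel, with integer class indices in `y_true`) is an assumption made for the illustration:

```python
import numpy as np

def greedy_channel_selection(channel_loglik, y_true):
    """channel_loglik: dict channel -> (n_trials, n_classes) cross-validated class
    log-likelihoods (log-priors included); y_true: integer class indices.
    Adds channels one at a time while cross-validated accuracy improves."""
    remaining, selected = set(channel_loglik), []
    best_acc, improved = 0.0, True
    while remaining and improved:
        improved, best_ch = False, None
        for ch in sorted(remaining):
            combined = sum(channel_loglik[c] for c in selected + [ch])  # Eq. (7): sum of log-likelihoods
            acc = np.mean(combined.argmax(axis=1) == y_true)
            if acc > best_acc:
                best_acc, best_ch, improved = acc, ch, True
        if improved:
            selected.append(best_ch)
            remaining.remove(best_ch)
    return selected, best_acc
```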
## 3 Results
The BTsC model can be applied to different mental tasks with various features. Here, we investigated the application of BTsC in the visual task described in
section 2.1. We trained the BTsC using low-frequency (ERP, \(<7Hz\)) and high-frequency power (HGP; 65-120 Hz) dynamics following image onset to test if we can decode visual stimuli from across the brain at a high temporal resolution (67 ms). Then, we identified the features and time points for maximum decoding accuracy (Fig. 2). Furthermore, to validate the results obtained with the BTsC model, we compared them with the decoding outcomes of seven additional machine learning (ML) algorithms on visual stimuli. Optimal Stimulus Decoding Time and Features in the Model Following the model subset selection, we discovered that the channels and features that survived the selection criteria and contributed to the BTsC models were from multiple electrodes across multiple brain regions. BTsC enabled us to evaluate the performance of each feature vector, ERP and HGP, on individual channels and to determine the optimal timing post-image onset for superior performance. From this analysis, we discerned which regions and which features exhibited the fastest responses. Additionally, we found that combining these feature vectors leads to a boost in performance (Fig. 2-B). Upon analyzing the BTsC results for ERP, HGP, and their combined utilization, we discovered that leveraging both ERP and HGP features enhances the decoding model's accuracy in the visual task (Fig. 2-C). After identifying the time window after image onset with the peak accuracy for the single time or cumulative BTsC models, we found that the time of maximum accuracy after image onset was below 0.8 seconds. The accuracy at each time point and the number of utilized feature vectors are depicted in Fig. 2-D.
Figure 2: Results: (A) illustrates the performance of individual classifiers. (B) shows accuracy evolution during channel combination steps for participant P05. (C) shows the comparison between the BTsC performance using ERP, HGP, and both ERP and HGP features to assess the impact of neural features. (D) displays the accuracy of the model at different time points for participants P01 to P05.
### Machine Learning Decoding
To test if decoding results, which support the distributed information flow hypothesis [27] are particular to the BTsC model, we applied seven machine learning models to the same neural features (ERP and HGP) and participant data. The machine learning classifiers include SVM [29], Logistic Regression (LR) [12], Naive Bayes (NB) [24], Random Forest (RF) [12], Multi-Layer Perceptron (MLP) [8], LSTM [10] and EEGNet [17].
## 4 Discussion
Our investigation has yielded significant insights into the effectiveness of the BTsC in decoding visual stimuli from intracranial EEG recordings. The BTsC model's integration of the ERP and HGP features has demonstrated a remarkable capacity for classifying visual stimuli, outperforming other machine learning models including SVM, Logistic Regression, Naive Bayes, Random Forest, MLP, LSTM, and EEGNet. By leveraging the BTsC model, we achieved an average accuracy of 75.55% in classifying stimuli (Table 1). This is a noteworthy outcome, particularly given the inherent complexity of neural data, inter-subject variability in iEEG electrode position, and the challenges associated with decoding neural signals. Our results show that the BTsC model can effectively capture the distributed information flow within the brain, with its simplicity offering robustness against limited training data. In comparison, other methods face challenges such as overfitting, interpretability, and scalability issues. For example, complex models like EEGNet can result in overfitting due to their complex architecture, making them less reliable when dealing with limited data [5].
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**P01**} & \multicolumn{2}{c}{**P02**} & \multicolumn{2}{c}{**P03**} & \multicolumn{2}{c}{**P04**} \\ \cline{2-9}
**Model** & Acc. & F1 & Acc. & F1 & Acc. & F1 & Acc. & F1 \\ \hline BTsC (L) & 74.6(5) & **75.2(8)** & **73.7(4)** & **71.0(9)** & 66.5(7) & 62.0(8) & 85.4(7) & 83.1(3) \\ BTsC (V) & **75.1(9)** & 74.3(7) & 70.6(6) & 69.8(9) & 66.4(10) & 67.3(7) & **90.0(6)** & **89.1(4)** \\ EEGNet & 72.0(7) & 73.2(9) & 68.1(7) & 70.1(11) & **67.4(9)** & **68.1(9)** & 85.3(10) & 84.2(8) \\ SVM & 61.2(13) & 65.2(8) & 70.0(14) & 73.0(9) & 58.5(9) & 51.0(10) & 81.2(7) & 81.1(10) \\ NB & 70.0(6) & 69.4(8) & 68.7(8) & 70.7(8) & 51.1(11) & 54.0(18) & 74.8(16) & 70.9(9) \\ RF & 60.0(6) & 58.5(3) & 62.5(3) & 63.0(3) & 55.2(4) & 56.1(8) & 72.9(6) & 71.2(5) \\ MLP & 67.8(9) & 63.2(6) & 67.7(11) & 68.1(9) & 58.5(10) & 55.9(10) & 71.9(8) & 72.5(7) \\ LR & 66.2(11) & 66.0(8) & 67.5(7) & 66.8(13) & 53.1(5) & 46.5(5) & 75.0(12) & 76.1(10) \\ LSTM & 64.6(8) & 63.2(6) & 67.7(7) & 64.2(9) & 52.7(9) & 50.0(11) & 70.0(7) & 71.9(11) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance Comparison across Different ML Techniques: The table depicts the mean and standard deviations (presented in parentheses) of 5-fold cross-validation accuracy for all models, including BTsC (L) and BTsC (V), where (L) and (V) represent combination methods using likelihood and voting, respectively.
On the other hand, simpler models like Naive Bayes rely on the assumption of features' independence, which is often unrealistic in real-world applications. The key feature of our model is its interpretability, essential for validating predictions and understanding brain functions [37]. Our BTsC model offers a clear view of which areas of the brain are encoding stimuli and at what time these features exhibit the most discriminative power. Additionally, this model possesses flexibility in utilizing diverse feature vectors and can provide insights into the specific contributions of each vector to the final outcome.
In our study, we leveraged both ERP and HGP dynamical features individually and in combination within the BTsC. While both ERPs and HGP independently provided significant information for decoding visual stimuli, we found that combining these features led to a marked increase in classification accuracy. This suggests that these neural features, while independently informative, offer complementary information that enhances the decoding power when combined, reflecting the complex and distributed nature of brain processing [30]. Thus, the integration of multiple neural features could contribute to more accurate and robust models for neuroscience applications.
Further expanding our research, we conducted an experiment to determine the most informative time period after image onset for training the BTsC model. This aspect is crucial, as it helps establish the optimal window for capturing the most discriminative neural features. In this experiment, we trained the BTsC model using different time windows post-image onset. Consequently, we determined the optimal time window post-image onset for training the BTsC model for each patient (Fig. 2-D). Moreover, this experiment revealed that in shorter time windows, HGP feature vectors are more involved than ERP.
Despite the encouraging results, our model has limitations. The assumption of channels' independence is a significant limitation, which we intend to address in future iterations. We plan to refine our model to account for possible dependencies and correlations between features and electrodes, which could further enhance the predictive accuracy of the BTsC model.
## 5 Conclusion
In this study, we introduced a novel Bayesian Time-series Classifier that utilizes both low-frequency event-related potentials and high gamma power features to effectively decode visual stimuli from intracranial EEG recordings. The BTsC model identifies encoding brain areas, discriminative features, and optimal time windows, outperforming other classifiers by 3% in accuracy due to its ability to capture distributed information flow. With its demonstrated success in decoding simple visual information and accommodating individual variations, this model holds promise for applications in neuroscience and clinical settings, including brain-computer interfaces and neural prosthetics. Future research will broaden the scope to include more cognitive tasks and modalities, personalize neurotechnology through additional neural features, and explore the impact of different covariance structures on our Bayesian Time-Series Classifier model. | |
2303.16854 | AnnoLLM: Making Large Language Models to Be Better Crowdsourced
Annotators | Many natural language processing (NLP) tasks rely on labeled data to train
machine learning models with high performance. However, data annotation is
time-consuming and expensive, especially when the task involves a large amount
of data or requires specialized domains. Recently, GPT-3.5 series models have
demonstrated remarkable few-shot and zero-shot ability across various NLP
tasks. In this paper, we first claim that large language models (LLMs), such as
GPT-3.5, can serve as an excellent crowdsourced annotator when provided with
sufficient guidance and demonstrated examples. Accordingly, we propose AnnoLLM,
an annotation system powered by LLMs, which adopts a two-step approach,
explain-then-annotate. Concretely, we first prompt LLMs to provide explanations
for why the specific ground truth answer/label was assigned for a given
example. Then, we construct the few-shot chain-of-thought prompt with the
self-generated explanation and employ it to annotate the unlabeled data with
LLMs. Our experiment results on three tasks, including user input and keyword
relevance assessment, BoolQ, and WiC, demonstrate that AnnoLLM surpasses or
performs on par with crowdsourced annotators. Furthermore, we build the first
conversation-based information retrieval dataset employing AnnoLLM. This
dataset is designed to facilitate the development of retrieval models capable
of retrieving pertinent documents for conversational text. Human evaluation has
validated the dataset's high quality. | Xingwei He, Zhenghao Lin, Yeyun Gong, A-Long Jin, Hang Zhang, Chen Lin, Jian Jiao, Siu Ming Yiu, Nan Duan, Weizhu Chen | 2023-03-29T17:03:21 | http://arxiv.org/abs/2303.16854v2 | # AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators
###### Abstract
Many natural language processing (NLP) tasks rely on labeled data to train machine learning models to achieve high performance. However, data annotation can be a time-consuming and expensive process, especially when the task involves a large amount of data or requires specialized domains. Recently, GPT-3.5 series models have demonstrated remarkable few-shot and zero-shot ability across various NLP tasks. In this paper, we first claim that large language models (LLMs), such as GPT-3.5, can serve as an excellent crowdsourced annotator when provided with sufficient guidance and demonstrated examples. To make LLMs better annotators, we propose a two-step approach, 'explain-then-annotate'. To be more precise, we begin by creating prompts for every demonstrated example, which we subsequently utilize to prompt an LLM to provide an explanation for why the specific ground truth answer/label was chosen for that particular example. Following this, we construct the few-shot chain-of-thought prompt with the self-generated explanation and employ it to annotate the unlabeled data. We conduct experiments on three tasks, including user input and keyword relevance assessment, BoolQ and WiC. The annotation results from GPT-3.5 surpass those from crowdsourced annotation for user input and keyword relevance assessment. Additionally, for the other two tasks, GPT-3.5 achieves results that are comparable to those obtained through crowdsourced annotation.
## 1 Introduction
Labeled data refers to a dataset that has been manually annotated with predefined target labels or categories. It is crucial to develop machine learning models for many NLP tasks, such as sentiment analysis Socher et al. (2013), machine translation Sutskever et al. (2014) and word sense disambiguation He and Yi (2022). The process of labeling data is typically done by human annotators who are given specific guidelines and criteria on how to assign labels to each instance in the dataset. For example, in sentiment analysis, each sentence or document may be labeled with a polarity score such as "positive", "negative", or "neutral". However, it is very labor-intensive and time-consuming to create a large dataset with human annotation, which limits the availability of such data and the applicability of machine learning models in various NLP tasks.
Previous works have demonstrated that large-scale pre-trained language models (LLMs), such as GPT-3 Brown et al. (2020) and PaLM Chowdhery et al. (2022), achieve impressive results on various downstream tasks without collecting large-scale task-specific data or tuning model parameters, using only a few examples as instructions. OpenAI has recently launched the GPT-3.5 series, upgraded versions of GPT-3 trained on a blend of text and code published before the end of 2021. Meanwhile, OpenAI also unveiled ChatGPT, another fine-tuned version of GPT-3.5. Within just two months of its release, ChatGPT attracted a massive following of 100 million users worldwide, garnering significant global attention.
Wang et al. (2021) showed that augmenting manually labeled data with pseudo-labeled data from GPT-3 could enhance the performance of models, particularly when the labeling budget is restricted. However, the quality of GPT-3's labeled data still lags behind that of manually labeled data. Considering the GPT-3.5 model's remarkable few-shot and zero-shot capabilities across various tasks and the expensive nature of manual annotation, we raise an essential and significant inquiry: Can GPT-3.5 potentially replace crowdsourced annotators?
Before answering this question, let us go over
the process of crowdsourced data annotation. First, we need to provide the annotators with a specific definition of the task. Then, for classification tasks, we need to tell the annotators the specific meanings of each category. Finally, we need to provide the annotators with a few examples that have already been annotated as references. Naturally, we can guide GPT-3.5 to annotate data using the same approach as with human annotators by providing it with task definitions and example samples. Furthermore, we found that requesting a LLM to furnish the rationale behind the ground truth label or answer for a particular example can prompt the LLM to produce high-quality explanations. Based on this, we construct the few-shot chain-of-thought prompt Wei et al. (2022) with the self-generated explanations to annotate data. We refer to this method as 'explain-then-annotate', which further improves the annotation quality.
We summarize our contributions as follows:
(1) For the first time, we propose that AnnoLLM, an **Anno**tation system powered by **L**arge **L**anguage **M**odels, can replace crowdsourced annotators to annotate data.
(2) To enhance the data annotation capabilities of LLMs, we suggest a two-step approach called 'explain-then-annotate'. In this approach, we leverage ChatGPT to generate a few-shot chain-of-thought prompt, which we then use to annotate unlabeled data.
(3) Our results on three datasets verify the feasibility of substituting crowdsourced annotators with GPT-3.5\({}^{1}\), where it either surpasses or matches crowdsourced annotators.
Footnote 1: In this paper, we focus on using GPT-3.5 series models to annotate data for classification tasks.
## 2 Approach
Providing detailed instructions to annotators is crucial when using crowdsourcing to annotate data, as it can help crowd workers better understand task requirements and annotation standards, ultimately improving the quality and accuracy of annotated data. The instructions for each task mainly include three parts: task description, category definition, and demonstrated examples.
Motivated by the guidance to human annotators, we will introduce how to convert GPT-3.5 into a zero-shot data annotator by providing guidance on the task description and category definitions in Section 2.1. Then, we will show how to transform GPT-3.5 into a few-shot data annotator with the demonstrated examples in Section 2.2. We show the crowdsourcing annotation and our proposed 'explain-then-annotate' processes in Figure 1.
### GPT-3.5 as a Zero-shot Data Annotator
In the zero-shot setting, we can only provide the task description and category definitions to annotators. The task description includes information on task definition, task purpose, and so on. Category definitions require clear definitions for each category, so that the crowd workers can understand the meaning and standard of each category.
In the same vein, we furnish GPT-3.5 with the task description and category definitions, which enable GPT-3.5 to function as a zero-shot data annotator. We show the zero-shot prompts for GPT-3.5 on the user query and keyword relevance assessment, WiC and BoolQ tasks in Tables 10, 11 and 12, respectively.
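For illustration, such a zero-shot annotation call could be assembled roughly as follows. This is a schematic sketch rather than the exact prompts of Tables 10-12, and `call_gpt35` is an assumed placeholder for whatever GPT-3.5 API client is used.

```python
def build_zero_shot_prompt(task_description, category_definitions, item):
    """Compose a zero-shot annotation prompt from the task description
    and category definitions, mirroring the guidance given to crowd workers."""
    categories = "\n".join(f'"{name}": {definition}'
                           for name, definition in category_definitions.items())
    return (f"{task_description}\n\n"
            f"The definitions of the categories are:\n{categories}\n\n"
            f"Input: {item}\n"
            f"Answer with exactly one category name.")

def annotate_zero_shot(call_gpt35, task_description, category_definitions, item):
    """call_gpt35 is an assumed stand-in for the GPT-3.5 API client."""
    prompt = build_zero_shot_prompt(task_description, category_definitions, item)
    return call_gpt35(prompt).strip()
```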
### GPT-3.5 as a Few-shot Data Annotator
When annotating data, providing annotation examples for each category to annotators can help crowd workers better understand how to annotate and classify the data accurately. Similarly, we also provide the demonstrated examples to GPT-3.5, thus changing it into a few-shot data annotator. We show the few-shot prompts for GPT-3.5 on the user query and keyword relevance assessment, WiC and BoolQ tasks in Tables 13, 14 and 15, respectively.
Recent work Wei et al. (2022) has shown that adding human written rationales to the demonstrated examples, called as chain-of-thought (CoT), can elicit the LLMs' reasoning ability, thus gaining improvements on reasoning tasks. In this paper, we find that GPT-3.5 2 is a good reasoner who can automatically generate reasonable explanations for demonstrated examples. In the following, we will introduce how to generate explanations with GPT-3.5, and then create few-shot CoT prompts with the generated explanations.
Footnote 2: We resort to ChatGPT, the latest version of GPT-3.5, to generate explanations.
**Generating Explanations with GPT-3.5.** In this step, we simulate the way humans explain problems to induce GPT-3.5 to explain the annotated examples. To be concrete, we present the task description, specific example, and the corresponding true labels to GPT-3.5, and then ask it to answer why the corresponding answer for that example is
the given label. By doing so, GPT-3.5 will generate reasonable explanations. For the user query and keyword relevance assessment task, we show how to use GPT-3.5 to explain why the label between the user query "**google data studio sharepoint**" and the keyword "**sharepoint migration tool file share**" is "**Bad**" in Table 1. Please refer to Table 8 and Table 7 for how to generate explanations for the demonstrated examples of BoolQ and WiC.
Creating Few-shot CoT Prompts.After getting the explanations generated by GPT-3.5, we can construct the few-shot CoT prompt. We show the few-shot CoT prompts for GPT-3.5 on the user query and keyword relevance assessment, WiC and BoolQ tasks in Tables 16, 17 and 18, respectively.
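The two-step 'explain-then-annotate' workflow can be summarized schematically as below. The prompt wording is only indicative (the actual prompts are shown in Tables 1 and 16-18), and `call_gpt35` again stands in for the LLM API.

```python
def generate_explanation(call_gpt35, task_description, example, gold_label):
    """Step 1: ask the LLM why the ground-truth label holds for a
    demonstrated example; the reply becomes a CoT rationale."""
    prompt = (f"{task_description}\n\n"
              f"Example: {example}\n"
              f'Briefly explain why the answer is "{gold_label}".')
    return call_gpt35(prompt).strip()

def annotate_with_cot(call_gpt35, task_description, demonstrations, new_example):
    """Step 2: build the few-shot CoT prompt from the self-generated
    explanations and annotate an unlabeled example."""
    shots = "\n\n".join(f"Example: {ex}\nExplanation: {expl}\nAnswer: {label}"
                        for ex, expl, label in demonstrations)
    prompt = (f"{task_description}\n\n{shots}\n\n"
              f"Example: {new_example}\nExplanation:")
    return call_gpt35(prompt).strip()
```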
## 3 Experiment
### Tasks and Datasets
We evaluate AnnoLLM on three different tasks, including the user query and keyword relevance assessment, BoolQ and WiC. The basic statistics of these datasets are shown in Table 2.
The QK (user query and keyword relevance assessment) task aims to judge whether the user input query is related to the given keywords.
BoolQ, which stands for Boolean Questions and was introduced by Clark et al. (2019), is a question-answering task. In this task, each example comprises a brief passage and a yes/no question related to the passage. The users of the Google search engine anonymously and without solicitation submit the questions, which are then matched with a paragraph from a Wikipedia article that provides the answer.
The WiC (Word-in-Context) task, developed by Pilehvar and Camacho-Collados (2019), involves disambiguating word senses through binary classification of sentence pairs. In this task, two text snippets are provided along with a polysemous word that occurs in both sentences. The objective is to determine whether the target word shares the same sense in both sentences.
Since all three tasks are binary classification tasks, accuracy is used to evaluate their results.
### Human Performances
To evaluate the human performance on the QK dataset, we use UHRS3, a crowdsourcing platform, to annotate this data. Before annotation, we first present the annotators with the task description, category definitions, and annotated examples. Then, we invite three annotators to label a data instance. If the annotated results of the three workers are consistent, then this result will be considered as the annotated label. Otherwise, we would invite other annotators to continue annotating this data instance until three people have consistent annotation results. We require the crowdsourced annotators to annotate all development and test sets.
Footnote 3: [https://prod.uhrs.playmsn.com/uhrs/](https://prod.uhrs.playmsn.com/uhrs/)
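The agreement rule described above can be written as a short loop; the sketch below is purely illustrative and assumes a stream of annotator judgments for a single instance.

```python
from collections import Counter

def consensus_label(judgments, required_agreement=3):
    """Return the first label that accumulates `required_agreement`
    identical votes as annotators are consulted one after another."""
    counts = Counter()
    for label in judgments:
        counts[label] += 1
        if counts[label] == required_agreement:
            return label
    return None  # agreement not reached with the judgments provided

print(consensus_label(["Bad", "Not bad", "Bad", "Bad"]))  # -> Bad
```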
BoolQ and WiC are two of the most challenging datasets in superGLUE (Wang et al., 2019). In the case of BoolQ, three authors labeled 110 randomly chosen examples, with human performance achieving 89%. Regarding WiC, Pilehvar and Camacho-Collados (2019) selected four groups of 100 instances from the test set, assigning each
Figure 1: On the left is the annotation process used by crowdsourced workers, while on the right is AnnoLLM's process. AnnoLLM mimics the manual annotation process, with the exception that it generates explanations for each example before annotation. This ensures that each demonstrated example is accompanied by helpful explanations, making the annotation guidelines more informative and useful.
group to an annotator, and achieving a human performance of 80%.
### Experimental Results
User Query and Keyword Relevance Assessment.Table 3 shows our experimental results on the QK development and test sets. Surprisingly, GPT-3.5 performs worse under the few-shot setting than under the zero-shot setting on this task. Fu and Khot (2022) speculate that the instruction tuning on GPT-3.5 may decrease the model's in-context learning ability but increase the model's zero-shot ability.
On the other hand, GPT-3.5 with a 4-shot CoT prompt outperforms its counterparts under the zero-shot and few-shot settings by around 6 and 8 points, respectively. Impressively, it even surpasses the crowdsourced annotators. The experimental results presented herein provide compelling evidence of the effectiveness of our proposed method, AnnoLLM. Previous studies Wei et al. (2022); Wang et al. (2022) have shown that the few-shot CoT, constructed using human-written explanations, can enhance the model's reasoning ability on reasoning tasks. However, our approach differs from previous methods in that we utilize explanations generated by the large model itself, thereby allowing the model's reasoning ability to emerge. Additionally, we have demonstrated, for the first time, the effectiveness of the CoT method on tasks other than typical reasoning tasks.
\begin{table}
\begin{tabular}{p{42.7pt}|p{113.8pt} p{113.8pt}} \hline \hline & Directions: Given a search engine query: “**google data studio sharepoint**”, first, consider what the user could have in mind when they type in the query and allow for misspellings or other ambiguity, then classify the relevance of keyword: “**sharepoint migration tool file share**” to the query into one of the following categories: “Not bad”, “Bad”. The definitions of the categories are “Not bad”. “The keyword is relevant to the user’s search query. This can include: broader or narrower product selection, competitor or alternative products, accessories, products often purchased together and related topics as well as direct matches to the user’s search. \\
**Input** & **“Bad”: the keyword is not relevant to the user’s search query. There is no relationship between the query and keyword. This includes but is not limited to: incorrect/unrelated product, unrelated topic, wrong location when location is important, cannot be used in place of query product nor are they commonly purchased together, etc.** \\
**Output 1** & **“Bad”: the keyword is relevant to the user’s search query. This can include: broader or narrower product selection, competitor or alternative products, accessories, products often purchased together and related topics as well as direct matches to the user’s search.** \\ & **“Bad”: the keyword is not relevant to the user’s search query. There is no relationship between the query and keyword. This includes but is not limited to: incorrect/unrelated product, unrelated topic, wrong location when location is important, cannot be used in place of query product nor are they commonly purchased together, etc.** \\ & **“Bad”: the keyword is not relevant to the user’s search query because it deals with SharePoint migration and file sharing, whereas the user’s query is focused on Google Data Studio and SharePoint. There is no relationship between these two topics, and the keyword cannot be used in place of the query product.** \\ & **“Bad”: the keyword “sharepoint migration tool file share” is not directly related to the query “google data studio sharepoint”. The keyword refers to tools and services for migrating data from SharePoint to another file share platform, whereas the query is about using Google Data Studio to connect with SharePoint data. There is no clear relationship between the two topics.** \\
**Output 4** & **“Bad”: the relevance of the keyword “sharepoint migration tool file share” to the search query “google data studio sharepoint” is “Bad”. The keyword is about a tool for migrating data from SharePoint, which is not directly related to Google Data Studio. Google Data Studio is a data visualization and reporting tool, not a migration tool for SharePoint.** \\
**Output 5** & **“Bad”: the relevance is “Bad” as the keyword “sharepoint migration tool file share” has no direct relation to the user’s search query “google data studio sharepoint”. The keyword contains to a tool used for migrating data within a SharePoint platform, while the query is focused on Google Data Studio and SharePoint. These two concepts are not directly related and therefore, the keyword is not relevant to the user’s search query.** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Illustration of how to use a LLM to generate explanations for the user query and keyword relevance assessment task. “Input” and “Output” refer to the prompt provided to the LLM and the generated explanations, respectively.
\begin{table}
\begin{tabular}{p{42.7pt}|p{113.8pt} p{113.8pt}} \hline \hline Task / Partition & Dev & Test \\ \hline QK & 350 & 1000 \\ BoolQ & 3270 & 3245 \\ WiC & 638 & 1400 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The basic statistics of the QK, BoolQ and WiC datasets.
Word Sense Disambiguation.Table 4 presents our experimental results on the WiC development and test sets, from which we also see that AnnoLLM, i.e., GPT-3.5 + 8-shot CoT, outperforms its few-shot counterpart significantly. Nevertheless, there remains a considerable disparity between AnnoLLM and crowdsourced annotators. This is primarily due to the task's inherent complexity, and currently, even the supervised SOTA models still exhibit a substantial gap compared to manual annotation.
Question Answering.As shown in Table 5, AnnoLLM surpasses crowdsourced annotators on the BoolQ development and test sets, but does not show significant improvement compared to the few-shot method. However, this does not imply that CoT with generated explanation is not useful for this task. In Section 3.5, we found that AnnoLLM with CoT exhibits better stability across various prompts, while its counterpart with the few-shot setting is highly sensitive to templates.
### Ablation Study
In this section, we perform an ablation study to compare the effect of different methods used to generate explanations on AnnoLLM's performance.
Firstly, we want to investigate whether the use of ground truth labels is beneficial for generating explanations for demonstrated examples. To answer this question, we induce the LLM to generate explanations using prompts that contain ground truth labels, and prompts that do not contain labels (we only replace the last sentence of the original prompt "_Briefly explain why the relevance is "Bad", with a response length not exceeding 100 words._" with "_Briefly explain the relevance between the keyword and query, with a response length not exceeding 100 words._"). From Table 6, we found that omitting the true labels when generating explanations impairs the performance of AnnoLLM by about 3 points on the QK test set (row 4 vs. row 1). This is because the model may give explanations for an incorrect answer without the guidance of ground labels. Please refer to Table 1 and Table 9 for the specific prompts and generated explanations.
Based on the method in the fourth row, we filter out the incorrect explanations using true labels and retain three correct explanations for three out of four demonstrated samples. However, for the third sample, all five generated explanations are wrong, and we have to keep three incorrect explanations for this demonstrated example. That is why using the golden label to filter explanations does not bring significant improvement (row 5 vs. row 4).
As shown in Table 1, we found that LLM reveals the true answer at the beginning of the generated explanation, and then provides an explanation for it. This is different from previous work Wei et al. (2022); Wang et al. (2022), which induces the model to provide an explanation first and then output the answer. Therefore, we remove the beginning sentence containing the label from the generated explanations (the underlined text in Table 16). However, this does not lead to improvement (row 2 vs. row 1). We speculate that this may be due to the difference in the nature of our task and traditional reasoning tasks. We remove the last sentence containing the answer to the demonstrated examples
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Models** & **Dev Set** & **Test Set** \\ \hline Crowdsourced Annotator & 80.0 & 80.0 \\ \hline
**Zero/Few-shot** & & \\ PaLM 540B + zero-shot & 59.1\({}^{\ddagger}\) & - \\ PaLM 540B + 5-shot & 64.6\({}^{\ddagger}\) & - \\ GPT-3.5 + zero-shot & 57.52 & 59.79 \\ GPT-3.5 + 8-shot & 67.71 & 66.36 \\ GPT-3.5 + 8-shot CoT & **71.47\({}^{\star}\)** & **69.17\({}^{\star}\)** \\ \hline
**Fine-tune** & & \\ T5 11B Raffel et al. (2020) & 77.3\({}^{\ddagger}\) & 76.9\({}^{\dagger}\) \\ PaLM 540B & 78.8\({}^{\ddagger}\) & 77.4\({}^{\dagger}\) \\ ST-MoE 32B Zoph et al. (2022) & **81.0\({}^{\ddagger}\)** & **77.7\({}^{\dagger}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Evaluation results on the WiC task. Accuracy is used as the evaluation metric. Results marked with \(\dagger\) and \(\ddagger\) are from the official SuperGLUE leaderboard4 and PaLM Chowdhery et al. (2022), respectively. Results marked with an asterisk (*) represent the average result of five few-shot CoT prompts constructed with different generated explanations. The number behind the model represents the size of the model’s parameters.
\begin{table}
\begin{tabular}{l|c c} \hline \hline
**Models** & **Dev Set** & **Test Set** \\ \hline Crowdsourced Annotator & 65.58 & 71.5 \\ \hline GPT-3.5 + zero-shot & 67.71 & 70.00 \\ GPT-3.5 + 8-shot & 65.71 & 67.80 \\ GPT-3.5 + 4-shot CoT & **74.17\({}^{\star}\)** & **75.60\({}^{\star}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation results on the QK task. Accuracy is used as the evaluation metric. Results marked with an asterisk (*) represent the average result of five few-shot CoT prompts constructed with different generated explanations.
(the Italicized text in Table 16), yet it does not have too much impact on the performance (row 3 vs. row 1). That is because the generated explanations contains the gold answer. To be in line with the prompt format of previous work Wei et al. (2022), we still append the ground truth label to the generated explanation.
### More Analysis and Discussion
Consistency Analysis of Generated Explanations.In the ablation study, we found that the performance of AnnoLLM is highly dependent on the generated explanations. This leads to a natural inquiry: Are the explanations produced by the LLM consistent enough for the same demonstrated sample? To answer this question, we generate five explanations for each demonstrated sample, and obtain five different few-shot CoT prompts. The results obtained using different few-shot CoT prompts are presented in Figure 2 (a). It can be observed that different few-shot CoT prompts achieve similar performance in the QK, WiC, and BoolQ tasks. This indicates that the quality of the explanations generated by the LLM is sufficiently consistent.
Stability Analysis of Generated Explanations.From Figure 2 (a), we can see that AnnoLLM with few-shot CoT prompts significantly outperforms its counterpart with the few-shot setting, with a high margin of 5 percentage points on the QK and WiC datasets in most cases. However, the improvement is quite modest on BoolQ, where it is mostly less than 0.5. This does not mean that AnnoLLM with few-shot CoT prompts has no effect on the BoolQ task.
To further analyze this, we make slight modifications to the existing prompts for BoolQ to obtain three corresponding few-shot CoT and few-shot prompts (Please refer to Appendix E for the few-shot and few-shot CoT prompts). The experimental results in Figure 2 show that the few-shot method is very sensitive to the templates, since even with slight modifications to the templates, the experimental performances will drop from around 89 to below 80 points. In comparison, AnnoLLM with few-shot CoT prompts suffer less performance loss, and we found that in these cases, AnnoLLM with few-shot CoT prompts outperforms its counterpart with few-shot templates by around 4 points. Therefore, we conclude that few-shot settings are more picky about templates, but few-shot CoT exhibits better stability across different templates.
## 4 Related Work
### Large-scale Pre-trained Language Models
GPT (Generative Pre-trained Transformer) is a family of language models developed by OpenAI, designed to generate human-like natural language text. The GPT models are based on the Transformer architecture Vaswani et al. (2017) and are pre-trained on an enormous corpus of text data with the standard language modeling objective of predicting the next token given the previous context. OpenAI has continuously increased the parameters and training data of its models, releasing GPT Radford (2018), GPT-2 Radford et al. (2019), and GPT-3 Brown et al. (2020) from 2018 to 2020. GPT-3 is a powerful language model with 175 billion parameters and marked a significant advance in the field of NLP. One of the unique features of GPT-3 is its ability to perform in-context learning, where one can apply it to various tasks by simply providing few-shot demonstrations without any gradient updates or fine-tuning. Furthermore, OpenAI fine-tuned GPT-3 on code data and instruction data, releasing Codex Chen et al. (2021) and InstructGPT Ouyang et al. (2021), respectively. Recently, OpenAI released the GPT-3.5 series models.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Models** & **Dev Set** & **Test Set** \\ \hline Crowdsourced Annotator & 89.0 & 89.0 \\ \hline
**Zero/Few-shot** & & \\ GPT-3 175B + zero-shot & 60.5 & - \\ Gopher 280B + zero-shot Rae & 79.3 & - \\ et al., 2021 & 83.7 & - \\ Chinchilla 70B + zero-shot & 83.7 & - \\ Hoffman et al., 2022 & 84.8 & - \\ PaLM 540B + zero-shot & 88.0 & - \\ LLaMA 65B + zero-shot & 85.3 & - \\ Touvron et al., 2023 & 84.28 & 84.30 \\ GPT-3.5 + 8-shot & 89.17 & 89.10 \\ GPT-3.5 + 8-shot CoT & **89.69** & **89.20** \\ \hline
**Fine-tune** & & \\ T5 11B Raffel et al. (2020) & 90.8\({}^{\dagger}\) & 91.2\({}^{\dagger}\) \\ PaLM 540B & 92.2\({}^{\ddagger}\) & 91.9\({}^{\dagger}\) \\ ST-MoE 32B Zoph et al. (2022) & **93.1\({}^{\ddagger}\)** & **92.4\({}^{\dagger}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Evaluation results on the BoolQ task. Accuracy is used as the evaluation metric. Results marked with \(\dagger\) and \(\ddagger\) are from the official SuperGLUE leaderboard and PaLM, respectively. The number behind the model represents the size of the model’s parameters.
These models, including text-davinci-003 and ChatGPT, are trained on text and code data and then tuned with supervised instructions and reinforcement learning from human feedback. Some recent work has shown that GPT-3.5 models have strong few-shot and zero-shot learning ability on various NLP tasks, such as machine translation (Jiao et al., 2023) and information extraction (Wei et al., 2023).
In this paper, we first propose that we can readily change GPT-3.5 to a good data annotator for a specific task by providing the detailed annotation instructions similar to human annotators.
### Pseudo Annotated Data
Creating human-annotated data is tedious and costly, particularly for complex tasks or specialized domains where there may not be sufficient data available. Therefore, creating pseudo-annotated data has been widely used to generate labeled data for a specific task when there is a limited amount of annotated data available. Back-translation involves translating a target language sentence back into the source language, which is first proposed to improve neural machine translation models with synthetic parallel data (Sennrich et al., 2016). Beyond machine translation, the technique of back-translation has been applied to unsupervised text style transfer (Prabhumoye et al., 2018) and image style transfer (Zhu et al., 2017). In addition, rule-based methods are also widely used to construct synthetic data. For example, Zhang et al. (2020) resort to the lead bias to create paired data to pre-train the text summarization model, PEGASUS. Lee et al. (2019) pre-trained the retriever with the Inverse Cloze Task, which aims to predict the context based on the given sentence.
However, these methods are designed for specific tasks and it is difficult to generalize them to other tasks. In this paper, we study how to transform the powerful language model GPT-3.5 into a general-purpose data annotator. By providing the corresponding task description and few-shot chain-of-thought demonstrations, it can be easily used to
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline \multicolumn{5}{c|}{**Variants of GPT-3.5 + 4-shot CoT**} & \multicolumn{2}{c}{**Datasets**} \\ \hline \# & **Generate E with L** & **Delete L from E** & **Filter E with L** & **Append L to L** & **Dev Set** & **Test Set** \\ \hline
1 & ✓ & & & ✓ & **74.17** & **75.60** \\
2 & ✓ & ✓ & & ✓ & 72.97 & 74.76 \\
3 & ✓ & & & & 74.09 & 75.44 \\
4 & & & & ✓ & 72.63 & 72.84 \\
5 & & & ✓ & ✓ & 73.05\({}^{\dagger}\) & 73.20\({}^{\dagger}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study on the QK task. ‘E’ and ‘L’ refer to the generated explanations and ground truth labels, respectively. Results marked with a dagger (\(\dagger\)) represent the average outcome of three few-shot CoT prompts, each constructed with different generated explanations, while the remaining results represent the average of five.
Figure 2: Subfigure (a) shows the performance on development sets for different few-shot CoT prompts, created with different explanations. Subfigure (b) shows the performance for different few-shot and few-shot CoT prompts on the development set of BoolQ.
annotate data for various tasks.
## 5 Conclusion
In this paper, we introduce AnnoLLM, a novel annotation system powered by LLMs that has the potential to replace traditional crowdsourced annotators. Additionally, we propose a two-step approach, 'explain-then-annotate', to enhance the data annotation capabilities of LLMs. The approach leverages LLMs to generate a few-shot chain-of-thought prompt, which is then used to annotate unlabeled data. Our experimental results on three datasets demonstrate the feasibility of using LLMs to substitute crowdsourced annotators, which highlights the potential to facilitate the development of using LLMs like GPT-3.5 to annotate data for various NLP tasks.
| 自然言語処理(NLP)の多くのタスクは、高性能な機械学習モデルをトレーニングするためにラベル付きデータに依存します。しかし、データアノテーションは、特に大量のデータや専門的な領域を含むタスクでは、時間と費用がかかります。近年、GPT-3.5シリーズモデルは、さまざまなNLPタスクにおいて驚くほど少ないショットとゼロショット能力を有しています。この論文では、まず、大規模言語モデル(LLM) such as GPT-3.5が、適切なガイダンスと示範例を提供された場合、優れた crowdsourced annotatorとなる可能性を主張します。そのため、AnnoLLMというアノテーションシステムを提案しました。AnnoLLMは、説明してからアノテーションするという2段階アプローチを採用しています。具体的には、まずLLMに特定の ground truth 回答/ラベルが与えられた例に対する説明を求めます。そして、自己生成した説明とそれを用いたFew-shot chain-of-thought |
2306.02027 | Evolving Knowledge Mining for Class Incremental Segmentation | Class Incremental Semantic Segmentation (CISS) has been a trend recently due
to its great significance in real-world applications. Although the existing
CISS methods demonstrate remarkable performance, they either leverage the
high-level knowledge (feature) only while neglecting the rich and diverse
knowledge in the low-level features, leading to poor old knowledge preservation
and weak new knowledge exploration; or use multi-level features for knowledge
distillation by retraining a heavy backbone, which is computationally
intensive. In this paper, we for the first time investigate the efficient
multi-grained knowledge reuse for CISS, and propose a novel method, Evolving
kNowleDge minING (ENDING), employing a frozen backbone. ENDING incorporates two
key modules: evolving fusion and semantic enhancement, for dynamic and
comprehensive exploration of multi-grained knowledge. Evolving fusion is
tailored to extract knowledge from individual low-level feature using a
personalized lightweight network, which is generated from a meta-net, taking
the high-level feature as input. This design enables the evolution of knowledge
mining and fusing when applied to incremental new classes. In contrast,
semantic enhancement is specifically crafted to aggregate prototype-based
semantics from multi-level features, contributing to an enhanced
representation. We evaluate our method on two widely used benchmarks and
consistently demonstrate new state-of-the-art performance. The code is
available at https://github.com/zhiheLu/ENDING_ISS. | Zhihe Lu, Shuicheng Yan, Xinchao Wang | 2023-06-03T07:03:15 | http://arxiv.org/abs/2306.02027v2 | # Efficient Multi-Grained Knowledge Reuse for Class Incremental Segmentation
###### Abstract
Class Incremental Semantic Segmentation (CISS) has been a trend recently due to its great significance in real-world applications. Although the existing CISS methods demonstrate remarkable performance, they either leverage the high-level knowledge (feature) only while neglecting the rich and diverse knowledge in the low-level features, leading to poor old knowledge preservation and weak new knowledge exploration; or use multi-level features for knowledge distillation by retraining a heavy backbone, which is computationally intensive. In this paper, we for the first time propose to efficiently reuse the multi-grained knowledge for CISS by fusing multi-level features with the frozen backbone and show a simple aggregation of varying-level features, _i.e._, naive feature pyramid, can boost the performance significantly. We further introduce a novel densely-interactive feature pyramid (DEFY) module that enhances the fusion of high- and low-level features by enabling their dense interaction. Specifically, DEFY establishes a per-pixel relationship between pairs of feature maps, allowing for multi-pair outputs to be aggregated. This results in improved semantic segmentation by leveraging the complementary information from multi-level features. We show that DEFY can be effortlessly integrated into three representative methods for performance enhancement. Our method yields a new state-of-the-art performance when combined with the current SOTA by notably averaged mIoU gains on two widely used benchmarks, _i.e._, 2.5% on PASCAL VOC 2012 and 2.3% on ADE20K.
Class incremental segmentation, Knowledge distillation, Feature pyramid, Dense interaction.
## I Introduction
Deep models have achieved great success on semantic segmentation [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] in a close-set fashion since 2015 [1], but those models inevitably faced catastrophic forgetting [12, 13, 14] - a problem that was first proposed in 1989, _i.e._, a model tends to forget the old knowledge when required to learn the new in sequential learning. Solving this problem is crucial and practical, _e.g._, a self-driving system is always needed to recognize new environments or traffic signs without forgetting the old concepts. Incremental learning [15, 16, 17, 18, 19, 20, 21, 22] provides a solution for this problem by learning a model in a continuous data stream, in which the learned model needs to perform well on both old and new scenarios.
Incremental learning has been introduced to semantic segmentation since 2019 [24], but those historical works were specialized to the classification problem and are thus not suitable for class incremental semantic segmentation (CISS) due to its unique issue - _background shift_[25]. That is, the background in step \(t\) may contain not only current classes but also the past and future ones. Regarding _background shift_, two effective techniques have been proposed: old knowledge based pseudo labeling [23, 26, 27] and unknown class modeling [23, 27]. Specifically, the pseudo labeling combines the pseudo labels from the model of step \(t-1\) and the ground-truth labels at step \(t\) to produce updated labels containing both old and current classes, but it neglects the future classes that may exist in the background. This limitation can be tackled by unknown class modeling, which often utilizes a class-agnostic model [28, 29] to detect all objects in one image. These objects are then used to model future classes. However, those methods encounter two limitations: (i) only the high-level knowledge (_the output of the feature extraction backbone_) is leveraged to
Fig. 1: Illustration of the difference between our method (**c**) and previous ones: **(a)** and **(b)**. Obviously, our method is more advantageous by efficiently leveraging the multi-grained knowledge in varying-level features without retraining the heavy backbone.
perform pseudo labeling and unknown class modeling [23, 27], while the low-level features that also contain abundant prior knowledge as well as more detailed semantics [3] are not used, as shown in Figure 1 (a), weakening their capability to preserve old knowledge and learn new classes (see the first two rows in Figure 2); and (ii) a heavy backbone, _e.g._, ResNet-101, needs to be retrained in the incremental step [26, 30] when reusing multi-level features for knowledge distillation. Facing the above limitations, an intuitive question might be raised: _can we efficiently reuse the multi-grained knowledge in diverse features without retraining a heavy backbone for CISS?_
In this paper, we for the first time investigate the efficient multi-grained knowledge reuse for CISS by fusing multi-level features without retraining the heavy backbone. The rationale is that the fusion enables a better utilization of the prior knowledge to serve incremental learning tasks, _i.e._, better old knowledge preservation and new knowledge exploration as shown in Figure 2. Specifically, we first propose a simple multi-level feature aggregation module called naive feature pyramid and apply it to a state-of-the-art method - MicroSeg [23]. The experimental results in Table I show that the fusion of high- and low-level features from any layer can boost the performance while aggregating all layers yields a new SOTA performance. This is reasonable as the low-level features (i) contain the well-learned prior knowledge; and (ii) offer more evident and detailed cues for objects/regions [3] with larger spatial dimensions, thereby producing better performance via feature fusion.
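To make the idea concrete, a naive feature pyramid of this kind (described in detail in Section III-C) can be sketched in a few lines of PyTorch. This is an illustration only, not the exact module used in the paper: the channel widths, the reduced dimension, and the choice of target resolution below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NaiveFeaturePyramid(nn.Module):
    """Fuse frozen backbone features {f1, f2, f3, f4}: reduce f1-f3 with
    1x1 convolutions, resample everything to a common resolution, and
    concatenate with the ASPP-processed high-level feature."""
    def __init__(self, low_channels=(256, 512, 1024), reduced=48, aspp=None):
        super().__init__()
        self.reduce = nn.ModuleList(
            nn.Conv2d(c, reduced, kernel_size=1) for c in low_channels)
        self.aspp = aspp if aspp is not None else nn.Identity()

    def forward(self, f1, f2, f3, f4):
        high = self.aspp(f4)
        target = f1.shape[-2:]   # target resolution is an assumption
        feats = [F.interpolate(conv(f), size=target, mode="bilinear",
                               align_corners=False)
                 for conv, f in zip(self.reduce, (f1, f2, f3))]
        feats.append(F.interpolate(high, size=target, mode="bilinear",
                                   align_corners=False))
        return torch.cat(feats, dim=1)
```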
Furthermore, we advance the naive feature pyramid by proposing a novel **DE**nsely-interactive **F**eature **p**Y**ramid (DEFY) module (see Figure 3) for a better multi-level feature fusion. In particular, DEFY improves the fusion of high- and low-level features by allowing their dense interaction. Concretely, it establishes a per-pixel relationship between pairs of feature maps and then aggregates multi-pair outputs into an updated feature map. The per-pixel relationship between feature maps is established using an attention mechanism, where each pixel's feature is weighted based on its similarity to the features of the neighboring pixels. This densely-interactive fusion enhances the complementary information provided by multi-level features, leading to more accurate semantic segmentation results.
We summarize our contributions as follows:
* To our knowledge, this is the first study to investigate reusing multi-grained knowledge efficiently by leveraging multi-level features without the need to retrain a heavy backbone for CISS. We empirically demonstrate that the utilization of multi-grained knowledge is essential for CISS and a simple aggregation of multi-level features using the naive feature pyramid can significantly improve the baseline's performance (**1.4%** mIoU gain).
* We further propose a novel densely-interactive feature pyramid (DEFY) module that enhances the fusion of multi-level features by enabling the establishment of per-pixel relationships and allowing for their dense interaction. We also discuss the differences between DEFY and other hierarchical feature aggregation approaches in Sec II-D.
* We show that the proposed DEFY can be easily integrated into three representative CISS models as a plug-in to enhance their performance. The scalability and ease of integration of DEFY make it a promising tool for advancing other fields of semantic segmentation.
* We conduct extensive experiments to evaluate the performance of the proposed DEFY, and the results demonstrate that it achieves a new state-of-the-art performance when integrated with the current SOTA CISS method.
Fig. 2: Illustrating the significance of using multi-grained knowledge for CISS. The top two rows are the results using the high-level knowledge (the feature \(f_{4}\)) only while the bottom two show the results with multi-grained knowledge (four-level features \(f_{1},f_{2},f_{3},f_{4}\)) being used. \(f_{i}\): the feature extracted from the \(i^{th}\) block of ResNet-101. With the high-level knowledge only, the model tends to not only forget the old knowledge learned in the first step, _e.g._, the “cow” is misclassified as “sheper” in the \(1^{st}\) row, but also lose the ability to learn new classes, _e.g._, the “TV” is totally missed (\(2^{nd}\) row). In contrast, using multi-grained knowledge (bottom two) can enhance both old knowledge preservation and new knowledge exploration for CISS. Baseline: MicroSeg [23]. Set-up: **15-5 (2 steps)** on PASCAL VOC 2012.
## II Related Work
### _Semantic Segmentation_
Semantic segmentation [1, 2, 33, 1, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] is a form of pixel-level prediction task that clusters parts of an image together which belong to the same object class. The introduction of Fully Convolutional Networks (FCN) [1] in 2015 marked a significant breakthrough in the field of semantic segmentation, and since then, numerous follow-up works have been proposed to improve the performance by introducing novel architectures, _e.g._, Dilated Convolutions [34], Pyramid Pooling Module [2], U-Net [35], Atrous Spatial Pyramid Pooling (ASPP) [4, 5], and HRNet [33]. Generally, those methods all do per-pixel classification with a regular multi-class cross-entropy loss. In contrast, recent works [29, 36] reformulated semantic segmentation as a mask classification task, _i.e._, predicting a set of binary masks, each of which is associated with a global class label. Despite the high performance in a close-set fashion, all mentioned methods suffer from _catastrophic forgetting_[12, 13, 14] when learning on new classes sequentially. This problem has raised the interest of researchers to develop semantic segmentation models that can learn incrementally. In this paper, we also follow this line to solve the incremental semantic segmentation task.
### _Class Incremental Learning_
Class incremental learning (CIL) aims to learn a classification model with the number of classes increasing sequentially. Existing methods can be mainly grouped into three categories: parameter isolation [17, 18, 37, 38], replay-based [39, 40, 41, 42, 43, 44] and regularization-based methods [45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60]. Specifically, parameter isolation based approaches often assign independent parameters for each incremental task, but face an increasing amount of parameters with the number of tasks. Replay-based methods tend to either build a memory bank that stores a few samples of old classes [39, 40, 41, 42] or learn a network to generate samples from old classes [43]. Those samples are further used to jointly train with new classes for preventing old knowledge forgetting. In contrast, regularization-based methods mainly focus on transferring old knowledge learned on old classes to the new model by knowledge distillation [16] or designing novel loss functions to prevent historical knowledge forgetting [56, 57, 58, 59]. Note that the mentioned techniques are not mutually exclusive but complementary. In this paper, we propose an incremental learning method for semantic segmentation instead of classification.
### _Class Incremental Semantic Segmentation_
Class incremental semantic segmentation (CISS) is a similar task to CIL but aims to learn a segmentation model incrementally. Due to the similarity, the techniques in CIL can be leveraged by CISS, _e.g._, knowledge distillation is used in ILT [24], MiB [25], SDR [61], PLOP [26], RCN [30], and DKD [47]; replay-based techniques are adopted by RECALL [62], SSUL-M [27], MicroSeg-M [23], and ALIFE [44]. However, CISS has its specific problem - background shift [25], _i.e._, the background in the \(t^{th}\) step may contain old or future classes, which cannot be solved by CIL methods. To that end, recent works have proposed two useful techniques: old knowledge based pseudo labeling [23, 26, 27] and unknown class modeling [23, 27]. This particular pseudo labeling complements the ground truth labels in the \(t^{th}\) step with pseudo labels of old classes predicted by the model of \((t-1)^{th}\) step while unknown class modeling tends to utilize a pre-trained class-agnostic model for possible objects detection and future class modeling. Despite the effectiveness, those methods exist some limitations: (i) the old knowledge in low-level features with much subtler details are overlooked when performing pseudo labeling and unknown class modeling [23, 27]; (ii) a heavy backbone is required to retrain in the incremental step [26, 30] when using multi-level features for knowledge distillation. In this paper, we propose to efficiently leverage the multi-grained knowledge in various features without retraining the parameters of backbone.
### _Hierarchical Feature Aggregation_
Hierarchical feature aggregation (HFA) [3, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73] is a widely used technique that involves leveraging the information present at different levels or scales within a deep model to learn more expressive representations, which can then be applied to various downstream tasks. Here, we mainly review different types of HFA in semantic segmentation [3, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78] and differentiate our method from the previous approaches. From the technical perspective, existing HFA approaches can be roughly divided into two categories, _i.e._, multi-level feature aggregation (MFA) [3, 63, 65, 66, 67, 68, 69, 72, 73] and contextual feature aggregation (CFA) [74, 75, 76, 77, 78]. Specifically, MFA based approaches typically employ a feature pyramid construction strategy, where feature maps at different scales are utilized. These maps are processed through pyramidal pooling [3, 4] or dilated/atrous convolutions [5] to facilitate the aggregation process. The aggregation is often achieved by either gradually summing the multi-level feature maps [79, 83] or concatenating multi-level feature maps along the channel dimension [2, 4]. Our naive feature pyramid used to examine the significance of leveraging multi-grained knowledge for CISS also follows this line of approach. In contrast, CFA based approaches offer a solution to model patch-wise relationships in a given image through using attention mechanisms, _e.g._, Transformer-based architectures [74, 75, 76, 77, 84, 85].
However, MFA based approaches overlook the interaction between pair-wise features by simply summing or concatenating multi-level features while CFA based approaches initially treat each patch in an image with equal attention without stressing the significant regions. In this paper, we focus on aggregating multi-level features for CISS and design a simple-yet-effective module that (i) explicitly builds the pair-wise and pixel-wise interaction between high- and low-level features; (ii) maintains the dominance of high-level knowledge while complementing it with diverse low-level knowledge by using high-level feature as the **Query** and low-level features as the **Key** and **Value** in an attention module; (iii) is a light-weight plug-and-play module that shows the effectiveness on three representative CISS methods.
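As a rough illustration of this design (high-level feature as **Query**, low-level features as **Key** and **Value**, pair-wise outputs aggregated), the sketch below uses a standard multi-head attention block. The embedding dimension, number of heads, weight sharing across pairs, and summation-based aggregation are simplifying assumptions, not the published DEFY module.

```python
import torch
import torch.nn as nn

class PairwiseCrossAttentionFusion(nn.Module):
    """Per-pixel, pair-wise fusion: high-level pixels act as queries,
    each low-level map supplies keys/values, and the per-pair outputs
    are aggregated by summation."""
    def __init__(self, high_dim, low_dims, embed_dim=256, num_heads=8):
        super().__init__()
        self.q_proj = nn.Conv2d(high_dim, embed_dim, kernel_size=1)
        self.kv_projs = nn.ModuleList(
            nn.Conv2d(c, embed_dim, kernel_size=1) for c in low_dims)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, high, lows):
        b, _, h, w = high.shape
        q = self.q_proj(high).flatten(2).transpose(1, 2)       # (B, HW, C)
        fused = q
        for proj, low in zip(self.kv_projs, lows):
            kv = proj(low).flatten(2).transpose(1, 2)           # (B, H'W', C)
            out, _ = self.attn(query=q, key=kv, value=kv)
            fused = fused + out                                 # sum over pairs
        return fused.transpose(1, 2).reshape(b, -1, h, w)

# Example with ResNet-101-like channel widths (values are assumptions):
# fuse = PairwiseCrossAttentionFusion(high_dim=2048, low_dims=(256, 512, 1024))
```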
## III Methodology
### _Problem Set-up_
Class incremental semantic segmentation (CISS) [23, 27] aims to learn a segmentation model incrementally by updating part/all of the parameters of the historical model and a set of new parameters specialized to the current task. Specifically, one CISS task often consists of \(T\) learning steps, each with a dataset \(D_{t}\) from \(C_{t}\) classes. Note that \(D_{t}\) may also contain historical classes from previous steps or future classes, but these classes are labeled as background, which results in a unique problem in CISS - _background shift_. Except background class, the given classes in the \(t^{th}\) step are disjoint with the classes from other steps, _i.e._, \(C_{t}\cap C_{i}=\emptyset\), where \(i\in\{1,\dots,T\}\setminus\{t\}\). In step \(t\), a model is learned on \(C_{t}\) classes only, the learned model tends to forget the historical knowledge acquired from old classes without accessing \(C_{0}\cup\dots\cup C_{t-1}\) classes, thus causing _catastrophic forgetting_ - the key problem in incremental learning. Overall, an incremental segmentation model is designed to tackle both _background shift_ and _catastrophic forgetting_, thereby achieving good pixel-wise recognition performance on the current classes as well as the classes from the past steps, _i.e._, \(C_{0}\cup\dots\cup C_{t}\).
### _Overview_
In general, a CISS model consists of a heavy backbone \(F_{b}\) and \(T\) light-weight segmentation heads \(\{H_{1},\dots,H_{T}\}\), in which \(F_{b}\) and \(H_{1}\) are fully trained in the \(1^{st}\) step, and then either [\(F_{b}\) and \(H_{t}\)] or [only \(H_{t}\)] are updated in step \(t\). When learning the model in step \(t\), it is common to borrow the old knowledge learned in step \(t-1\) by using the features of that model, _e.g._, the four-level features extracted from the four blocks of ResNet-101 [31] - the standard backbone used in existing methods. The weakness of current CISS methods is that they either mine the prior knowledge stored in the high-level features only [23, 27], or explore more knowledge in multi-level features but retrain the heavy backbone \(F_{b}\)[26, 30]. To this end, this work focuses on a better utilization of prior knowledge by efficiently leveraging multi-level features for improved CISS via training \(H_{t}\) only. Specifically, we propose to introduce a feature aggregation module between \(F_{b}\) and \(\{H_{1},\dots,H_{T}\}\). After a one-time training in the \(1^{st}\) step, the module can effectively fuse multi-level features for subsequent incremental segmentation tasks. For the module design, we first propose a naive feature pyramid (NFP) module and apply it to the current SOTA method MicroSeg [23] to show that a fusion of multi-level features can significantly boost its performance (refer to Table I). To advance the NFP, we further propose a novel densely-interactive feature pyramid (DEFY) module that achieves better feature aggregation by facilitating per-pixel relationship establishment and pair-wise interaction between features. Next, we introduce each module in detail.
### _Naive Feature Pyramid Module_
Given the backbone - ResNet-101, we first extract four-level features \(\{f_{1},f_{2},f_{3},f_{4}\}\) from its four blocks. Following the previous CISS methods [23], we forward \(f_{4}\) to the ASPP [4, 5] module \(\phi\) to obtain the updated feature \(\phi(f_{4})\). This feature is derived from the multi-level aggregation by one Conv layer, three pooling pyramids and one ASPP pooling layer, and thus already carries good abstract and global representations. However, \(f_{4}\) alone may lack the fine-grained details and local information that are necessary for accurately segmenting objects. For this reason, we aggregate low-level features to
Fig. 3: The overview of applying our densely-interactive feature pyramid (DEFY) to the Mask2Former [29] based baseline method [23]. In the \(1^{st}\) step, the entire model including backbone, DEFY, and segmentation head \(H_{1}\) are trained on \(C_{0}\) classes. During training, DEFY builds pair-wise and pixel-wise interactions between high- and low-level features to capture and aggregate multi-grained knowledge. In the follow-up steps, only a new segmentation head \(H_{t}\) is trainable for \(C_{t}\) classes. \(C_{t}\): the classes shown in step t.
\(\phi(f_{4})\) as a complement. Concretely, \(f_{1},f_{2},f_{3}\) are each projected by a 1x1 Conv layer (\(\theta_{1},\theta_{2},\theta_{3}\), respectively) for channel dimension reduction. Then, we concatenate all processed features into a new fused feature that serves as the input to the subsequent segmentation head. Since the processed features may have different spatial dimensions, we up-sample some of them to ensure that they all share the same spatial resolution before concatenation. Overall, the process can be formulated as follows.
\[f^{\star}=Concat\{\theta_{1}(f_{1}),\theta_{2}(f_{2}),\theta_{3}(f_{3}),\phi(f_{ 4})\} \tag{1}\]
where \(f^{\star}\) is the new fused feature, \(\theta_{i}\) the 1x1 Conv projection layer, \(\phi\) the ASPP module. Note that we omit the up-sampling operation for clarity. The detailed dimensions of features and Conv layers are given in Sec. IV-B.
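As a rough PyTorch sketch of this naive feature pyramid, assuming the feature shapes and the 48-channel projection width reported in Sec. IV-B (layer names, the bilinear interpolation mode and other details are our assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NaiveFeaturePyramid(nn.Module):
    """Project low-level features with 1x1 convs, up-sample, and concatenate with the ASPP output (Eq. 1)."""

    def __init__(self, in_channels=(256, 512, 1024), proj_channels=48):
        super().__init__()
        self.projs = nn.ModuleList([nn.Conv2d(c, proj_channels, kernel_size=1) for c in in_channels])

    def forward(self, f1, f2, f3, aspp_out):
        target_size = f1.shape[-2:]  # finest spatial resolution (128x128 for a 513x513 input)
        feats = []
        for proj, f in zip(self.projs, (f1, f2, f3)):
            x = proj(f)
            if x.shape[-2:] != target_size:
                x = F.interpolate(x, size=target_size, mode="bilinear", align_corners=False)
            feats.append(x)
        feats.append(F.interpolate(aspp_out, size=target_size, mode="bilinear", align_corners=False))
        return torch.cat(feats, dim=1)  # 3 * 48 + 256 = 400 channels

# Shapes as reported in Sec. IV-B for a 513x513 input:
f1 = torch.randn(1, 256, 128, 128)
f2 = torch.randn(1, 512, 64, 64)
f3 = torch.randn(1, 1024, 32, 32)
aspp_out = torch.randn(1, 256, 32, 32)
print(NaiveFeaturePyramid()(f1, f2, f3, aspp_out).shape)  # torch.Size([1, 400, 128, 128])
```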
#### Iii-C1 Preserving the Domination of High-level Features
In principle, both high- and low-level features inherit the old knowledge from the model trained in the \(1^{st}\) step. Specifically, high-level features, _e.g._, \(f_{4}\) in ResNet-101, contain abstract and global representations of an image that capture semantic information, which are important for accurately recognizing object categories. However, they may not capture the fine-grained details and local information that are necessary for accurate object segmentation, which is why incorporating low-level features can be beneficial. To balance the dominance of high-level features against the fine-grained information of low-level features, we reduce the dimension of the low-level features and then merge them with the high-level ones, a strategy also used in DeepLabV3+ [3]. Moreover, this is more computationally efficient.
#### Iii-C2 The Difference from DeepLabV3+ [3]
DeepLabV3+ [3] employs low- and high-level feature aggregation for semantic segmentation. This aggregation considers the feature of the first block and the output of ASPP only. Despite the structural similarity, we propose the naive feature pyramid specifically for CISS, highlighting multi-grained knowledge reuse for incremental tasks. In addition, we conduct extensive experiments to analyze each combination and show that combining multi-level features achieves the best performance.
### _Densely-interactive Feature Pyramid Module_
Building on the success of the naive feature pyramid module for CISS, we propose an advanced densely-interactive feature pyramid (DEFY) module (Figure 3) that leverages attention mechanisms to achieve more effective feature fusion. Our DEFY considers both pair-wise and pixel-wise interactions between each pair of high- and low-level features, allowing it to preserve the dominance of high-level knowledge while simultaneously enriching the abstract high-level information with detailed low-level features. By considering these interactions, our method generates more comprehensive representations, which are particularly advantageous for incremental learning tasks. Concretely, we first project the feature dimension of low-level features \(\{f_{1},f_{2},f_{3}\}\) to the same dimension as the output of ASPP \(\phi(f_{4})\) through three 1x1 Conv layers. Then, we enable the interaction of each pair of the low-level feature and \(\phi(f_{4})\) via the attention block, in which \(\phi(f_{4})\) is the \(\mathbf{Query}\) while the other serves as \(\mathbf{Key}\) and \(\mathbf{Value}\). The formulation is defined as follows.
\[f_{h}^{i}=[softmax(\frac{(\phi(f_{4})\omega_{q})(\psi_{i}(f_{i})\omega_{k})^ {T}}{\sqrt{d_{k}}})(\psi_{i}(f_{i})\omega_{v})]\omega_{o} \tag{2}\]
where \(\psi_{i}\) is the 1x1 Conv layer, \(i\in\{1,2,3\}\), \(\omega_{q,k,v}\) are the in-projection layers, \(\omega_{o}\) is the out-projection layer, and \(d_{k}\) is the dimension after in-projection. Note that \(f_{i}\in\mathbb{R}^{b\times c\times h\times w}\) is reshaped to \(\mathbb{R}^{b\times hw\times c}\) as the input of the attention block, which is omitted for simplicity.
Applying Eq. 2 to \(\{f_{1},f_{2},f_{3}\}\), we obtain \(\{f_{h}^{1},f_{h}^{2},f_{h}^{3}\}\). These features, together with \(\phi(f_{4})\), are then aggregated by summation into a new fused feature, as shown below.
\[f^{\star}=Sum\{f_{h}^{1},f_{h}^{2},f_{h}^{3},\phi(f_{4})\} \tag{3}\]
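The following sketch illustrates the fusion step of Eqs. (2)-(3) with a single-head scaled dot-product attention; the 256- and 768-dimensional projections follow Sec. IV-B, while the single head, the absence of normalization layers and all other details are simplifying assumptions rather than the exact DEFY implementation.

```python
import torch
import torch.nn as nn

class DenselyInteractivePyramid(nn.Module):
    """phi(f4) attends to each projected low-level feature; the outputs are summed (Eq. 3)."""

    def __init__(self, low_channels=(256, 512, 1024), high_channels=256, d_k=768):
        super().__init__()
        self.d_k = d_k
        self.psi = nn.ModuleList([nn.Conv2d(c, high_channels, kernel_size=1) for c in low_channels])
        self.w_q = nn.Linear(high_channels, d_k)   # in-projections
        self.w_k = nn.Linear(high_channels, d_k)
        self.w_v = nn.Linear(high_channels, d_k)
        self.w_o = nn.Linear(d_k, high_channels)   # out-projection back to 256 channels

    @staticmethod
    def _flatten(x):
        b, c, h, w = x.shape
        return x.flatten(2).transpose(1, 2), (b, c, h, w)  # (b, h*w, c)

    def forward(self, low_feats, phi_f4):
        q_seq, (b, c, h, w) = self._flatten(phi_f4)        # Query: the high-level feature
        fused = phi_f4
        for psi, f in zip(self.psi, low_feats):
            kv_seq, _ = self._flatten(psi(f))              # Key/Value: a projected low-level feature
            attn = torch.softmax(
                self.w_q(q_seq) @ self.w_k(kv_seq).transpose(1, 2) / self.d_k ** 0.5, dim=-1)
            f_h = self.w_o(attn @ self.w_v(kv_seq))        # Eq. (2)
            fused = fused + f_h.transpose(1, 2).reshape(b, c, h, w)  # running sum of Eq. (3)
        return fused

f1 = torch.randn(1, 256, 128, 128)
f2 = torch.randn(1, 512, 64, 64)
f3 = torch.randn(1, 1024, 32, 32)
phi_f4 = torch.randn(1, 256, 32, 32)
print(DenselyInteractivePyramid()((f1, f2, f3), phi_f4).shape)  # torch.Size([1, 256, 32, 32])
```

Note that the attention output always has the sequence length of the Query, so the fused feature keeps the spatial size of \(\phi(f_{4})\) regardless of the (larger) resolution of the low-level Key/Value features.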
#### Iii-C1 Why is DEFY better?
The strength of DEFY lies in its utilization of attention mechanisms, enabling the capture of crucial features for segmentation. More specifically, DEFY establishes per-pixel relationships between high- and low-level feature maps, and employs attention to assign weights to the contributions of different feature maps in the fused feature representation. This precise and effective combination of multi-level features results in enhanced segmentation performance, as the most relevant information is appropriately emphasized and utilized.
#### Iii-C2 Potential Variants of DEFY
We acknowledge that DEFY can be reformulated in various ways. Here, we further investigate two extra variants, _i.e._, DEFY based on concatenation DEFY\({}_{con}\) and dual densely-interactive feature pyramid (D-DEFY) module. Specifically, DEFY\({}_{con}\) replaces the final summation operation with concatenation, which is similar to the naive feature pyramid. In contrast, D-DEFY employs dual densely-interactive modules for each low- and high-level feature pair, where one module uses high-level feature as \(\mathbf{Query}\) and low-level feature as \(\mathbf{Key}\) and \(\mathbf{Value}\), and the other module uses the low-level feature as \(\mathbf{Query}\) and high-level feature as \(\mathbf{Key}\) and \(\mathbf{Value}\). Two outputs from dual densely-interactive modules are then fused by summation. Finally, we will combine the high-level feature \(\phi(f_{4})\) with three fused outputs (from three pairs of features) via a summation operation as a new aggregated feature. We experiment with these variants and analyze the results in Sec. IV-G0d.
## IV Experiments
### _Datasets_
**PASCAL VOC 2012 [86]:** It is a widely used segmentation dataset, which can be repurposed for CISS. It contains 10,582/1,449 training/validation images from 21 classes including 20 object classes and one background class. We follow the previous works [23, 27, 30] to conduct experiments on multiple incremental tasks as shown in Table II. For example, in the **10-1 (11-steps)** task, a model is trained on 11 classes (including the background class) in the \(1^{st}\) step, and then sequentially trained on one new class from step 2 to step 11. Note that all experiments are under the more realistic _overlapped_ set-up as per [23, 26, 27, 30], _i.e._, the background of an image in step \(t\) may contain old or future
classes while only the new classes belonging to the \(t^{th}\) step are labeled, except for Sec. IV-G0e that evaluates our method in the _disjoint_ set-up, which assumes all future classes are known and _not_ shown in the background.
**ADE20K [87]:** This is a more challenging semantic segmentation dataset with 150 classes for daily life scenes, which contains 20,210/2,000 training/validation images. The experimental protocol for ADE20K is in line with that of PASCAL VOC 2012.
### _Implementation Details_
**Training and Inference:** To verify the effectiveness of the proposed DEFY, we apply it to three representative methods, _i.e._, MiB (CVPR2020) [25], SSUL (NeurIPS2021) [27] and MicroSeg (NeurIPS2022) [23]. Their official codes are open-source. For a fair comparison, we follow their implementations for training and inference and apply our light-weight plug-in DEFY to them. We also provide their implementation details below.
All three methods adopt DeepLabV3 [4] with a ResNet-101 backbone, which is pre-trained on ImageNet [88].
**For MiB [25]:** Following [5], they use the SGD optimizer with the same momentum and weight decay. The initial learning rate for the \(1^{st}\) step is 1e-2 and 1e-3 for the follow-up steps, scheduled by "poly" learning rate policy [4]. The model is trained with a batch size 24 for 30 epochs on PASCAL VOC 2012, and for 60 epochs on ADE20K during each learning step. The data augmentations used here are random scaling (from 0.5 to 2.0) and random flipping.
**For SSUL [27]:** The optimizer, learning rate schedule, data augmentation and the initial learning rate for the first step are the same as in [25], but the initial learning rate for the following steps is 1e-2 for SSUL. A batch size of 32 and 50 epochs are used on PASCAL VOC 2012, while a batch size of 24 and 60 epochs are used on ADE20K. The saliency map detector leveraged in SSUL is DSS [28], pre-trained on the MSRA-B dataset [89].
**For MicroSeg [23]:** MicroSeg uses the same optimizer, learning rate schedule and data augmentation as [23, 25], but with an initial learning rate of 1e-2 and a batch size of 16 for PASCAL VOC 2012, and 5e-3 and 12 for ADE20K, in all steps. The proposal generator is Mask2Former [29] pre-trained on MS-COCO [90], which produces 100 class-agnostic proposals for each image. It's worth noting that the proposal generator is not fine-tuned on any benchmark dataset for a fair comparison. The hyper-parameters for the Micro mechanism are \(K=5\) for PASCAL VOC 2012 and \(K=1\) for ADE20K. When replaying is used, the extra memory holds 100 samples for PASCAL VOC 2012.
**Architecture Details:** Following existing works [23, 25, 26, 27], we use DeepLabV3 [4] with a ResNet-101 backbone. Given the input image size 513\(\times\)513, the dimensions of the four-level features from ResNet-101 are listed below.
\[f_{1} \in\mathbb{R}^{b\times 256\times 128\times 128},f_{2}\in \mathbb{R}^{b\times 512\times 64\times 64},\] \[f_{3} \in\mathbb{R}^{b\times 1024\times 32\times 32},f_{4}\in\mathbb{R}^{b \times 2048\times 32\times 32}.\]
The output of ASPP is \(\phi(f_{4})\in\mathbb{R}^{b\times 256\times 32\times 32}\).
In the naive feature pyramid, the channel dimensions of the three 1x1 Conv layers are \(\theta_{1}\in\mathbb{R}^{256\times 48}\), \(\theta_{2}\in\mathbb{R}^{512\times 48}\), \(\theta_{3}\in\mathbb{R}^{1024\times 48}\), respectively. The up-sampling operation enlarges the spatial dimension of the other features to that of \(f_{1}\), _i.e._, \(128\times 128\). After concatenation, the dimension of the final output is \(\mathbb{R}^{b\times 400\times 128\times 128}\).
For our DEFY, the channel dimensions of the three 1x1 Conv layers are \(\psi_{1}\in\mathbb{R}^{256\times 256}\), \(\psi_{2}\in\mathbb{R}^{512\times 256}\), \(\psi_{3}\in\mathbb{R}^{1024\times 256}\), respectively. Three in-projection layers \(\omega_{q,k,v}\) share the same dimension \(\mathbb{R}^{256\times 768}\) while the out-projection layer recovers the original input dimension with a linear projection (\(\mathbb{R}^{768\times 256}\)).
### _Compared Methods_
We compare against a comprehensive set of existing methods. Specifically, three classical methods in CIL, _i.e._, EWC [52], Lwf-MC (TPAMI2017) [15], and ILT (ICCVW2019) [24], are repurposed for CISS and compared. Moreover, we compare with the existing state-of-the-art CISS methods including MiB (CVPR2020) [25], SDR (CVPR2021) [61], PLOP (CVPR2021) [26], RCIL (CVPR2022) [30], SSUL (NeurIPS2021) [27], DKD (NeurIPS2022) [47], ALIFE (NeurIPS2022) [44], MicroSeg (NeurIPS2022) [23], and EWF (CVPR2023) [91].
### _Evaluation Metric_
We evaluate our method using the mean Intersection-over-Union (mIoU) metric, which is computed as the average of the Intersection-over-Union (IoU) scores for each class. The IoU score for a class is computed as the true positives divided by the sum of true positives, false positives, and false negatives. In all tables, we report three mIoU metrics: the mIoU of the classes in the first step, the mIoU of the classes in the follow-up steps, and the mIoU of all classes. The first metric evaluates the model's ability to preserve old knowledge, while the second measures its ability to learn new classes. The final metric reflects the model's overall performance on both old and new classes.
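For reference, a minimal, library-agnostic sketch of the per-class IoU and mIoU computation (the standard definition stated above, not code from any of the compared repositories) is:

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int):
    """Return (per-class IoU, mIoU) for integer label maps of identical shape."""
    ious = []
    for c in range(num_classes):
        tp = np.logical_and(pred == c, target == c).sum()
        fp = np.logical_and(pred == c, target != c).sum()
        fn = np.logical_and(pred != c, target == c).sum()
        denom = tp + fp + fn
        ious.append(tp / denom if denom > 0 else np.nan)  # class absent in both -> undefined
    ious = np.array(ious, dtype=float)
    return ious, np.nanmean(ious)

# Tiny example with 3 classes:
pred = np.array([[0, 1, 1], [2, 2, 0]])
gt = np.array([[0, 1, 2], [2, 2, 0]])
per_class, miou = mean_iou(pred, gt, num_classes=3)
print(per_class, miou)  # [1.0, 0.5, 0.667] and their mean
```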
### _Results on PASCAL VOC 2012_
Table II shows the results on PASCAL VOC 2012. Overall, our DEFY boosts the performance of the baseline methods significantly in all but two cases, where performance is comparable, and a new state-of-the-art performance is achieved when combined with the current SOTA MicroSeg. In addition, we also make several interesting observations. First, the proposed DEFY is particularly effective in improving the performance of baseline methods that have poor original performance. For example, it improves MiB by 44.1% on **10-1 (11 steps)**, 37.9% on **15-1 (6 steps)**, 3.8% on **5-3 (6 steps)** and 5.1% on **19-1 (2 steps)**. This is because MiB does not employ pseudo labeling or modeling of unknown classes, so fully leveraging the multi-grained old knowledge to enhance the feature representation is especially vital for it. Second, adding our DEFY to the baselines wins in most cases on both mIoU (\(1^{st}\) step) and mIoU in the following steps,
_i.e._, 83.3% (15/18) and 66.7% (12/18), respectively, which indicates our DEFY improves both old knowledge preservation and new knowledge exploration.
Moreover, Figure 4 shows the qualitative comparison, which is consistent with the quantitative results. In general, our DEFY can significantly improve the quality of the baselines, except for MicroSeg where it mis-classifies a "sofa" with a similar shape to a train as a "train".
### _Results on ADE20K_
Table III shows the results on a more challenging dataset - ADE20K with more diverse classes (150 classes). Overall, our DEFY can improve the baseline methods with remarkable performance gains in most cases. In particular, we found that our DEFY improves MicroSeg the most when it is trained on 100 classes in the \(1^{st}\) step, _e.g._, 4.8% on **100-5 (11 steps)**, 3.1% on **100-50 (2 steps)**, and 1.4% on **100-10 (6 steps)**. The reason is that the better pseudo labeling and unknown class modeling in MicroSeg require a more powerful feature representation, which is achieved by the dense-interaction-based multi-level feature fusion in our DEFY. Moreover, we can see that our DEFY enhances old knowledge preservation in 5 out of 9 cases and improves new knowledge exploration in 8 out of 9 cases. This indicates that our DEFY benefits novel-class recognition more when a larger number of classes is present.
### _Further Analysis_
**The Effect of Replaying:** Following MicroSeg [23], we also verify the effectiveness of the proposed method when an extra memory bank (fixed size=100 [23, 27]) for replaying is used, as shown in Table IV. We found that our DEFY can benefit more with replaying, _e.g._, performance gains of 3.9% _vs._ 2.8% on **15-5 (2 steps)** and 7.5% _vs._ 4.6% on **15-1 (6 steps)** compared to the baseline. This indicates that old knowledge reuse via multi-level feature fusion can work better when memory-based replaying is used.
**Step-wise mIoU:** Figure 5 shows the step-wise mIoU under **15-1 (6 steps)**, from which we can see that with DEFY the baseline consistently achieves higher performance in each step, except for the case of _MicroSeg+Ours_, where the performance is comparable in terms of mIoU. From the figure,
we found that _MicroSeg+Ours_ performs better in the \(4^{th}\) step, but worse on step 5 (class "TV"). The possible reason is that _MicroSeg+Ours_ is more prone to mis-segmenting "TV" than MicroSeg.
We further show the averaged mIoU and its standard deviation (STD) over 6 steps in Figure 6. Interestingly, we found that our DEFY enables higher performance and lower STD when applied to MiB, SSUL and MicroSeg-M. This shows that DEFY can enhance those baselines in terms of both performance and stability. Moreover, _MicroSeg+Ours_ demonstrates a similar averaged mIoU and STD, though it performs worse in the final step (see Figure 5).
Fig. 4: Qualitative comparison with baseline methods on **15-1 (6 steps)** setting on PASCAL VOC 2012. Zoom in for details.
Fig. 5: Step-wise mIoU on PASCAL VOC 2012 under **15-1 (6 steps)** set-up.
**Class-wise IoU:** We show the class-wise IoU under all incremental set-ups on PASCAL VOC 2012 in Table V. Overall, our DEFY can benefit the baseline methods in most cases. The detailed analysis for each incremental set-up is given as follows.
**10-1 (11 steps):** We found _MicroSeg + Ours_ outperforms MicroSeg on 11 out of 21 classes, often by large margins. For example, out of the 21 classes, 6 show an improvement higher than 5%. When using replaying, _MicroSeg-M + Ours_ wins in 15 classes, which indicates that our DEFY can benefit more with a small extra memory bank.
**2-2 (10 steps):** In this most challenging set-up where only 3 classes (with one background class) are used to train the entire model in the \(1^{st}\) step, our method shows the most significant improvements, _e.g._, 11.0% on MicroSeg and 8.5% on MicroSeg-M. In particular, \(MicroSeg+Ours\) surpasses the baseline in 17 out of 21 classes, often by large margins, while \(MicroSeg-M+Ours\) wins in 16 out of 21 classes. This demonstrates that the DEFY can effectively benefit a baseline method in more challenging incremental scenarios.
**15-1 (6 steps):** Our DEFY does not benefit the baseline when added to MicroSeg, but achieves comparable performance. Specifically, in 10 out of 21 classes, \(MicroSeg+Ours\) outperforms MicroSeg. We found the weaker performance is mainly due to the poor segmentation results on class "TV", where MicroSeg surpasses our method by 19.3%. This can be mitigated by replaying, which narrows the gap to 6.7%. Moreover, with replaying, our method outperforms the baseline MicroSeg-M. This shows that our method can make better use of an extra memory bank.
**5-3 (6 steps):** This is the third most challenging set-up regarding the mIoU. Our DEFY enhances the performance of both baselines regardless of whether replaying is employed. Interestingly, we found that replaying brings limited improvement on MicroSeg, _i.e._, a 0.3% mIoU gain. In contrast, with our DEFY, the improvement becomes larger, _i.e._, a 3.3% mIoU gain. This is encouraging, as our DEFY can boost the capability of the baseline method to use an extra memory bank for replaying.
**19-1 (2 steps):** This is a relatively simpler set-up as most of the classes, _i.e._, 20 classes (including one background class), are used to train the model in step 1. We found MicroSeg/MicroSeg-M has already performed well in this set-up with the highest mIoU among all set-ups. Nevertheless, our DEFY can further boost the performance of the baseline methods with significant mIoU gains, _i.e._, 2.4% on MicroSeg and 1.5% on MicroSeg-M. It's worth noting that the performance of \(MicroSeg-M+Ours\) approaches the oracle's performance, with only 0.9% mIoU gap.
**15-5 (2 steps):** In this set-up, apart from the better overall performance compared with the two baselines, our method wins 11 out of 21 classes on MicroSeg and 14/21 on MicroSeg-M. Moreover, our method can also make better use of the extra memory bank, with a 3.9% mIoU gain against 2.8% (baseline).
**Variants of the Feature Aggregation Module:** As mentioned in Sec. III-D2, the feature aggregation can be formulated in various ways. We show the ablation study in Table VI for a comparison of several variants. A few observations can be made here. (i) All of the variant modules have a light-weight architecture, with their parameters occupying less than 3.2% of the original training parameters. It's worth noting that the variant modules only undergo one-time training in step 1, so this comparison is made based on step 1. (ii) Using concatenation to replace summation after the output of the attention module (as in DEFY\({}_{con}\)) can improve the baseline's performance, but it performs worse than DEFY with summation. (iii) The dual densely-interactive
feature pyramid (D-DEFY) module (refer to Sec. III-D2 for details) does not bring further gains over DEFY. The lower performance of D-DEFY compared to DEFY may stem from adding the output of the attention module that uses the low-level feature as the \(\mathbf{Query}\) to the output of the module that uses the high-level feature as the \(\mathbf{Query}\); this could damage the dominance of the high-level feature, which contains the essential abstract and global representation needed for accurate segmentation.
**Evaluation in the Disjoint Set-up:** Some past works [25, 26, 30] also evaluate their methods in the _disjoint_ set-up, which is less realistic as it assumes that all future classes are known and _not_ shown in the background of the current step. For a comprehensive comparison, we also verify the effectiveness of our method in this set-up (Table VII), from which we found the proposed DEFY can consistently improve the baselines, often by large margins.
current SOTA method with significant performance gains. Furthermore, a novel densely-interactive feature pyramid (DEFY) is introduced for a better multi-level feature fusion. The advantage of DEFY lies in its ability to establish per-pixel relationships between pairs of features and facilitate dense feature interaction. Moreover, we demonstrate the proposed DEFY can be easily integrated into three representative existing CISS models as a plug-in to enhance their performance. Especially, equipping the current SOTA method with DEFY results in a new state-of-the-art performance.
Class incremental semantic segmentation (CISS) has recently attracted attention because of the important role it plays in real-world applications. Although existing CISS methods achieve excellent performance, they capture only high-level knowledge (features) and ignore the diverse knowledge contained in low-level features, so old knowledge is insufficiently preserved and new knowledge is insufficiently explored. In addition, knowledge distillation with multi-level features incurs a high computational cost because the heavy backbone must be retrained. In this paper, we investigate efficient multi-grained knowledge reuse for CISS for the first time and propose a new method, EvolvingkNowleDge minING (ENDING). ENDING uses a frozen backbone. ENDING contains two key modules: evolving fusion and semantic enhancement. Through these modules, the dynamic and ...
2306.05748 | Shape-based clustering of synthetic Stokes profiles using k-means and
k-Shape | The shapes of Stokes profiles contain much information about the atmospheric
conditions that produced them. However, a variety of different atmospheric
structures can produce very similar profiles. Thus, it is important for proper
interpretation of observations to have a good understanding of how the shapes
of Stokes profiles depend on the underlying atmosphere. An excellent tool in
this regard is forward modeling, i.e. computing and studying synthetic spectra
from realistic simulations of the solar atmosphere. Modern simulations
routinely produce several hundred thousand spectral profiles per snapshot. With
such numbers, it becomes necessary to use automated procedures in order to
organize the profiles according to their shape. Here we illustrate the use of
two complementary methods, k-means and k-Shape, to cluster similarly shaped
profiles, and demonstrate how the resulting clusters can be combined with
knowledge of the simulation's atmosphere to interpret spectral shapes. We
generate synthetic Stokes profiles for the Ca II 854.2 nm line using the
Multi3D code from a Bifrost simulation snapshot. We then apply the k-means and
k-Shape clustering techniques to group the profiles together according to their
shape. We show and compare the classes of profile shapes we retrieve from
applying both k-means and k-Shape to our synthetic intensity spectra. We then
show the structure of the underlying atmosphere for two particular classes of
profile shapes retrieved by the clustering, and demonstrate how this leads to
an interpretation for the formation of those profile shapes. Furthermore, we
apply both methods to the subset of our profiles containing the strongest
Stokes V signals, and demonstrate how k-Shape can be qualitatively better than
k-means at retrieving complex profile shapes when using a small number of
clusters. | Thore Espedal Moe, Tiago M. D. Pereira, Flavio Calvo, Jorrit Leenaarts | 2023-06-09T08:27:26 | http://arxiv.org/abs/2306.05748v1 | # Shape-based clustering of synthetic Stokes profiles using \(k\)-means and \(k\)-Shape
###### Abstract
Context: The shapes of Stokes profiles contain much information about the atmospheric conditions that produced them. However, a variety of different atmospheric structures can produce very similar profiles. Thus, it is important for proper interpretation of observations to have a good understanding of how the shapes of Stokes profiles depend on the underlying atmosphere. An excellent tool in this regard is forward modeling, i.e. computing and studying synthetic spectra from realistic simulations of the solar atmosphere. Modern simulations routinely produce several hundred thousand spectral profiles per snapshot. With such numbers, it becomes necessary to use automated procedures in order to organize the profiles according to their shape. Here we illustrate the use of two complementary methods, \(k\)-means and \(k\)-Shape, to cluster similarly shaped profiles, and demonstrate how the resulting clusters can be combined with knowledge of the simulation's atmosphere to interpret spectral shapes.
Aims: We aim to showcase the use of clustering analysis for forward modeling. In particular we wish to introduce the \(k\)-Shape clustering method to the solar physics community as a complement to the well-known \(k\)-means method.
Methods: We generate synthetic Stokes profiles for the Ca ii 854.2 nm line using the Multi3D code from a Bifrost simulation snapshot. We then apply the \(k\)-means and \(k\)-Shape clustering techniques to group the profiles together according to their shape, and investigate the within-group correlations of temperature, line-of-sight velocity and line-of-sight magnetic field strengths.
Results: We show and compare the classes of profile shapes we retrieve from applying both \(k\)-means and \(k\)-Shape to our synthetic intensity spectra. We then show the structure of the underlying atmosphere for two particular classes of profile shapes retrieved by the clustering, and demonstrate how this leads to an interpretation for the formation of those profile shapes. Furthermore, we apply both methods to the subset of our profiles containing the strongest Stokes \(V\) signals, and demonstrate how \(k\)-Shape can be qualitatively better than \(k\)-means at retrieving complex profile shapes when using a small number of clusters.
Conclusions:
## 1 Introduction
Forward modeling of the solar atmosphere is a very useful tool for understanding the relative importance of atmospheric components in the formation of polarized spectra, thereby guiding interpretations of observations. By computing synthetic Stokes profiles from realistic 3D radiative magnetohydrodynamic (rMHD) simulations, one can directly compare a particular spectral signature with the full state of the atmosphere that produced it (see e.g. Leenaarts et al., 2013, 2013, 2013, and others in the same series). Modern simulations routinely contain several hundred thousand pixels, with each pixel giving rise to a set of Stokes profiles. Depending on the spatial resolution of the numerical model, and the spectral resolution considered for the synthesis, these profiles can be quite complex, often exhibiting more complicated behavior than what is typically resolved in real observations. It is obviously not feasible to analyze the formation of so many profiles one by one, nor is it practical to manually sort them into groups according to their features. Rather, some automated procedure must be used to organize the profiles in a meaningful manner for further human analysis.
One way of reducing the number of individual profiles into more manageable collections is the use of clustering techniques like \(k\)-means (Steinhaus, 1956; MacQueen, 1967). \(k\)-means has seen extensive use in solar and stellar physics, for examples see Sanchez Almeida & Lites (2000); Pietarila et al. (2007); Viticchie & Sanchez Almeida (2011); Panos et al. (2018); Sainz Dalda et al. (2019); Bose et al. (2019); Kuckein et al. (2020); Joshi et al. (2020); Woods et al. (2021); Nobrega-Siverio et al. (2021); Barczynski et al. (2021); Bose et al. (2021); Kleint & Panos (2022); Mathur et al. (2022); Sainz Dalda et al. (2022). Apart from \(k\)-means, other clustering methods have also been used on solar spectra, for instance the \(t\)-distributed Stochastic Neighbor Embedding employed by (Verma et al., 2021). The purposes of the clustering vary from identifying and studying the observational signatures of particular physical processes and features, to reducing the spatial dimensionality of data-sets for inversions, to statistical characterizations of observations. Relatively little explored, however, is the application of clustering techniques in a forward modeling context, one notable exception being Khomenko et al. (2005). In this paper we aim to address that issue, applying the \(k\)-means method to Ca ii 854.2 nm Stokes \(I\) and Stokes \(V\) profiles generated from a Bifrost (Gudiksen et al., 2011) snapshot using the Multi3D radiative transfer code (Leenaarts & Carlsson, 2009), which has been extended (Calvo & Leenaarts (in prep.)) to include polarization, accounting for the Zeeman effect. We focus on the shapes of the Stokes profiles, aiming to illustrate what different classes of shapes do, or do not, tell us about the underlying atmospheric conditions.
While \(k\)-means is a fast and robust clustering technique, it does not directly cluster profiles based on their shapes. It works
by minimizing the sum of within-cluster Euclidean distances between profiles, which can potentially lead to distinctly different shapes appearing in the same cluster, as demonstrated in Fig. 1. Or, for instance, two Doppler-shifted spectral profiles with otherwise exactly the same shape can be put into separate clusters. Furthermore, the centroid, or 'representative profile' (RP), of a cluster is given as the mean of the profiles belonging to the cluster, which in some cases can give a poor representation of the typical profile shapes in the cluster. Of course, increasing the number of clusters can mitigate this problem, but at the cost of interpretability, which is the main point of the kind of forward modeling we seek to undertake in this paper.
A relatively fast clustering method that is inherently shape-based is the \(k\)-Shape method of Paparrizos & Gravano (2015). Though originally developed for use on time-series, the method is quite general and we apply it here to the case of Stokes profiles with the obvious substitution of the time axis for a wavelength axis. A feature of \(k\)-Shape is that the clustering is largely independent of Doppler-shifts, which can be beneficial or detrimental depending on the intended usage. By ignoring Doppler-shifts and using a different measure of similarity than \(k\)-means, the profiles are matched more directly according to their similarity in actual shape, rather than being matched according to a combination of shape and wavelength position. Furthermore, as the centroid computation is rather different from the one in \(k\)-means, the RPs are much more prototypical of the clustered profiles. The cost, of course, is that absolute velocity information is not considered in the clustering.
## 2 Methods
### Generating synthetic profiles
We generated our synthetic spectra from the 23 km resolution atmospheric model described in Moe et al. (2022). This is a Bifrost model (Gudiksen et al., 2011) with a magnetic field configuration constructed to resemble a coronal hole. The model has \(512\times 512\times 512\) grid points, spanning roughly 12 Mm in the horizontal directions and going from \(z=-2.5\) Mm below up to \(z=8\) Mm above the solar surface. The horizontal spacing of the grid points is uniform, resulting in a horizontal resolution of 23 km pix\({}^{-1}\).
We used an extension (Calvo & Leenaarts (in prep.)) of the Multi3D code (Leenaarts & Carlsson, 2009) with polarimetric capabilities to produce 3D full Stokes profiles of the Ca ii 854.2 nm line accounting for the Zeeman effect. As 3D computations are immensely expensive we cut the bottom 112 grid points, corresponding to below \(-0.4\) Mm beneath the surface, under the assumption that these are too deep to affect the formation of our line of interest. Furthermore, we neglected to include the effects of partial frequency redistribution (PRD) and isotopic splitting. The obtained synthetic profiles were normalized by the nearby continuum, meaning each profile was divided by the Stokes I value of the reddest wavelength in the synthesis at approximately \(\lambda_{0}+0.95\) nm, and interpolated to 100 equidistant wavelength points in the range \(\lambda_{0}\pm 0.05\) nm, where \(\lambda_{0}\) denotes the central wavelength of the line. We performed this interpolation in order to give equal weight to all parts of the profile when clustering since the original wavelength grid used in the synthesis is non-equidistant.
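A minimal numpy sketch of this post-processing step is given below; the array names, pixel counts and the exact synthesized wavelength grid are placeholders, and only the structure of the normalization and resampling follows the description above.

```python
import numpy as np

def normalize_and_resample(wav, stokes_i, lambda0, half_width=0.05, n_points=100):
    """Divide each profile by its value at the reddest synthesized wavelength (the continuum point)
    and interpolate onto an equidistant grid lambda0 +/- half_width (wavelengths in nm)."""
    new_wav = np.linspace(lambda0 - half_width, lambda0 + half_width, n_points)
    continuum = stokes_i[..., -1]                          # Stokes I at the reddest wavelength
    normalized = stokes_i / continuum[..., None]
    flat = normalized.reshape(-1, wav.size)
    resampled = np.array([np.interp(new_wav, wav, prof) for prof in flat])
    return new_wav, resampled.reshape(*stokes_i.shape[:-1], n_points)

# Hypothetical example: 4x4 pixels, 120 non-equidistant wavelengths reaching ~0.95 nm redward of line center.
lambda0 = 854.2
wav = lambda0 + np.sort(np.random.uniform(-0.06, 0.96, 120))
cube = 1.0 + 0.1 * np.random.rand(4, 4, 120)
new_wav, profiles = normalize_and_resample(wav, cube, lambda0)
print(profiles.shape)  # (4, 4, 100)
```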
### \(k\)-means clustering
The most common clustering technique for spectral profiles is \(k\)-means clustering. The full set of profiles is divided into \(k\) clusters of similarly shaped profiles, where the number \(k\) must be chosen at the outset. The measure of similarity is the Euclidean distance between profiles; that is, the distance between two profiles is the sum over wavelengths of the squared difference in their amplitudes:
\[distance=\sum_{i}(I_{1}(\lambda_{i})-I_{2}(\lambda_{i}))^{2}, \tag{1}\]
where \(I(\lambda_{i})\) denotes the amplitude of the profile at each wavelength point \(\lambda_{i}\). Each cluster has a centroid, and the goal is to assign the profiles to the \(k\) clusters in such a way that the sum of distances between all profiles and their nearest centroid (often called the inertia) is minimized. Algorithmically, \(k\)-means performs the following steps:
1. Initialize \(k\) centroids, one for each cluster.
2. Assign each profile to the cluster with the closest centroid.
3. Recompute the centroids as the mean (for each wavelength) of the profiles belonging to the cluster.
4. Repeat 2. and 3. until no profile changes cluster, a fixed number of iterations has been performed, or until the total inertia no longer changes above a set tolerance.
It should be noted that the convergence of the \(k\)-means algorithm does not guarantee that a global minimum has been found. Therefore it is common to re-initialize the clustering a predefined number of times, keeping the result with lowest inertia.
In this paper, we have used the \(k\)-means implementation of scikit-learn (Pedregosa et al., 2011), employing the \(k\)-Means++ initialization (Arthur & Vassilvitskii, 2007) for selecting better initial cluster centroids.
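A minimal sketch of this set-up is shown below; apart from \(k=100\), the \(k\)-means++ initialization, the 10 re-initializations and the per-profile \(z\)-normalization, the array sizes and the random seed are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder data; in practice the array holds 512*512 profiles with 100 wavelength points each.
profiles = np.random.rand(10_000, 100)
# z-normalize each profile (zero mean, unit standard deviation) before clustering.
profiles = (profiles - profiles.mean(axis=1, keepdims=True)) / profiles.std(axis=1, keepdims=True)

km = KMeans(n_clusters=100, init="k-means++", n_init=10, random_state=0)
labels = km.fit_predict(profiles)      # cluster index for every profile
centroids = km.cluster_centers_        # the representative (mean) profile of each cluster
print(labels.shape, centroids.shape)   # (10000,) (100, 100)
```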
Figure 1: Example showing how \(k\)-Shape (left) and \(k\)-means (right) partition a dataset with three distinct signal shapes. While \(k\)-Shape recovers the three distinct classes of shapes, \(k\)-means mixes the class containing a peak and a drop with the class containing only a drop. This illustration is adapted from the documentation of the tslearn library.
### \(k\)-Shape clustering
As the name implies, \(k\)-Shape (Paparrizos & Gravano 2015) is designed to perform a clustering into \(k\) clusters of distinct shape. While the general idea is similar to \(k\)-means, it uses a different metric for the distance between profiles, as well as another method for computing the cluster centroids. The distance metric is based on shifting the profiles across each other and computing the cross-correlation for each possible shift. Consider two profiles \(I_{1}\) and \(I_{2}\), defined on \(m\) wavelength points, written in the form of vectors:
\[\mathbf{I_{1}}=I_{1}(\lambda_{1}),I_{1}(\lambda_{2}),...,I_{1}(\lambda_{m}),\ \ \mathbf{I_{2}}=I_{2}(\lambda_{1}),I_{2}(\lambda_{2}),...,I_{2}(\lambda_{m}). \tag{2}\]
The cross-correlation sequence between these two profiles, \(CC_{w}(\mathbf{I_{1}},\mathbf{I_{2}})\), is defined as:
\[CC_{w}(\mathbf{I_{1}},\mathbf{I_{2}})=R_{w-m}(\mathbf{I_{1}},\mathbf{I_{2}}),\ \ \ \ w\in\{1,2,\ldots,2m-1\}, \tag{3}\]
where
\[R_{k}(\mathbf{I_{1}},\mathbf{I_{2}})=\begin{cases}\sum_{l=1}^{m-k}I_{1}(\lambda_{l+k} )\cdot I_{2}(\lambda_{l}),\ \ \ k\geq 0\\ R_{-k}(\mathbf{I_{2}},\mathbf{I_{1}}),\ \ \ k<0.\end{cases} \tag{4}\]
Thus, the sequence \(CC_{w}(\mathbf{I_{1}},\mathbf{I_{2}})\) contains the cross-correlation value for each of the \(2m-1\) possible shifts of the profiles relative to each other; essentially a sequence of the vector dot products between zero-padded \(\mathbf{I_{1}}\) and \(\mathbf{I_{2}}\) for each possible overlapping shift of the profiles. Normalizing the cross-correlation sequence (corresponding to dividing by the Euclidean norm of both profiles):
\[NCC_{c}=\frac{CC_{w}(\mathbf{I_{1}},\mathbf{I_{2}})}{\sqrt{R_{0}(\mathbf{I_{1}},\mathbf{I_{1}}) \cdot R_{0}(\mathbf{I_{2}},\mathbf{I_{2}})}}, \tag{5}\]
results in a number between \(-1\) and \(1\) for each entry in the sequence, where \(-1\) signifies perfect anti-correlation and \(1\) signifies perfect correlation between the profiles. Selecting the entry with the largest cross-correlation value then gives the shape-based distance between two profiles as:
\[distance=1-\max_{w}\Big{(}\frac{CC_{w}(\mathbf{I_{1}},\mathbf{I_{2}})}{\sqrt{R_{0}( \mathbf{I_{1}},\mathbf{I_{1}})\cdot R_{0}(\mathbf{I_{2}},\mathbf{I_{2}})}}\Big{)}, \tag{6}\]
which is bounded between \(0\) and \(2\).
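As a concrete illustration, Eq. (6) can be evaluated directly with numpy's full cross-correlation. This is a straightforward reference implementation, applied here to raw profiles for clarity (inside \(k\)-Shape the inputs are \(z\)-normalized first, and the cross-correlations are computed more efficiently):

```python
import numpy as np

def shape_based_distance(i1: np.ndarray, i2: np.ndarray) -> float:
    """1 - max_w NCC_c(I1, I2); bounded between 0 and 2."""
    cc = np.correlate(i1, i2, mode="full")                 # all 2m-1 possible shifts
    norm = np.sqrt(np.dot(i1, i1) * np.dot(i2, i2))
    return 1.0 - cc.max() / norm

x = np.linspace(-1.0, 1.0, 100)
single_peak = np.exp(-x**2 / 0.02)
shifted = np.roll(single_peak, 15)                         # same shape, Doppler-like shift
double_peak = np.exp(-(x - 0.3)**2 / 0.02) + np.exp(-(x + 0.3)**2 / 0.02)

print(shape_based_distance(single_peak, single_peak))      # 0.0
print(shape_based_distance(single_peak, shifted))          # ~0: the distance ignores the shift
print(shape_based_distance(single_peak, double_peak))      # clearly larger: different shape
```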
As in \(k\)-means, each profile is assigned to the closest centroid in terms of distance, and the cluster centroid is recomputed. In \(k\)-Shape, however, the refinement of the cluster centroids is done by reformulating the minimization of within-cluster distances as a maximization of a Rayleigh quotient calculation; for details see the original paper (Paparrizos & Gravano 2015). It should, however, be remarked that the \(k\)-Shape method assumes that the profiles have been \(z\)-normalized, meaning each profile has zero mean and unit standard deviation:
\[\mathbf{I_{1}}^{\prime}=\frac{\mathbf{I_{1}}-\mu_{1}}{\sigma_{1}}, \tag{7}\]
where \(\mu_{1}\) and \(\sigma_{1}\) are, respectively, the mean and the standard deviation of the profile over the \(m\) wavelengths considered. This assumption is not strictly necessary, as the method can be modified to work with other data-normalizations. However, the original authors found the z-normalization to work best in their tests and it is beyond the scope of our current work to re-implement and evaluate the method for other normalizations.
We used the \(k\)-Shape implementation from the tslearn library (Tavenard et al. 2020), with some simple modifications to make it run in parallel. Even so, the \(k\)-Shape method is significantly slower than the \(k\)-means implementation of scikit-learn. In one example case, using \(k=100\) clusters for \(512\times 512\) profiles with 100 wavelength points, one run of \(k\)-Shape without re-initializations took roughly \(2.7\) hours, while a \(k\)-means run with 10 re-initializations took about \(5\) minutes, both on the same \(32\)-core workstation. It should be noted that in the tslearn implementation of \(k\)-Shape, \(k\) single profiles are randomly chosen as the initial cluster centroids. In the original paper (Paparrizos & Gravano 2015), the initialization is done by randomly distributing all profiles among \(k\) clusters.
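A minimal usage sketch with tslearn is given below; the number of profiles, the number of clusters and the random seed are placeholders, and the parallelization mentioned above is not shown.

```python
import numpy as np
from tslearn.clustering import KShape
from tslearn.preprocessing import TimeSeriesScalerMeanVariance

# Placeholder data of shape (n_profiles, n_wavelengths); tslearn treats each profile as a time series.
profiles = np.random.rand(1_000, 100)
X = TimeSeriesScalerMeanVariance(mu=0.0, std=1.0).fit_transform(profiles)  # z-normalization

ks = KShape(n_clusters=10, n_init=1, random_state=0)
labels = ks.fit_predict(X)             # cluster index for every profile
centroids = ks.cluster_centers_        # shape (n_clusters, n_wavelengths, 1)
print(labels.shape, centroids.shape)
```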
## 3 Results
### Overview
Our intention was to illustrate and compare the use of both \(k\)-Shape and \(k\)-means for clustering synthetic profiles according to their shape, and subsequently how the resulting clusters can reveal correlations between the typical profile shapes in a cluster and the particular structure of the underlying atmosphere these profiles emerge from. We therefore begin by presenting and discussing the clustering of the intensity profiles in Sec. 3.2, before we perform a detailed examination of two particular profile shapes retrieved by the clustering in Sec. 3.3 and Sec. 3.4.
As the \(k\)-Shape method assumes that its input profiles are \(z\)-normalized, we used the same normalization for the \(k\)-means method in order to do a fair comparison. This turned out to be a reasonable approach for the synthetic intensity profiles, as they have signal values in the same general range. However, the polarized components of the Stokes vector can vary vastly in amplitude, so the \(z\)-normalization can cause tiny signals to appear misleadingly large compared to stronger signals as the amplitude is given in units of the per-profile standard deviation. We have therefore focused mostly on the intensity profiles, though we did perform a clustering of the very strongest Stokes \(V\) signals (those with a signal exceeding \(0.5\%\) of the nearby continuum intensity), which we will discuss in Sec. 3.5.
### Clustering the intensity profiles
We clustered the synthetic intensity profiles into \(k=100\) clusters using both \(k\)-means and \(k\)-Shape; the resulting clusters are shown in Fig. 2 and Fig. 3, respectively. The choice of \(100\) clusters was made after some experimentation, as a reasonable trade-off between the two opposing considerations of accuracy and human interpretability. The \(k\)-means method was run with 10 re-initializations, while the \(k\)-Shape method was run with a single initialization due to being around two orders of magnitude slower. We have tested \(k\)-Shape with 10 re-initializations, which yielded qualitatively very similar results to the single initialization run. We therefore elected to use the single initialization run in order to compare the methods for somewhat more similar run-times.
The first observation we can make is that both clustering techniques seem to recover a similar variety of different profile shapes. These range from typical absorption profiles (e.g. #35 in Fig. 2, #52 in Fig. 3), through increasingly strongly skewed absorption profiles (e.g. #9 and #37 in Fig. 2, #30 and #44 in Fig. 3), to more complicated profiles, including double-peaked profiles (e.g. #98 and #100 in Fig. 2, #97 and #100 in Fig. 3), asymmetric emission profiles (e.g. #73 in Fig. 2, #64 in Fig. 3) and multi-lobed profiles (e.g. #81 in Fig. 2 or #84 in Fig. 3).
The clustering appears to be reasonably tight, and in both methods there are several clusters showing very similar shapes, i.e. there is more than one cluster per 'family' of shapes. Encouragingly, both clustering methods seem to recover all the same types of cluster 'families', e.g. several clusters with similar asymmetric emission peaks or double peaks show up in both clusterings, though there is obviously not a one-to-one correspondence between individual clusters across the methods. Conversely, at first glance there do not seem to be clusters with very distinct shapes found only with one method compared to the other. The most unique-looking clusters are perhaps #56 and #88 in Fig. 2, but even these find quite similar counterparts in #97 and #47 in Fig. 3. This gives us some confidence that our choice of 100 clusters reasonably covers the range of typical profile shapes.
A second observation we can make is how the retrieved clusters differ between the methods. The \(k\)-Shape groupings demonstrate the method's insensitivity to Doppler-shifts; especially the clusters containing the asymmetric emission peaks (e.g. #63, #64, #65 in Fig. 3) show the same shape at different shifts grouped together. Conversely, \(k\)-means splits these into different clusters (e.g. #72, #73, #74 in Fig. 2) according to their Doppler shifts. The fact that both methods retrieve the same 'families', but differently distributed over the clusters, can be beneficial for analysis, as we will see in Sect. 3.3. With such a stereoscopic view of the underlying atmospheres it becomes easier to discern by inspection which atmospheric parameters are important and which are incidental for the formation of the particular profile shapes. In particular, \(k\)-Shape's insensitivity to Doppler shifts, contrasted with \(k\)-means' sensitivity to them, allows one to better discern which atmospheric behaviors are correlated solely with the shape of the profile, as opposed to being correlated with the combination of shape and Doppler-shift.
A third observation relates to how and where the methods perform poorly, in terms of profiles not being a good fit for their assigned clusters. As mentioned, cluster #56 in \(k\)-means does not seem to be well captured by \(k\)-Shape. It turns out that most of the profiles from this cluster are assigned to #68 and #73 in Fig. 3. These profiles are on the whole quite different from their assigned \(k\)-Shape centroids, but when the profiles and the centroids are shifted drastically across each other, the overlapping parts agree sufficiently for them to be grouped together. As \(k\)-Shape computes all possible shifts, it may occasionally find large shifts (and thereby a large clipping of the signal) to be the least bad option, leading to such apparently poor assignments. That type of signal clipping does not happen with \(k\)-means.
On the other hand, the \(k\)-means clusters appear to have issues distinguishing profiles where there is a large difference in signal
Figure 2: \(k\)-means clusters for synthetic Ca ii 854.2 nm intensity profiles, using 100 clusters on \(z\)-normalized profiles. The red line is the cluster centroid profile (average), while the black lines are all individual profiles assigned to each cluster. The grey line is a visual aid that denotes the position of \(\lambda_{0}\)
strength over a narrow wavelength region. For instance, the \(k\)-means cluster #79 in Fig. 2 turns out to be a mix of profiles with enhanced shoulders on either the right side or on both sides of the line core, as well as some with only a weakly enhanced right shoulder followed by a second absorption feature to the right. In the \(k\)-Shape clustering, the vast majority of these profiles are assigned to #77, #78, #94 and #95 in Fig. 3.
To summarize, neither method performs ideally, in the sense that both have clusters where some members are rather poorly represented by the centroids. The obvious way to improve the fidelity of the clusters is to increase the number of clusters, or possibly do more re-initializations. However, the methods seem to complement each other, each to an extent balancing out the other's weaknesses, and are useful as starting points for human analysis.
### CBG-like profiles
As an example of the sort of analysis facilitated by these kinds of clustering techniques, we decided to perform an in-depth examination of the family of asymmetric blue-lobed single-peaked Stokes I profiles found in Fig. 2 (exemplified by cluster number #70 and #72) and Fig. 3 (exemplified by cluster number #64 and #65). These profiles are reminiscent of the chromospheric bright grains (CBGs) seen in the Ca ii H and K lines, see for instance (Carlsson and Stein, 1997; Mathur et al., 2022) and references therein, so we call them CBG-like.
Fig. 4 shows the Stokes \(I\) and Stokes \(V\) signals (with each profile normalized to its nearby continuum Stokes \(I\) value), as well as the stratification of temperature, line-of-sight velocity, and line-of-sight magnetic field strength for all the profiles belonging to \(k\)-means clusters #70 and #72. The atmospheric quantities are plotted as a function of the logarithm of optical depth for radiation at wavelength 500 nm (5000 Å), \(\log(\tau_{5000})\). Throughout this paper we use the convention that positive heights, velocities and vertical magnetic field components point outwards from the solar surface. Each row of Fig. 4 corresponds to one cluster, and the profiles are stacked along the vertical axis for each panel.
Looking at the intensities we see that the clusters are indeed well constrained for the most part. The \(k\)-means method produces clusters where the emission peak is at approximately the same wavelength throughout each cluster, but with some variance in the other features of the profile shapes. The \(k\)-Shape method, on the other hand, retrieves clusters where the location of the emission peak varies considerably in wavelength, but the shapes in each cluster seem more consistent in their shapes. For
Figure 3: \(k\)-Shape clusters for synthetic Ca ii 854.2 nm intensity profiles, using 100 clusters on \(z\)-normalized profiles. The red line is the cluster centroid from \(k\)-Shape, and the black lines are all individual profiles assigned to the cluster. The grey line is a visual aid that denotes the position of \(\lambda_{0}\)
instance, the wavelength distance of the slope from peak to bottom seems to be more regular, and the red-side absorption features show less variance.
As for the Stokes \(V\) profiles, with both methods the wavelength positions of the strongest Stokes \(V\) signals seem to coincide with the sharpest changes in the intensity, as one might expect from the Weak Field Approximation. There do not, however, seem to be any other universal tendencies in Stokes \(V\) across all the CBG-like clusters. Similarly for the stratification of the line-of-sight magnetic field strengths, there do not appear to be clear tendencies either within or across the clusters. This suggests that the structure of the vertical magnetic field component does not play a direct role in the formation of these CBG-like Stokes \(I\) profiles.
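This consistency can be checked directly against the synthetic profiles with the usual weak-field expression; the sketch below assumes wavelengths in Å, field strengths in gauss and an effective Landé factor of 1.10 for Ca ii 854.2 nm, with a purely illustrative absorption profile (none of these numbers are taken from the simulation discussed here).

```python
import numpy as np

def wfa_stokes_v(wav_aa, stokes_i, b_los_gauss, geff=1.10, lambda0_aa=8542.09):
    """Weak-field approximation: V(lambda) ~ -4.67e-13 * lambda0^2 * g_eff * B_los * dI/dlambda,
    for wavelengths in Angstrom and B_los in gauss."""
    didlambda = np.gradient(stokes_i, wav_aa)
    return -4.6686e-13 * lambda0_aa**2 * geff * b_los_gauss * didlambda

# Illustrative absorption profile with a 200 G line-of-sight field:
wav = np.linspace(8542.09 - 0.5, 8542.09 + 0.5, 100)
intensity = 1.0 - 0.6 * np.exp(-((wav - 8542.09) / 0.15) ** 2)
v_wfa = wfa_stokes_v(wav, intensity, b_los_gauss=200.0)
print(v_wfa.max())  # the V lobes peak where dI/dlambda is steepest
```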
What does seem to be common to all the clusters, and therefore important for the formation of these profile shapes, is the depth-stratification of temperature and line-of-sight velocities. In most cases we see a temperature increase in the atmosphere, followed by a large velocity gradient slightly higher up. Typically this manifests as upflowing material from below meeting downflowing material from above, but not exclusively, as there are some instances of faster downflows from above meeting slower downflows, i.e. there is not necessarily a sign change in the vertical velocity, but there is a significant change in speed.
That the temperature increase occurs deeper in the atmosphere than the velocity gradient, as well as the fact that the absolute values of the velocity are less important for the formation of these shapes than the presence of a strong gradient, is more easily seen with the \(k\)-Shape clusters as each of them contains the CBG-like profile shapes at a range of Doppler shifts. In any case, the correlation between the temperature increase, the velocity-gradient and the profile shape is certainly made clearer when comparing the results of both clustering methods.
In terms of explaining the formation of these profiles, we are reminded of the interpretation of Ca ii K and H bright grains provided in (Carlsson & Stein, 1997) as signatures of acoustic shocks propagating upwards through the chromosphere, with the asymmetry being caused by Doppler shifts of the opacity across the shock front. The increased temperature enhances the local source function, which produces enhanced emission. The velocity gradient to more rapidly downflowing material above the heating event causes an opacity shift as the absorbing material is shifted to redder wavelengths, letting the bluer part of the profile escape while attenuating the redder part.
A point of note is that the correlation between the atmospheric structure and the CBG-like profile shapes is apparent straight from the clustering when we have access to underlying atmosphere. This allowed a qualitative interpretation of the profiles' formation without having to resort to using response functions or contribution functions, which are ill-defined for the case of 3D radiative transfer.
### Double peaked profiles
As another example, we now consider the double peaked profiles seen in \(k\)-means clusters #98 and #100, and in \(k\)-Shape
Figure 4: Stokes \(I\) and \(V\) profiles for two clusters, along with some atmospheric parameters for their simulation columns. Here showing the \(k\)-means clusters #70 (_top row_) and #72 (_bottom row_) from Fig. 2, both of which have CBG-like profiles. All profiles for each cluster are stacked along the vertical axes of the plots, so the y-axis merely counts the profile number. The left column shows the continuum-normalized intensity versus wavelength from line core. The second-from-left column shows the continuum-normalized Stokes \(V\) profiles. The last three columns show, respectively, the temperature, the line-of-sight velocity, and the line-of-sight magnetic field strength, as a function of \(log(\tau_{5000})\).
Figure 5: Same as Fig. 4, but for the \(k\)-Shape clusters #64 (top) and #65 (bottom) from Fig. 3.
Figure 6: Same as Fig. 4, but for the \(k\)-means clusters #98 (top) and #100 (bottom) from Fig. 2.
clusters #97 and #100. Similar to Figs. 4 and 5, the continuum-normalized intensity and Stokes \(V\) signals, as well as the height-stratified temperature, line-of-sight velocity, and line-of-sight magnetic field strength for all the individual profiles in each cluster are shown in Figs. 6 and 7 for the \(k\)-means and \(k\)-Shape clusters, respectively.
Once again the clusters, on the whole, seem fairly well constrained regarding the shape of the intensity profiles. Here, there seems to be a larger variation in the absolute values of the intensities compared to the previous example. This sort of variation is not unexpected; since the \(z\)-normalization scales each profile independently to have a standard deviation equal to one, our clusters are relatively insensitive to amplitudes, focusing instead on the shapes. Comparing the methods, we see they mostly recover the same profiles. An exception is that the \(k\)-means cluster #98 in the top row of Fig. 6 has some unique profiles around profile number 300 which appear to have either a very weak left peak or only a single peak on the right, followed by a prominent absorption feature to the right of the rightmost peak. Looking at the temperature and velocity structure for these atypical profiles with suppressed left peaks, it appears they have a temperature enhancement coinciding in height with a moderate downflow. This temperature enhancement persists upwards through a velocity gradient to a region of strong upflow, before it hits a very strong downflow. Their formation can potentially be explained in the same manner as the CBG-like profiles, but with an oppositely signed velocity gradient, and with the strong downflow above the upflow causing the additional strongly redshifted absorption feature.
Returning to the general behaviour of the clusters, we find that the Stokes \(V\) profiles seem to behave as expected from the weak-field approximation, in that they follow the behaviour of the intensity profiles. There is, however, a rather interesting region between profile numbers 200 and 300 in the bottom row of Fig. 6, where the rightmost Stokes \(V\) signal is very low despite a gradient in the intensity, and the vertical magnetic field component has a sign change around \(\log\tau_{5000}=-4\).
The temperature structure of the atmosphere is more varied for the double peaked profiles, compared to the CBG-like profiles. There are both regions of temperature enhancements with little variation spanning decades in \(\log\tau_{5000}\), and hot regions bounded by colder plasma above and below. The common feature for all these double peaked profiles is enhanced temperatures in the range of \(-5<\log\tau_{5000}<-3\). That was also the case for the CBG-like profiles, though the CBG-like profiles seldom showed these colder layers above the first strong temperature increase.
The vertical velocities are also rather varied in their structure, but three general features stand out compared to the CBG-like profiles from before. Firstly, the shift from upflows (or weak downflows) to strong downflows at the top tends to occur at a higher point in the atmosphere. Secondly, the starting points for the temperature enhancements coincide with slower plasma velocities and weaker velocity gradients, as opposed to the CBG-like profiles where the temperature increase starts slightly below strong velocity gradients. Thirdly, we note that the second velocity layer from the top, roughly \(-5.5<\log\tau_{5000}<-4.5\), typically shows low to moderate velocities and fairly modest gradients. As such, the effect of opacity shifting in this layer is less, and both intensity peaks due to the temperature enhancements survive.
Another noteworthy point, is that when these double peaked profiles do have downflows from the top extending deeper (to \(\log\tau_{5000}\approx-5.5\)), the downflows are very strong and there is a corresponding absorption feature on the red side of the reddest
Figure 7: Same as Fig. 4, but for the \(k\)-Shape clusters #97 (top) and #100 (bottom) from Fig. 3.
peak. A possible interpretation is that the previously discussed opacity shifting is so red-shifted in those cases, that it overshoots the red peaks from the slower flowing regions and therefore does not suppress them.
Interestingly, and contrasting with the CBG-like profiles, the vertical component of the magnetic field does in many of these double peaked profiles display some correlations with the vertical velocities and temperature stratifications. To wit, there are areas of Figs. 6 and 7 where the velocities change signs coinciding with an appreciable gradient in vertical field strength to more negative (downward) values. Furthermore, the starting heights of the temperature increases coincide with the appearance of the stronger vertical magnetic field components; particularly obvious examples are profiles number 100 through 200 in the bottom row of Fig. 6, and profiles number 300 through 500 in the top row of Fig. 7.
In summary, these double peaked profiles seem to arise from a range of different atmospheric conditions. The common features are increased temperatures in the low chromosphere/upper photosphere, coinciding with low or modest velocities and weak velocity gradients. This, combined with cospatial enhanced vertical magnetic field strengths, suggests that these profiles are not all caused solely by acoustic shocks, in contrast with the CBG-like profiles. Whether the cause of the heating is due to a magnetic phenomenon, or if we simply see already hot plasma being transported, is unclear from this analysis.
### The strongest Stokes \(V\) profiles
We have so far focused on the clustering of intensity profiles, since the \(z\)-normalization scaled Stokes \(V\) signals of very different amplitudes to a misleadingly similar range. Many of our Stokes \(V\) profiles contained only very weak signals, and clustering according to the shapes of such weak signals should not be expected to provide much diagnostic information. However, by restricting ourselves to look only at the Stokes \(V\) profiles containing an (unsigned) amplitude larger than 0.5% of the nearby continuum intensity we could perform a clustering on profiles with similar strengths. Out of our \(512\times 512\) synthetic profiles, only 7054 (\(\approx 2.7\%\)) matched that selection criterion. The results of \(k\)-means and \(k\)-Shape clustering with \(k=20\) clusters on this subset of Stokes \(V\) profiles are shown in Fig. 8 and Fig. 9 respectively.
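To make this selection and clustering step concrete, a minimal sketch is given below; it is purely illustrative, assuming the profiles are held in a NumPy array and using the tslearn package for the \(z\)-normalization and for both clustering methods, with placeholder file and variable names rather than the actual pipeline used in this work.

```python
import numpy as np
from tslearn.preprocessing import TimeSeriesScalerMeanVariance
from tslearn.clustering import TimeSeriesKMeans, KShape

# Placeholder inputs: Stokes V profiles (n_profiles, n_wavelengths) and the
# nearby continuum intensity of each profile (n_profiles,)
stokes_v = np.load("stokes_v.npy")
continuum = np.load("continuum.npy")

# Keep only profiles whose unsigned amplitude exceeds 0.5% of the continuum
strong = np.abs(stokes_v).max(axis=1) > 0.005 * continuum
subset = stokes_v[strong]

# z-normalize each profile independently (zero mean, unit standard deviation),
# which makes the clustering sensitive to shape rather than amplitude
z = TimeSeriesScalerMeanVariance(mu=0.0, std=1.0).fit_transform(subset)

# Cluster the same z-normalized profiles with both methods, k = 20
km = TimeSeriesKMeans(n_clusters=20, metric="euclidean", random_state=0).fit(z)
ks = KShape(n_clusters=20, random_state=0).fit(z)

print("k-means cluster sizes:", np.bincount(km.labels_))
print("k-Shape cluster sizes:", np.bincount(ks.labels_))
```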
In this case, we deliberately selected a rather low number of clusters. This was partly done to avoid having clusters with very few members considering our reduced dataset, and partly to compare the performance of the two methods when using a very limited, and possibly too low, number of clusters. It is obvious from looking at Figs. 8 and 9 that 20 clusters is not sufficient to capture all the complexities present in the profiles with either method, though the clusterings do reproduce the primary features of the profiles.
Comparing these two clustering results reveals some interesting differences. Most noticeably, not all shapes are common to both methods. The double peaked Stokes V profiles of cluster number #8 and #10 in the \(k\)-Shape result are not retrieved as a separate class by the \(k\)-means method; instead they are mixed into most of the \(k\)-means clusters, though primarily into #1, #7, #10, #12 and #17. On the other hand, the valley-peak-valley shape apparent in cluster number #16 from the \(k\)-means method does not appear in the \(k\)-Shape case. Looking in more detail at the individual profiles comprising that cluster, we find almost no profiles with a shape similar to that of the cluster mean. The triple-lobed shape of the cluster mean (marked in red) is instead mostly a mix of valley-peak and peak-valley shapes. In this case, the \(k\)-Shape centroids are more faithful representations of the shapes picked up by each cluster.
In general, the clusters found by \(k\)-means contain one dominant feature, like a peak, a dip, or both, at a certain wavelength position with considerable variation in the rest of the signal. Furthermore, looking at cluster #13 or #16 in Fig. 8 we see that when the dominant feature in the cluster is multi-lobed, it might actually be a mix of single-lobed and multi-lobed signals grouped together, so long as their lobes occur at the same wavelength. This type of shape-mixing does not happen as readily with \(k\)-Shape; contrast \(k\)-means cluster #13 with \(k\)-Shape clusters #15 and #17. Also, \(k\)-Shape seems to retrieve profiles with more commonality at the weaker parts of the signal; compare for instance \(k\)-means clusters #5, #10 and #19 with \(k\)-Shape clusters #1, #5 and #13. \(k\)-Shape does, however, occasionally struggle when excessive shifts of the signal cause clipping of the features at the edges, which can be most easily seen in cluster #1, #19 or #20 of Fig. 9. While it is by no means perfect, we find, in conclusion, that \(k\)-Shape performs markedly better than \(k\)-means at identifying shapes with this particular combination of complex signals and low number of clusters. How well that observation generalizes to other datasets, or cluster numbers, or both, is not clear, and beyond the scope of the current work. It does, however, indicate the type of problems where \(k\)-Shape can potentially provide an advantage over \(k\)-means. As a note, we have also performed this clustering experiment with \(k\)-means on the continuum-normalized Stokes \(V\) profiles and found that their behavior is very similar to the \(z\)-normalized case discussed above.
## 4 Discussion and Conclusions
We have used the \(k\)-means and \(k\)-Shape clustering techniques to group synthetic Ca ii intensity and Stokes \(V\) profiles, generated by 3D radiative transfer calculations from a 3D MHD simulation, according to their profile shapes.
Using \(k=100\) clusters for the intensities resulted in both methods retrieving qualitatively similar 'families' of clusters. While the \(k\)-means method produced clusters whose features were strongly coherent with regard to wavelength, the \(k\)-Shape method, being insensitive to Doppler shifts, produced clusters where the same shape appeared over a range of wavelength shifts. Regarding the methods' shortcomings, we found that \(k\)-Shape occasionally would mislabel some profiles by clipping the signals at the edges when comparing across Doppler shifts, while \(k\)-means at times would lump rather differently shaped profiles together so long as their strongest feature occurred at the same wavelength.
Armed with full knowledge of the simulation's atmospheric parameters, we took an in-depth look at a particular set of profile shapes and arrived at an explanation of their formation by looking at the correlations in the underlying atmospheric structure. We remark that the most interesting aspect of this exercise was not the description itself of how those profile shapes are formed, but rather how we arrived at it. In that use case, there did not appear to be much benefit in using one method over the other in terms of the results; though \(k\)-means was significantly quicker computationally. However, we do note that using both methods gave a stereoscopic view of the data, making it easier to determine which atmospheric quantities were important.
Doing a clustering analysis of the Stokes \(V\) profiles, based on their shapes, proved difficult due to the large variations in signal strength being masked by the \(z\)-normalization required by \(k\)-Shape, causing strong and weak signals to appear deceivingly
similar. Restricting ourselves to a subset of the strongest Stokes \(V\) profiles, we performed a clustering with \(k=20\) clusters using both methods. We found that the methods showed the same tendencies as with the intensity, but more strongly pronounced due to the lower number of clusters and more complex shapes. In this setting we found that \(k\)-means clearly performed qualitatively worse than \(k\)-Shape at creating clusters with coherent shapes; though it is difficult to quantitatively compare the methods since they use very different metrics.
In conclusion, \(k\)-Shape seems interesting for use cases where one wants human interpretation and small numbers of clusters. Another interesting possibility is to use the \(k\)-Shape distance metric to search an observation or simulation for the profiles with shape most similar to a certain prototype, for example when trying to detect Ellerman bombs. We want to stress that \(k\)-Shape is, however, not at all suited to usage cases like (Sainz Dalda et al., 2019), where the purpose of clustering is to speed up inversions, as the centroids found by \(k\)-Shape do not correspond to a definite Doppler-shift nor to an absolute intensity. In those cases,
Figure 8: \(k\)–means clusters for Stokes \(V\) profiles, using 20 clusters on \(z\)-normalized Stokes \(V\). The red line is the cluster centroid (average), and the black lines are all individual profiles assigned to the cluster. The blue line is a visual aid that denotes the position of \(\lambda_{0}\). The bottom two rows show all the \(z\)-normalized Stokes \(V\) profiles belonging to the corresponding clusters in the two top rows, with the individual profiles stacked along the vertical axis. It should be noted that the clusters are not equally populated, so the grey-scale maps will have different densities of profiles along the vertical axis.
\(k\)-means is the better option, and one can easily increase the number of clusters beyond what a human can reasonably process. For a qualitative clustering, aimed towards human interpretation and with a comparatively small number of clusters, we find that \(k\)-Shape can be a useful complement to, and sometimes better than, the more well-known \(k\)-means method.
###### Acknowledgements.
The authors wish to thank Mats Carlsson for providing the Bifrost atmosphere used in this paper. We also wish to thank the anonymous referee for comments and suggestions that improved the clarity of this manuscript. This work has been supported by the Research Council of Norway through its Centers of Excellence scheme, project number 262622. Computational resources have been provided by UNINET Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway. The computations were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at the PDC Centre for High Performance Computing (PDC-HPC) at the Royal Institute of Technology partially funded by the Swedish Research Council through grant agreement no. 2018-05973.
| The shapes of Stokes profiles contain a wealth of information about the atmospheric conditions in which they were generated. However, a variety of atmospheric structures can produce very similar profiles. A proper interpretation of observations therefore requires a good understanding of how the shapes of Stokes profiles depend on the underlying atmosphere. One excellent tool in this field is forward modelling, that is, computing synthetic spectra from realistic solar simulations and studying those spectra. Modern simulations produce millions of spectral profiles per snapshot. With such numbers, it becomes necessary to use automated procedures to classify the profiles according to their shape. Here, two complementary methods, k-means and k-Shape, are used to group profiles of similar shape, and the resulting classes are
2304.02886 | Automatic ICD-10 Code Association: A Challenging Task on French Clinical
Texts | Automatically associating ICD codes with electronic health data is a
well-known NLP task in medical research. NLP has evolved significantly in
recent years with the emergence of pre-trained language models based on
Transformers architecture, mainly in the English language. This paper adapts
these models to automatically associate the ICD codes. Several neural network
architectures have been experimented with to address the challenges of dealing
with a large set of both input tokens and labels to be guessed. In this paper,
we propose a model that combines the latest advances in NLP and multi-label
classification for ICD-10 code association. Fair experiments on a Clinical
dataset in the French language show that our approach increases the $F_1$-score
metric by more than 55\% compared to state-of-the-art results. | Yakini Tchouka, Jean-François Couchot, David Laiymani, Philippe Selles, Azzedine Rahmani | 2023-04-06T06:31:54 | http://arxiv.org/abs/2304.02886v1 | # Automatic ICD-10 Code Association: A Challenging Task on French Clinical Texts
###### Abstract
Automatically associating ICD codes with electronic health data is a well-known NLP task in medical research. NLP has evolved significantly in recent years with the emergence of pre-trained language models based on Transformers architecture, mainly in the English language. This paper adapts these models to automatically associate the ICD codes. Several neural network architectures have been experimented with to address the challenges of dealing with a large set of both input tokens and labels to be guessed. In this paper, we propose a model that combines the latest advances in NLP and multi-label classification for ICD-10 code association. Fair experiments on a Clinical dataset in the French language show that our approach increases the \(F_{1}\)-score metric by more than 55% compared to state-of-the-art results.
natural language processing, icd-10, clinical document, unstructured data, multi-label classification, supervised learning, health, transformers
## I Introduction
For a more accurate long-term follow-up, a patient's stay in a health center is usually reported in digital documents which constitute the patient's medical record. Written by the patient's physicians, it is composed of operating reports, clinical notes, liaison letters, etc. In a large number of countries, each patient record is then classified according to the International Classification of Diseases (ICD). ICD is a medical classification system used for coding diseases and other health-related conditions. It is maintained by the World Health Organization (WHO) and is widely used globally as a standard for tracking health statistics, billing for medical services, and conducting research. In its \(10^{th}\) edition [21], ICD is organized into chapters based on different body systems and disease categories. The chapters are further divided into subcategories that provide more specific information about the condition being coded. Each code consists of an alphanumeric string that includes a category code, a subcategory code, and a descriptor code (up to seven characters). The ICD-10 classification system is used by healthcare providers and organizations worldwide to standardize the coding of medical conditions and facilitate the sharing of health information across different systems and platforms. This classification is the common foundation for epidemiology, public health research, and healthcare management. In addition, the reimbursement of medical expenses by public or private insurance companies directly depends on the codes associated with these medical records. This makes it even more important to associate the right codes with each patient's record. Finally, it should be noted that a more or less complex patient record can generate several ICD-10 codes.
Typically, in a hospital, the responsibility for the ICD-10 classification falls on the medical coders. The staff performing this task are specially trained professionals who use medical documentation to assign the appropriate ICD-10 codes to medical records. Medical coders work closely with healthcare providers, nurses, and other staff to ensure that the medical records are accurately encoded into this classification. In some hospitals, the ICD-10 classification is performed by the physicians. However, regardless of how medical coding is managed, accuracy and attention to detail are crucial to ensure that the data generated is reliable and useful for patient care and management. That is why automatically associating ICD codes to a medical record is a task that has been widely addressed in medical research in recent years [6, 2, 28, 9, 11].
With the recent advances in Natural Language Processing (NLP) and since medical records are unstructured medical documents, it makes sense to apply these theoretical and technological advances in the context of ICD-10 classification. Clearly, the emergence of the Transformers architecture [27, 10] has taken natural language processing to a new precision level. Several works have shown that the representations produced by these models are the most accurate and it is the most used architecture today in a large number of machine learning tasks (from text, to computer vision and time series) [27, 13, 16].
Nevertheless, ICD-10 automatic classification is a multi-label text classification task with tough challenges. For instance, the ICD-10 classification consists of about 140,000 codes (procedure codes and medical codes). Unless one has a huge dataset, extremely large computational resources, and an extremely long period of time, it seems unrealistic to believe that one could associate one of the 140,000 existing codes with a patient record with a high degree of accuracy.
This large number of labels clearly stresses existing deep learning models to their limits. Another challenge is the size of the medical notes which far exceeds the usual limit of transformer architectures (typically \(512\) tokens). Finally, working on non-English data is also challenging since the
vast majority of available open-source models are trained on English corpora.
In this paper, we propose to address the three previous challenges for the ICD-10 classification of French medical records. We developed a deep learning model that combines the latest advances in Natural Language Processing. This approach makes it possible to associate a non-negligible part of the existing ICD-10 codes with French-language patient records, with an \(F_{1}\)-score that outperforms the latest state-of-the-art approach by more than 55%.
This paper is organized as follows. Section II starts by recalling the state of the art of associating ICD codes with medical records. Section III presents the dataset used, on the one hand, to validate our approach and, on the other hand, to fairly compare the \(F_{1}\)-scores obtained by our approach with those obtained with existing approaches. The architecture of our ICD code association model is presented in Section IV. Results are presented and analyzed in Section V. Concluding remarks and future work are finally given in Section VI.
## II Related Work
### _Natural Language Processing_
NLP has significantly evolved in recent years with the joint appearance of the Transformers model [27] and its ability to generalize through transfer learning. ELMo [23] and BERT [10] have demonstrated this effectiveness, providing more accurate contextualized representations. Several pre-trained models then appeared, such as BERT, RoBERTa [18], etc. These models are pre-trained on a large amount of general-domain English text to capture the ability to model text data, and then refined on common classification tasks. In French, two main models have been proposed, i.e. FlauBERT [14] and CamemBERT [19]. Note that some multi-lingual models also exist, such as XLM-R [7]. Some models are also trained on domain-specific text corpora. For example, ClinicalBERT [1] and BioBERT [15] have been trained on medical data to address medical domain tasks. Unfortunately, there is no such model in the French language, leading to a gap between the usage of machine learning approaches on French documents compared to the same approaches on English ones. In general, Transformers models have a limited input size (\(512\) tokens in practice). In the case of clinical documents, this limit can become very penalizing since a typical patient document is generally much larger than \(512\) words or tokens. In [22] the authors proposed some hierarchical methods to tackle this problem. They divided the document into several segments that can be processed by a Transformer. Then the encoding of the segments is aggregated into the next layer (linear, recurrent neural networks, or another layer of Transformers). Recently, the sparse-attention approach, i.e. the _LongFormer_ model, has been proposed in [3]. It is composed of a local attention (attention between a window of neighbouring tokens) and a global attention that reduces the computational complexity of the model. It can therefore be deployed to process up to 4096 tokens.
### _ICD Code Association_
The automatic association of ICD codes is one of the most addressed challenges in medical research. With the emergence of neural networks and the evolution of natural language processing, several authors have tried to tackle this task. [6] and [2] used recurrent neural networks (RNNs) to encode Electronic Health Records (EHR) and predict diagnostic outcomes. On the other hand, [25] and [20] have used the attention mechanism with RNNs and CNNs to implement more accurate models.
The work of [29] and [26] present various ways to consider the hierarchical structure of codes. [29] used a sequence tree LSTM to capture the hierarchical relationship between codes and the semantics of each code. [5] proposed to train the integration of ICD codes in a hyperbolic space to model the code hierarchy. They used a graph neural network to capture code co-occurrences. LAAT [28] integrated a bidirectional LSTM with an attention mechanism that incorporates labels.
EffectiveCAN [17] used a squeeze-and-excitation network and residual connections as well as extraction of representations from all layers of the encoder for label attention. The authors also introduced focal loss to address the problem of long-tail prediction with \(58.9\)% of \(F_{1}\)-score on MIMIC 3 [12]. ISD [30] used shared representation extraction between high frequency layers and low frequency layers and a self-distillation learning mechanism to mitigate the distribution of long-tailed codes.
Recently, [11] proposed the PLM-ICD system that focuses on document encoding with multi-label classification. They used an encoding model based on the Transformers architecture adapted to the medical corpus. Associating ICD-10 codes amounts to finding, within a large set of codes, the codes corresponding to medical documents. For instance, MIMIC 3 [12] contains more than 8,000 codes, and handling such a large set of labels in classification is a challenging problem in machine learning. To overcome this problem, the authors used the Label-Aware Attention (LAAT) mechanism proposed in [28], which integrates labels in the encoding of documents. Finally, to solve the problem of long sequences, they used the hierarchical method. PLM-ICD is the current state-of-the-art model, achieving an \(F_{1}\)-score of \(59.8\)% on MIMIC 3 [12] and \(50.4\)% on MIMIC 2 [24].
In French, [9] proposed Convolutional Neural Networks (CNN) models with multi-label classification to automatically associate ICD-10 code. The authors used FastText [4] vectors with the skip-Gram algorithm for the encoding of documents. They first considered all the final labels of the dataset, then grouped them into families to reduce the number of classes. This model is trained on a private dataset of 28,000 clinical documents and reached \(39\)% of \(F_{1}\)-score with 6,116 codes and \(52\)% with 1,549 codes.
## III Dataset
This work is in collaboration with the Hôpital Nord Franche-Comté (HNFC), a French public health center that provided us with patient stays. For privacy reasons, all our
experiments were conducted on site and no data was taken out of the hospital.
A patient's stay is a set of successive visits in possibly different departments of the hospital. Each department produces a clinical document that describes the patient's stay in that department. These clinical documents are used by the medical coding specialists to associate the corresponding ICD-10 codes. We finally obtain a set of unstructured textual documents corresponding to the global stay of the patient to which a set of codes is associated. As clinical documents, we have for example operating reports, discharge letters, external reports or clinical notes. The obtained dataset, further denoted as ICD-10-HNFC dataset is therefore a database of groups of medical documents with associated codes. This system is well illustrated in Fig. 1.
The ICD-10-HNFC dataset is built for supervised deep learning. In supervised learning, to have an accurate model, there are several factors to consider. Is there enough training data? Is the number of classes consistent with the volume of data available? Is the frequency of classes in the dataset balanced? It is always difficult to find the perfect dataset that answers all these questions. In this paper, we worked not only on the main dataset, which consists in associating the raw ICD codes it contains, but also on sub-datasets, such as associating only the most frequent codes or code families instead of the raw codes.
### Class Reduction
As mentioned, the ICD is a classification system that is composed of thousands of codes. Given the large number of labels present in our basic dataset (shown in Table I), it is difficult to approach this classification task by considering all the classes present. By doing so, the results of the constructed model will be far from perfect. The most precise model to date in English for ICD-10 code association is PLM-ICD, which reached \(59.8\)% on MIMIC 3 with 8,922 labels [12] and \(50.4\)% on MIMIC 2 with 5,031 labels [24]. This proves the difficulty of this task. The first sub-dataset consists in reducing the codes to their first \(3\) characters, seen as a family. Therefore, instead of considering the raw codes, we will group them into families. This substantially reduces the number of classes to be handled by the model. This dataset is presented in Table I. We can see via the row "Code with less than 10 examples" in Table I that the reduction of the classes not only yields a more reasonable number of classes but also increases the frequency of the codes in the dataset.
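As an illustration of this grouping, the snippet below truncates each raw code to its first three characters and de-duplicates the result per document; the example codes and variable names are hypothetical.

```python
# Reduce raw ICD-10 codes to their 3-character families, per document
def to_families(codes):
    return sorted({code[:3] for code in codes})

raw_labels = [["I10", "E119", "E110"], ["J449", "I509"]]   # hypothetical documents
family_labels = [to_families(codes) for codes in raw_labels]
print(family_labels)   # [['E11', 'I10'], ['I50', 'J44']]
```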
### Code Frequency
Associating ICD-10 codes is a very frequent task in health centers. As a result, some codes occur more frequently than others. Automatically finding the most frequent codes is therefore particularly useful. Our second sub-dataset consists in building models based on the number of codes (\(K\)) that we consider most relevant. We evaluate the relevance based on the frequency of each code in the dataset. Thus, a model built on such a dataset will be able to associate the included codes with better classification performance.
### Additional Label
With the code frequency strategy (\(K\)), the dataset is therefore composed of entries whose associated codes belong only to the \(K\) most relevant codes. To keep the coherence of our dataset, an additional label is introduced to represent the codes which are not considered relevant (the least frequent codes). So instead of having \(K\) classes, the model will have \(K+1\) ones. Concretely, this additional class indicates that one or more further codes remain to be associated.
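A possible implementation of this frequency-based reduction is sketched below: codes outside the \(K\) most frequent ones are collapsed into a single extra class (named OTHER here purely for illustration).

```python
from collections import Counter

def reduce_to_top_k(label_sets, k):
    # Count how often each code appears across all documents
    counts = Counter(code for codes in label_sets for code in codes)
    top_k = {code for code, _ in counts.most_common(k)}
    reduced = []
    for codes in label_sets:
        kept = {c for c in codes if c in top_k}
        # Any remaining, less frequent code is mapped to the additional class
        if any(c not in top_k for c in codes):
            kept.add("OTHER")
        reduced.append(sorted(kept))
    return reduced, sorted(top_k)
```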
## IV Model Architecture
This section presents the different components of the model architecture we have developed and justifies the choices made to design it. As previously exposed, as we deal with the French language our choice was to fine-tune pre-trained transformer-based French models i.e. CamemBERT [19] and FlauBERT [14] for the implementation of the model architecture.
### _Global Document Representation_
As mentioned, the main constraint of Transformers is the limit on the number of tokens present in an input sequence. Since the average size of the clinical notes of the ICD-10-HNFC dataset exceeds this limit (\(747\) versus \(512\) as shown in Table I), basic Transformers cannot be used. Recently, [8] summarized the available methods for processing long sequences via Transformers. They can be grouped into hierarchical Transformers and sparse-attention Transformers, among which is the _Longformer_ model of [3] mentioned earlier. _Longformer_ can process up to \(4096\) tokens per sequence, which would address this limit. Unfortunately, there is no French pre-trained _Longformer_
Fig. 1: ICD-10-HNFC Dataset Construction
model to date. Therefore, in this paper, we will use the hierarchical method to tackle this problem.
Hierarchical Transformers [22, 8] are built on top of the Transformers architecture. A document \(D\) is first divided into segments \([t_{0},t_{1},\ldots,t_{|D|}]\), each of which must have fewer than \(512\) tokens (the limit of Transformers). These segments are encoded independently using a (typically pre-trained) Transformer. We then obtain a list of segment representations which must be aggregated to obtain the representation of the whole document \(D\). There are several ways to do this aggregation: the aggregator can be an average of the representations of all the segments of the document (mean pooling), the maximum of the representations in each dimension of the segments (max pooling), or a stacking of the segment representations into a single sequence. The aggregated sequence then serves as the input to the next layer.
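The sketch below illustrates the segment-and-aggregate idea with the Hugging Face transformers library and the publicly available camembert-base checkpoint; the segment length, mean pooling choice, and function names are assumptions made for illustration and do not reproduce the exact configuration trained in this paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
encoder = AutoModel.from_pretrained("camembert-base")

def encode_long_document(text, max_len=512):
    # Split the token sequence into segments short enough for the Transformer
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    step = max_len - 2                      # leave room for the special tokens
    segments = [ids[i:i + step] for i in range(0, len(ids), step)]
    reps = []
    with torch.no_grad():
        for seg in segments:
            seg = [tokenizer.cls_token_id] + seg + [tokenizer.sep_token_id]
            out = encoder(input_ids=torch.tensor([seg]))
            reps.append(out.last_hidden_state[0, 0])   # segment-level representation
    # Aggregate the segment representations (mean pooling; max or stacking also possible)
    return torch.stack(reps).mean(dim=0)
```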
#### Classification of a Large Number of Labels
To overcome the problem of a large set of labels (ICD-10-HNFC contains more than 6,000 codes), we used the Label-Aware Attention (LAAT) system as in [11]. LAAT consists in integrating the labels into the document representation. Label-Aware Attention captures important text fragments related to certain labels. Let \(H\) be the stacked representation of an input sequence. First, an intermediate representation \(Z\) and a label-wise attention weight matrix \(A\) are computed as follows:
\[Z=\tanh(VH)\]
\[A=\mathrm{softmax}(WZ)\]
where \(V\) and \(W\) are linear transforms. The \(i^{th}\) row of \(A\) represents the weights of the \(i^{th}\) label. The softmax function is performed for each label to form a distribution over all tokens. Then, the matrix \(A\) is used to perform a weighted-sum of \(H\) to compute the label-specific document representation:
\[D=HA^{T}\]
The \(i^{th}\) row of \(D\) represents the document representations for the \(i^{th}\) label. Finally, \(D\) is used to make predictions by computing the inner product between each row of \(D\) and the related label vector.
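A compact PyTorch sketch of such a label-aware attention head is given below; the dimension layout and projection size are assumptions, but the steps mirror the equations above (\(Z=\tanh(VH)\), \(A=\mathrm{softmax}(WZ)\), \(D=HA^{T}\), followed by a per-label inner product).

```python
import torch
import torch.nn as nn

class LabelAwareAttention(nn.Module):
    def __init__(self, hidden_size, n_labels, proj_size=256):
        super().__init__()
        self.V = nn.Linear(hidden_size, proj_size, bias=False)   # linear transform V
        self.W = nn.Linear(proj_size, n_labels, bias=False)      # linear transform W
        self.label_vectors = nn.Linear(hidden_size, n_labels)    # one scoring vector per label

    def forward(self, H):
        # H: token representations, shape (batch, seq_len, hidden_size)
        Z = torch.tanh(self.V(H))                     # (batch, seq_len, proj_size)
        A = torch.softmax(self.W(Z), dim=1)           # attention over tokens, one column per label
        D = A.transpose(1, 2) @ H                     # (batch, n_labels, hidden_size)
        # Inner product between each label-specific document vector and its label vector
        logits = (D * self.label_vectors.weight).sum(dim=-1) + self.label_vectors.bias
        return logits                                 # (batch, n_labels)
```

In training, such logits would typically be passed to a binary cross-entropy loss with one sigmoid per label, as is standard for multi-label classification.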
In this paper, several architectures were experimented with, such as the model without long sequence processing, the model with long sequence processing (max/mean pooling), and the model with LAAT. The global architecture is illustrated in Fig. 2.
## V Experiments and Analysis
In this section, we present the results of the experiments conducted with the previously detailed architectures and dataset. We compare the results of recent works (PLM-ICD[11], CNN[9]) on the association of ICD-10 codes with
the most precise model of this paper. To evaluate our model we use the most common performance measures in classification: precision, recall, and \(F_{1}\)-score. The micro average is used to aggregate the performances.
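For reference, the micro-averaged scores for a multi-label problem can be computed as below with scikit-learn, given binary indicator matrices of true and predicted codes; the toy arrays are placeholders.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Binary indicator matrices of shape (n_documents, n_labels); toy values for illustration
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 1]])

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="micro", zero_division=0
)
print(f"P={precision:.2f}  R={recall:.2f}  F1={f1:.2f}")
```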
### _Paper Models_
First, we conducted the experiments on the ICD-10-HNFC dataset with class reduction (\(1564\) labels) as detailed in Table I. This experimentation is performed with all the architectures developed in this paper. They are listed in Table II. Then we trained another model on the global ICD-10-HNFC dataset (\(6101\) labels) with the architectures that obtained the highest \(F_{1}\)-score in the previous experiment. The results are shown in Table II. The results confirm the effects of the different components that constitute our architectures. In summary, the LAAT approach outperforms the hierarchical methods, which are better than the base truncated model.
### _\(K\)-based Models_
As detailed in Section III, different models have been trained based on a number (\(K\)) of labels (i.e. the most frequent codes). We present here the evaluation of these models with \(K\) in \([10,50,100,200]\). As shown in Table III, models are less and less accurate when we increase the number of labels (classes). This is simply due to the aggregation of performances. The more different codes there are, the fewer instances of each code there are in the dataset, and the less easy the contextualization is.
### _Comparison with other Works_
Table IV compares the model with the highest \(F_{1}\)-score of this paper with the results of previous work on ICD-10 code association. It is difficult to compare the results, since these works do not use the same evaluation dataset and English works can benefit from specialized models such as ClinicalBERT [1]. For the French baseline, we implemented and trained the model proposed in [9] on the ICD-10-HNFC dataset. The result is shown in parallel with our proposal. Our model clearly outperforms the classification method used in [9]. On the same validation dataset, with class reduction (1564 labels), the \(F_{1}\)-score goes from 0.35 obtained with the model proposed in [9] to 0.55 with our proposal, i.e. an improvement of 57%.
With the raw codes (6161 labels), the \(F_{1}\)-score goes from 0.27 to 0.45, i.e. an improvement of 66.6%. The difference in scores with the results of PLM-ICD can be explained by its use of a context-specific (medical) Transformer, which has a vocabulary better adapted to the content of the documents.
## VI Conclusion
In this paper, we address the challenges of automatically associating ICD-10 codes to French clinical unstructured data. We have experimented with several Transformer architectures to address the challenges of long input sequences and large numbers of labels. We therefore propose an ICD-10 association model that uses the latest advances in natural language processing and achieves the highest results in the French language to date. Our future work will focus on the use of Large Language Models and few-shot learning techniques for ICD-10 classification.
## Acknowledgment
This work is (partially) supported by the EIPHI Graduate School (contract ANR-17-EURE-0002).
| Automatically assigning ICD codes is a well-known natural language processing (NLP) task in medical research. In recent years, NLP has evolved significantly with the emergence of pre-trained language models based on the Transformer architecture, mainly in English. This paper adapts these models to automatically associate ICD codes. Several neural network architectures have been experimented with to address the challenge of handling both a large number of input tokens and a large number of labels to be predicted. In this paper, we propose a model that combines the latest advances in NLP and multi-label classification for ICD-10 code association. Fair experiments on a clinical dataset in the French language show that our approach increases the \(F_{1}\)-score metric by more than 55% compared to state-of-the-art results.
2304.13753 | Debris Rings from Extrasolar Irregular Satellites | Irregular satellites are the minor bodies found orbiting all four Solar
System giant planets, with large semi-major axes, eccentricities, and
inclinations. Previous studies have determined that the Solar System's
irregular satellites are extremely collisionally evolved populations today,
having lost $\sim$99 per cent of their initial mass over the course of hundreds
of Myr. Such an evolution implies that the irregular satellites must have
produced a population of dusty collisional debris in the past, which is
potentially observable due to the resulting reprocessing of stellar light. In
this paper we examine the signatures of the debris discs produced by extrasolar
analogues of this process. Radiation pressure, quantified by the parameter
$\beta$, is the driving force behind the liberation of dust grains from the
planetary Hill sphere, and results in the formation of circumstellar dust
rings, even in the absence of an underlying belt of asteroids in the system.
Our simulated discs reproduce many of the same features seen in some classes of
observed debris discs, such as thin ring morphology, a large blowout size, and
azimuthal symmetry. We compare our simulated discs' radial profiles to those of
the narrow dust rings observed around Fomalhaut and HR 4796A, and show that
they can broadly reproduce the observed radial distribution of dust. | Kevin T. Hayakawa, Bradley M. S. Hansen | 2023-04-26T18:00:04 | http://arxiv.org/abs/2304.13753v1 | # Debris Rings from Extrasolar Irregular Satellites
###### Abstract
Irregular satellites are the minor bodies found orbiting all four Solar System giant planets, with large semi-major axes, eccentricities, and inclinations. Previous studies have determined that the Solar System's irregular satellites are extremely collisionally evolved populations today, having lost \(\sim\)99 per cent of their initial mass over the course of hundreds of Myr. Such an evolution implies that the irregular satellites must have produced a population of dusty collisional debris in the past, which is potentially observable due to the resulting reprocessing of stellar light. In this paper we examine the signatures of the debris discs produced by extrasolar analogues of this process. Radiation pressure, quantified by the parameter \(\beta\), is the driving force behind the liberation of dust grains from the planetary Hill sphere, and results in the formation of circumstellar dust rings, even in the absence of an underlying belt of asteroids in the system. Our simulated discs reproduce many of the same features seen in some classes of observed debris discs, such as thin ring morphology, a large blowout size, and azimuthal symmetry. We compare our simulated discs' radial profiles to those of the narrow dust rings observed around Fomalhaut and HR 4796A, and show that they can broadly reproduce the observed radial distribution of dust.
keywords: stars: abundances - planets and satellites: formation - planets and satellites: dynamical evolution and stability - planets and satellites: gaseous planets - planets and satellites: composition
## 1 Introduction
Planet formation is a dynamic process, wherein the growth of planets is accomplished via a prolonged history of interactions between smaller bodies, leading to scattering and collision (e.g., Lissauer, 1993). This process is particularly important during the latter stages of planetary assembly, as planetary systems settle down into their final configurations. Indeed, the process of dynamical clearing is thought to continue for some time after planets have reached their final masses, as the remnants of the source population are ground down and removed from the system (e.g., Goldreich et al., 2004). Stars in this stage of development often show evidence for extended, tenuous, populations of dust (Wyatt, 2008). These dust grains scatter and re-radiate light from the central star, and can be observed either by looking for infrared excesses or by imaging in scattered light. The lifetime of dust in such systems is short, limited by radiation pressure and Poynting-Robertson drag (e.g., Burns et al., 1979), but the observation of this material offers essential insights into the architectures of newly formed planetary systems.
As a result, there have been substantial efforts to image such debris systems directly (see Hughes et al., 2018, for a recent summary). The results show a wide range of morphologies, from extended discs (e.g. those around \(\tau\) Ceti and HD 141569) to very narrow rings, such as those around the stars Fomalhaut, HR 4796A, and HD 141011. The variation in appearance presumably indicates some complexity in the evolution and outcome of the planetary assembly process, and there exist detailed models for the kinds of outcomes to expect (Wyatt, 2008; Krivov, 2010; Kenyon and Bromley, 2016; Lee and Chiang, 2016; Bonnefoy et al., 2021).
Debris discs are usually modelled with a source population as a belt of planetesimals undergoing collisional evolution, where the velocity dispersion is stirred either by the development of larger bodies within the belt, or as the result of perturbations from planets in the system (e.g., Wyatt, 2008). These are natural analogues of the Solar system dust generated either by collisions in the Asteroid belt or the Kuiper belt, although the extrasolar systems are much more massive.
However, there is a third Solar System small body population that is thought to have undergone substantial collisional evolution but has not yet been widely considered in the extrasolar context - namely the irregular satellites of the giant planets (Jewitt and Haghighipour, 2007). Evolutionary models of this population suggest that it could have been much larger in the past and could have generated a substantial population of dust during the course of losing \(\sim 99\) per cent of its original mass (Bottke et al., 2010). Indeed, such populations are thought to be an inevitable consequence of giant planet formation (Nesvorny et al., 2007) and the possible existence of irregular satellite clouds around extra-solar planets has recently been postulated to explain the curious properties of the exoplanet candidate Fomalhaut b (Kennedy and Wyatt, 2011; Kenyon and Bromley, 2016). These papers have focussed on the production of dust near the planet, but radiation pressure forces will cause much of the dust to spiral outwards into a circumstellar ring, and can therefore also contribute to the observed extended structures observed around many stars.
Therefore, our goal in this paper is to examine the kinds of debris
disc signatures one might expect from a source population initially confined to a planetary Hill sphere, and to explore their similarities and differences with those that result from more traditional population assumptions. An alternative to the traditional planetesimal disc model is particularly attractive in the case of the thinnest debris rings, such as those around Fomalhaut and HR 4796A, where the narrowness requires additional hypotheses such as shepherd satellites (Boley et al., 2012), instabilities involving a gaseous component (Lyra & Kuchner, 2013) or recent collisions (Olofsson et al., 2019). We will demonstrate that irregular satellite clouds naturally give rise to narrow rings and may therefore offer a more natural explanation for these structures, in the sense that the confinement of the original planetesimal population is due to the gravitational influence of the planet.
The outline of this paper is as follows. In § 2 we describe the dynamics of radiation pressure-driven dust in the reduced three-body problem, and examine the debris disc geometry that results if the source population of the dust is initially restricted to a planetary Hill sphere. In § 3 we then introduce a model for a source population of dust which we combine with the dynamical model to build a model of a candidate debris disc so that we may explore the observational implications of this hypothesis. We then compare these features to the present state-of-the-art observations of the two most prominent thin ring systems - Fomalhaut and HR 4796A - in § 5.
## 2 Dynamics of dust generated in an irregular satellite swarm
The scattering and absorption/re-emission of light in a debris disc is the action of dust particles in orbit about the star. In a traditional debris disc model, this dust is released in collisions between larger bodies in heliocentric orbits, and so reflects the heliocentric distribution of the parent bodies. Here we wish to examine the consequences when the dust is released by collisions between bodies that are localised in orbit around a planet. In addition to the radiation pressure force from the central star, their dynamics is also regulated by the gravitational influence of the planet.
### Single dust grain dynamics in the restricted three body problem with radiation pressure
Dust particles have infinitesimal mass, so their dynamics can be treated accurately within the paradigm of the restricted three-body problem, sketched out in Fig. 1(Murray & Dermott, 1999). However, the stream of photons emanating from the central star is absorbed or scattered by the dust grains, and exerts a radiation pressure. This means that small particles experience a non-negligible additional radial force, which reduces the effective gravity of the central object (Schuerman, 1980) and fundamentally alters the geometry of the pseudo-potential that regulates the dynamics. We can relate this purely radial radiation pressure force to the stellar gravitational force using the formalism of Burns et al. (1979) as follows,
\[\boldsymbol{F}=-\frac{G(1-\beta)M_{1}m}{r_{13}^{2}}\hat{\boldsymbol{r}}_{13}-\frac{GM_{2}m}{r_{23}^{2}}\hat{\boldsymbol{r}}_{23}, \tag{1}\]
where \(G\) is the Newtonian gravitational constant, \(\beta=|\boldsymbol{F}_{\rm rad}|/|\boldsymbol{F}_{\rm grav}|\) is the relative strength of radiation pressure compared to stellar gravity, \(M_{1}\) (\(=M_{*}\)) is the stellar mass, \(M_{2}\) is the planetary mass, \(m\) is the dust grain mass, \(r_{13}\) is the distance from the grain to the star, \(r_{23}\) is the distance from the grain to the planet, \(\hat{\boldsymbol{r}}_{13}\) is the radial unit vector away from the star, and \(\hat{\boldsymbol{r}}_{23}\) is the radial unit vector away from the planet. For \(\beta>0\), the dust grains behave as if they 'see' a star of reduced mass \((1-\beta)M_{*}\).
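To illustrate how equation (1) enters a numerical integration, a minimal Python sketch of the per-grain acceleration is given below, working in SI units; the vector layout and constants are assumptions for illustration and are not the actual integrator set-up used in this work.

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def dust_acceleration(r_grain, r_star, r_planet, m_star, m_planet, beta):
    """Acceleration of a dust grain: reduced stellar gravity plus planetary gravity."""
    d13 = r_grain - r_star      # vector from the star to the grain
    d23 = r_grain - r_planet    # vector from the planet to the grain
    a_star = -G * (1.0 - beta) * m_star * d13 / np.linalg.norm(d13) ** 3
    a_planet = -G * m_planet * d23 / np.linalg.norm(d23) ** 3
    return a_star + a_planet
```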
The parameter \(\beta\) reflects the strength of the radiation pressure, and can be more precisely quantified as
\[\beta=\frac{3L_{*}\langle Q_{\rm rad}\rangle}{8\pi GM_{*}c\rho D}, \tag{2}\]
where \(L_{*}\) is the stellar luminosity, \(\langle Q_{\rm rad}\rangle\) is the wavelength-averaged radiation pressure coefficient, \(c\) is the speed of light, \(\rho\) is the mass density of grains, and \(D\) is the diameter of the grain. This \(\beta\) can be thought of as a proxy for grain size \(D\) if we assume a constant mass density \(\rho\) among dust grains, since \(\langle Q_{\rm rad}\rangle\) is of order unity. Koschny & Grun (2001) performed laboratory experiments involving collisions between silicates and found the mass density of resulting grains to be \(\rho=2.8\) g cm\({}^{-3}\). We assume a value for \(\langle Q_{\rm rad}\rangle\sim 0.5\) as a rough average from Liu & Schmidt (2018). \(\beta\) can thus be evaluated as
\[\beta\approx 0.206\left(\frac{D}{1\ \mu{\rm m}}\right)^{-1}\left(\frac{L_{*}} {\rm L_{\odot}}\right)\left(\frac{M_{*}}{\rm M_{\odot}}\right)^{-1}, \tag{3}\]
where we have assumed typical values of luminosity and mass of a G-type star such as the Sun.
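As a quick numerical check of equation (3), the snippet below evaluates equation (2) directly in SI units with the grain properties quoted above; for a 1 \(\mu\)m grain around a Sun-like star it returns \(\beta\approx 0.21\), consistent with the prefactor in equation (3). The physical constants are standard rounded values.

```python
import numpy as np

G, c = 6.674e-11, 2.998e8            # SI units
L_sun, M_sun = 3.828e26, 1.989e30    # W, kg

def beta(D, L_star=L_sun, M_star=M_sun, Q_rad=0.5, rho=2800.0):
    """Radiation pressure parameter for a grain of diameter D (in metres)."""
    return 3.0 * L_star * Q_rad / (8.0 * np.pi * G * M_star * c * rho * D)

print(beta(1e-6))   # ~0.21 for a 1 micron grain around a Sun-like star
```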
The dynamics of the test particle in the co-rotating frame of the reduced three-body problem is governed by a pseudo-potential that accounts for both the gravity of the two massive bodies and the centrifugal force (Murray & Dermott, 1999). The pseudo-potential defines a set of 'zero-velocity curves' which restrict the motion of a test particle, depending on its initial conditions.
The fact that the radiation pressure only scales the effective mass of the central star means that the same formalism applies here, but the revision of the coefficients in the pseudo-potential results in an important qualitative difference. Although the direct gravitational force felt by the dust is reduced by the radiation pressure, the orbital velocity of the planet is not similarly affected, and so the relative contributions of the three different components of the pseudo-potential change with \(\beta\). In particular, at fixed mass, there is a critical \(\beta\) above which the L\({}_{2}\) point becomes the global potential minimum (instead of L\({}_{1}\), as in the \(\beta=0\) case). This distinction is important because it is
Figure 1: Schematic of the restricted three-body problem. The x- and y-axes make up the corotating centre-of-mass frame, rotating at an angular velocity \(n\), centered at point \(O\). The x’- and y’-axes make up the inertial centre-of-mass frame. \(M_{1}\) and \(M_{2}\) are massive bodies with the third body located at point \(P\).
this minimum that decides the direction in which dust, grinding down in a collisional cascade, leaves the Hill sphere and enters heliocentric orbit.
To illustrate the effects of this change in geometry, let us consider two different physical scenarios: one where radiation pressure is not important (i.e., turned 'off'; \(\beta=0\)) as in panel (a) of Fig. 2 and another where radiation pressure is important (i.e., turned 'on') as in panel (b) of Fig. 2. Without loss of generality, we take \(\beta=0.1\) for the radiation pressure scenario. If we wish to derive the minimum velocity of escape, we see that, in panel (a), particles will more readily escape through the L\({}_{1}\) Lagrange point than L\({}_{2}\). This behavior is well studied, such as in the case of Roche lobe overflow where mass transfer can occur between two bodies in a binary system. However, when radiation pressure is non-negligible, we see in panel (b) that the lowest velocity particles to escape now overflow L\({}_{2}\). Thus, the addition of radiation pressure into our equations of motion changes the physics from accretion on to the star to ejection of material outside the orbit of the planet. This is a consequence of the weakened effective gravity of the central star, which shifts its contribution to the pseudo-potential inwards and changes the relative heights of the L\({}_{1}\) and L\({}_{2}\) points. In Appendix B we review more thoroughly how this change in topology occurs as \(\beta\) is changed.
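The qualitative geometry of these forbidden zones can be reproduced from the pseudo-potential of the photogravitational restricted three-body problem, in which only the stellar term is scaled by \((1-\beta)\). The sketch below, in the usual non-dimensional units (total mass, planetary semi-major axis, and mean motion set to one), locates the critical Jacobi constant at L\({}_{2}\) numerically and shades the region where motion is forbidden; the normalization conventions assumed here may differ from those behind the Jacobi constants quoted in the caption of Fig. 2.

```python
import numpy as np
import matplotlib.pyplot as plt

mu = 9.54e-4   # Jupiter/Sun mass ratio; lengths in units of the planet's semi-major axis

def two_omega(x, y, beta):
    # Twice the pseudo-potential, with radiation pressure reducing only the stellar term
    r1 = np.hypot(x + mu, y)          # distance to the star at (-mu, 0)
    r2 = np.hypot(x - (1 - mu), y)    # distance to the planet at (1 - mu, 0)
    return (x**2 + y**2) + 2 * (1 - beta) * (1 - mu) / r1 + 2 * mu / r2

beta = 0.1
# Critical Jacobi constant at L2: the saddle of the pseudo-potential beyond the planet
xs = np.linspace(1.001, 1.5, 5000)
c_l2 = two_omega(xs, 0.0, beta).min()

# Shade the forbidden zone (2*Omega < C_J) for a value just below the L2 level,
# i.e. just after the gap at L2 has opened
x, y = np.meshgrid(np.linspace(-2, 2, 800), np.linspace(-2, 2, 800))
forbidden = two_omega(x, y, beta) < 0.999 * c_l2
plt.contourf(x, y, forbidden.astype(float), levels=[0.5, 1.5])
plt.gca().set_aspect("equal")
plt.show()
```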
There are essentially three fates for a dust grain: (i) accretion on to the planet, (ii) accretion on to the star, or (iii) escaping to infinity. Depending on the initial conditions, the dust grain may simply spiral into the planet after an irregular satellite collision, coating the planet, and any satellites, with dark dust and affecting its albedo (Burns et al., 1979; Bottke et al., 2010). Accretion on to the star may occur as the result of another radiation-related process: Poynting-Robertson (PR) drag. This is a consequence of the loss of angular momentum due to reradiation of absorbed energy by the dust, but is not taken into account here because it occurs on a longer time-scale than the direct dynamical effects of radiation pressure, and generally affects large particles more. We will ignore both PR drag and circumstellar collisions between dust grains in our simulations because their respective time-scales are longer than the ejection time-scales of individual dust grains from the system, as shown in Fig. 3. Detailed calculations are performed in Appendix A.
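For orientation, the classical spiral-in time under PR drag for a grain on an initially circular orbit, \(t_{\rm PR}=c\,r^{2}/(4GM_{*}\beta)\) (e.g., Burns et al., 1979), can be estimated as below; this is the standard textbook expression quoted purely for scale, not the detailed comparison of time-scales performed in Appendix A.

```python
G, c = 6.674e-11, 2.998e8                   # SI units
M_sun, au, yr = 1.989e30, 1.496e11, 3.156e7

def t_pr_years(r_au, beta, m_star=M_sun):
    """Poynting-Robertson spiral-in time from a circular orbit of radius r_au, in years."""
    r = r_au * au
    return c * r**2 / (4.0 * G * m_star * beta) / yr

print(t_pr_years(5.2, 0.1))   # roughly 1e5 yr at 5.2 au around a Sun-like star
```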
For our purposes, the most important outcome is escape from the Hill sphere through the L\({}_{2}\) point. Although the eventual outcome is escape to infinity, orbital integrations show that many trajectories spend multiple orbital periods in the vicinity of the outer edge of the relevant zero-velocity curve, before eventually spiralling outwards. This extended period of quasi-regular orbital behaviour thus gives rise to the observational appearance of a thin ring, allied with an exponential tail of orbits that extend out much farther. Such a configuration bears a qualitative resemblance to the 'birth ring + halo' model of many debris systems and we will examine its observational consequences below.
### Sample orbital integrations
To better understand how this change in geometry reflects itself in orbital behaviour, let us examine the behaviour of a few representative test particles before building a large ensemble population. We perform our numerical integrations using the Mercury N-body integrator (Chambers, 1999). We start with the simplest case of \(\beta=0\), which represents a particle that is too large for radiation pressure to have an appreciable effect. Each particle originates in the Hill sphere of its parent planet, and receives a 3-D velocity vector of magnitude \(v_{\rm i}\), whose direction is oriented randomly. (We also investigated the effects of preferring prograde or retrograde orbits for our initial conditions, but found no significant change in the results compared to randomly oriented orbits.) After a few orbits around the planet many grains slip through the L\({}_{1}\) Lagrange point and 'bounce' along the inner edge zero-velocity curve. After several excursions around the star, the grain returns to the Hill sphere through the L\({}_{1}\) orbit in a messy, rapidly precessing orbit. In the absence of a dissipative mechanism, this behavior basically repeats itself over time, with the grain being gravitationally shared by the star and the planet. On longer time-scales, Poynting-Robertson drag will eventually decouple the particle from the Hill sphere and it will spiral into the star.
Next we examine the case of radiation pressure turned 'on' with an intermediate strength of \(\beta=0.1\). The path of such a particle is shown in Fig. 4, for the case of a Jupiter-mass planet on a circular orbit of semi-major axis 5 au. In this case, the particle spirals outwards - rather than inwards - and makes several cardioid-shaped excursions, roughly several planetary semi-major axes in size, as shown in panel (a). This is a consequence of the aforementioned change in the geometry of the pseudo-potential, as shown in panel (b). Like in the \(\beta=0\) case, the grain will occasionally come to a sharp halt along the predicted zero-velocity curve. However, the fundamental alteration of the forbidden zone, caused by the addition of radiation pressure, means that the 'collision' occurs with the outer edge of the zero-velocity curve, not the inner one. In panel (c), we see that after a moderate number of dynamical time-scales, this behavior essentially repeats itself since the orbits all stay within \(\sim\)15 au. However, in panel (d), after a large number of dynamical time-scales, we see that the eccentricity of the grain has been pumped up dramatically, reaching an apoapsis up to \(\sim\)75 au, until it is effectively ejected from the system.
These sample integrations illustrate that the dynamics of particles released from the planetary Hill sphere under the influence of radiation pressure can reproduce the basic birth ring configuration of debris disc models, even without the underlying birth ring of planetesimals. We wish now to expand this into a proper model for debris discs. This means we need a more detailed source model, which will link the properties of the dust to the new underlying planetesimal population - the irregular satellite population. This is the focus of § 3.
Figure 2: _Panel (a):_ forbidden zone (in blue) for a Jacobi constant of \(C_{J}=3.038\) when radiation pressure is not included (\(\beta=0\)). _Panel (b):_ forbidden zone (in blue) for a Jacobi constant of \(C_{J}=3.824\) when radiation pressure is taken into account (\(\beta=0.1\)). The orange circle (not to scale) represents the location of the giant planet. In panel (a), we note that the Hill spheres of the star and planet overlap at the L\({}_{1}\) Lagrange point. Thus, dust grains originating in the planetary Hill sphere are permitted to escape into a circumstellar orbit. In contrast to panel (a), we note that in panel (b), there is now an opening at the L\({}_{2}\) Lagrange point for the dust grains to escape from.
### Forbidden zone thickness as a function of radiation pressure
An interesting question is how the results of our simulations will change depending on planet mass, since the study of exoplanets around nearby stars has discovered a great variety in planetary properties. The solutions to the restricted three-body problem depend not just on the mass of the secondary body (the planet), but specifically on the mass ratio of the secondary body to the primary body (\(\mu=M_{2}/M_{1}\)). Thus, a Saturn-like planet orbiting an M dwarf may have similar dynamics to those of a Jupiter-like planet orbiting a Sun-like star, if both systems have a mass ratio of \(\mu\sim 0.001\).
Increasing radiation pressure generally has the effect of increasing the forbidden zone thickness, as seen in the first column of Fig. 5. However, while the overall thickness increases, the radii of both the inner and outer edges of the forbidden zone actually decrease; the radius of the outer edge simply decreases by a smaller amount than that of the inner edge.
For a more comprehensive look at how the forbidden zone changes as a function of both radiation pressure strength and mass ratio, interested parties may refer to the discussion in Appendix C.
## 3 Generation of dust in an irregular satellite swarm
We assume that the particles whose orbits we track originate from collisions between irregular satellites orbiting around the giant planet. Irregular satellites revolve around their parent planet at relatively large distances compared to other moons (e.g., Bottke et al., 2010), so it is natural to characterize their distances in units of Hill radii, given by \(R_{\rm H}=a_{\rm p}[M_{\rm p}/(3M_{*})]^{1/3}\), where \(a_{\rm p}\) is the planetary semi-major axis and \(M_{\rm p}\) is the planetary mass. The original orbits of irregular satellites are believed to be roughly isotropically distributed (Jewitt & Haghighipour, 2007), so we investigate both prograde and retrograde orbits around the parent planet, which are typically found at \(r_{23}/R_{\rm H}\) values of \(\sim 0.1\) to \(0.5\), where \(r_{23}\) is the distance between the planet and the dust grain. Thus, we use those upper and lower limits to randomly generate starting locations for the dust grains.
Figure 4: Orbital trajectory of a single \(1~{}\mu\)m dust grain (in blue) overplotted on its respective \(\beta=0.1\) zero-velocity curves (in black). The orbits are shown at various evolutionary stages: \(t=1e2\) yr in panels (a)–(b), \(t=1e4\) yr in panel (c), and \(t=1e5\) yr in panel (d). Panel (b) represents a zoom-in of panel (a).
Figure 3: PR drag, collisional, and ejection time-scales of circumstellar dust grains as a function of \(\beta\), for two canonical examples discussed in this paper. _Left:_ For a Jupiter-mass planet orbiting at \(5.2\) au with \(10^{-3}~{}M_{L}\) of irregular satellites. _Right:_ For a Jupiter-mass planet orbiting at \(140\) au with \(1~{}M_{L}\) of irregular satellites. Collisions cannot always be ignored; we neglect PR drag and circumstellar grain collisions in this paper only because, in the situations studied here, the ejection time-scale dominates. Detailed calculations are performed in Appendix A.
We divide the discussion of initial velocities into magnitude and direction. We take the magnitude of the velocity to be 71 per cent of the Keplerian circular velocity at the debris's respective distance from the planet. Since it is a spherically symmetric cloud, we assume that the direction of the dust grain's velocity unit vector is random. Specifically, in polar coordinates \(\theta\) and \(\phi\), \(\cos(\theta)\) is distributed uniformly in [-1,1) and \(\phi\) is distributed uniformly in [0,2\(\pi\)). We find no significant difference between the qualitative results for orbits that are initially prograde or retrograde. Since the Keplerian velocity is given by \(v_{\rm kep}=(GM_{\rm p}/r_{23})^{1/2}\), it evaluates to
\[v_{\rm kep}=(2.248~{}{\rm km~{}s^{-1}})\left(\frac{M_{\rm p}}{\rm M_{\rm J}} \right)^{1/2}\left(\frac{r_{23}}{0.5~{}R_{\rm H}}\right)^{-1/2} \tag{4}\]
where \(\rm M_{\rm J}\) is the mass of Jupiter; the initial speed is then \(v_{\rm i}=0.71\,v_{\rm kep}\).
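For illustration, the sampling prescription above can be sketched in a few lines of Python; the physical constants and function names below are assumptions for demonstration and do not reproduce the actual Mercury input generation used for the integrations.

```python
# Illustrative sketch (not the actual integrator setup): isotropic initial
# conditions for grains released inside the planetary Hill sphere, following
# the prescription above (r_23 uniform in 0.1-0.5 R_H, speed 0.71 v_kep,
# direction with cos(theta) uniform in [-1, 1) and phi uniform in [0, 2*pi)).
import numpy as np

G = 6.674e-11       # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
M_JUP = 1.898e27    # kg
AU = 1.496e11       # m

def hill_radius(a_p, m_p, m_star):
    return a_p * (m_p / (3.0 * m_star)) ** (1.0 / 3.0)

def isotropic_unit_vectors(n, rng):
    cos_t = rng.uniform(-1.0, 1.0, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    return np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)

def draw_initial_conditions(n, a_p=5.2 * AU, m_p=M_JUP, m_star=M_SUN, seed=0):
    rng = np.random.default_rng(seed)
    r23 = rng.uniform(0.1, 0.5, n) * hill_radius(a_p, m_p, m_star)
    pos = r23[:, None] * isotropic_unit_vectors(n, rng)   # planetocentric positions
    v_kep = np.sqrt(G * m_p / r23)                        # circumplanetary Keplerian speed
    vel = 0.71 * v_kep[:, None] * isotropic_unit_vectors(n, rng)
    return pos, vel

pos, vel = draw_initial_conditions(5)
print(np.linalg.norm(vel, axis=1) / 1e3)   # initial speeds of a few km/s, cf. Eq. (4)
```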
### Rate of dust generation
A population of irregular satellites will generate a collisional cascade, in which planetesimals are ground down to micron-sized dust grains. Collisions between the largest collisionally coupled bodies of size \(D_{\rm max}\) initiate the cascade, creating numerous medium-sized bodies that further collide with each other to produce successively smaller bodies. In the traditional context of a circumstellar debris disc, the smallest collisionally coupled body's size \(D_{\rm min}\) is determined by the strength of the central star's radiation pressure, and tends to be about 1 micron. This is often referred to as the blowout size. Dohnanyi (1969) found that a self-similar, steady-state cascade follows a power law differential size distribution governed by
\[\frac{dN}{dD}\propto D^{-q}, \tag{5}\]
where \(D\) is the spherically symmetric grain's diameter and \(q\approx 3.5\). A dust grain is no longer in a bound orbit around the star when the ratio of radiation pressure force to gravitational force is greater than 0.5 (e.g., Pawellek & Krivov, 2015), i.e.,
\[\beta\equiv\frac{F_{\rm rad}}{F_{\rm grav}}\geq 0.5. \tag{6}\]
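As a sketch of how such a cascade population can be realized numerically, grain diameters may be drawn from the power law of Eq. (5) by inverse-transform sampling; the size limits below (1 micron to 30 km, in metres) are illustrative placeholders rather than values taken from the analysis.

```python
# Illustrative sketch: drawing grain diameters from the Dohnanyi distribution
# dN/dD ∝ D^(-q) with q = 3.5 (Eq. 5) by inverse-transform sampling.
import numpy as np

def sample_dohnanyi(n, d_min=1.0e-6, d_max=3.0e4, q=3.5, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 1.0, n)
    a = 1.0 - q   # exponent of the cumulative size distribution
    return (d_min ** a + u * (d_max ** a - d_min ** a)) ** (1.0 / a)

d = sample_dohnanyi(100_000)
print(np.median(d))   # dominated by the smallest sizes, as expected for q = 3.5
```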
In the case discussed here, there is an additional consideration. Fragments from irregular satellite collisions will continue to participate in the collisional cascade as long as they orbit within the planetary Hill sphere. However, once the radiation pressure is strong
Figure 5: Zero-velocity curves for both the classic restricted three-body problem (\(\beta=0\)) and with moderate radiation pressure included (\(\beta=0.1\)), as representative examples for a fixed mass ratio of \(M_{2}/M_{1}=0.001\) and initial velocity \(v_{\rm i}\) equal to 71 per cent of the local circumplanetary Keplerian velocity \(v_{\rm kep}\). The left and right columns represent zoomed-in versions of the respective red squares shown in the middle column. The top row shows the contours for the minimum Jacobi constant that arise from the initial conditions while the bottom row shows the contours for the maximum Jacobi constant. Thus, all possible Jacobi constants fall between these two extrema. Since the Jacobi constant only depends on the square of the velocity, all possible directions of velocity have implicitly been considered as well as all isotropic orbital configurations. The main two findings here are that the forbidden zone thickness increases with increasing radiation pressure, but the radii of both the inner and outer edges of the forbidden zone shrink with increasing radiation pressure, as seen in the left column. In other words, the radius of the outer edge of the forbidden zone shrinks less than that of the inner edge.
enough for the particle to escape the Hill sphere, the density of collision targets drops dramatically and the collisional time-scale becomes large. Thus, the minimum particle size in the cascade is set by the size for which the residence time within the Hill sphere equals the characteristic collision time for particles of that size. The residence time here is defined as the amount of time a dust grain spends in the Hill sphere at a given \(\beta\) before escaping. Conversely, this also sets a minimum \(\beta\) for the particles in the more extended debris disc and thus regulates their size.
We can find the collisional time-scale \(t_{\rm coll}\) for any member of the collisional cascade from \(t_{\rm coll}=1/(n\sigma v_{\rm rel})\), where \(n\) is the number density of particles that cause catastrophic collisions, \(\sigma\) is the cross section, and \(v_{\rm rel}\) is the relative velocity between impactors. The number density of particles is given by \(n=N/V\), where \(N\) is the number of particles and \(V\) is the volume they occupy. We calculate \(N\) by integrating Eq. 5 from \(D/2\) to \(2D\), the range by which we define sizes that are capable of a catastrophic collision. Additionally, we normalize Eq. 5 by integrating the collisional cascade over mass via
\[M_{\rm tot}=\int_{D_{\rm min}}^{D_{\rm max}}m\frac{dN}{dD}dD \tag{7}\]
where \(m=(4\pi/3)(D/2)^{3}\rho\) is the mass of a body in the cascade. Since the irregular satellites are distributed isotropically in a spherical cloud, we take this volume to be the fraction \(f_{\rm tot}\) of the Hill sphere volume that they occupy: \(V=f_{\rm tot}V_{H}=f_{\rm tot}(4\pi/3)R_{\rm H}^{3}\). Since gravitational focusing is not important for submillimeter-sized particles, the cross section is just the geometric cross section: \(\sigma=\pi(D/2)^{2}\). Lastly, we take the relative velocity \(v_{\rm rel}\) to be of order the circumplanetary Keplerian velocity \(v_{\rm kep}\) since the orbital inclinations of irregular satellites are randomly oriented. Putting everything together, we find that the collisional time-scale is
\[\begin{split} t_{\rm coll}&=(726\;{\rm yr})\;\left(\frac{\beta}{0.1}\right)^{-1/2}\left(\frac{M_{\rm tot}}{10^{-2}\,M_{\rm L}}\right)^{-1}\left(\frac{\rho}{1\;{\rm g\;cm^{-3}}}\right)^{1/2}\\ &\quad\times\left(\frac{D_{\rm max}}{30\;{\rm km}}\right)^{1/2}\left(\frac{M_{\rm s}}{{\rm M}_{\odot}}\right)^{-5/3}\left(\frac{M_{\rm p}}{{\rm M}_{\rm J}}\right)^{-2/3}\left(\frac{L_{\star}}{{\rm L}_{\odot}}\right)^{1/2}\\ &\quad\times\left(\frac{\langle Q_{\rm rad}\rangle}{0.5}\right)^{1/2}\left(\frac{a}{a_{\rm J}}\right)^{7/2}\left(\frac{f}{0.4}\right)^{1/2}\left(\frac{f_{\rm tot}}{0.098}\right),\end{split} \tag{8}\]
where \(M_{\rm tot}\) is the total mass of the collisional cascade, \({\rm M}_{\rm L}\) denotes lunar masses, \(a\) is the planetary semi-major axis, \(a_{\rm J}\) is the semi-major axis of Jupiter, and \(f\) is the orbital radius of the body as a fraction of the Hill radius (\(f=r_{23}/R_{\rm H}\)). As long as the residence time \(t_{\rm res}\) is larger than the collisional time-scale \(t_{\rm coll}\), we expect the grains to continue to grind down to smaller sizes, which increases \(\beta\) and shortens the residence time-scale. We empirically measure the residence time from our simulations, specifically defining \(t_{\rm res}\) at a given \(\beta\) to be the amount of time it takes for 50 per cent of the dust grains to exit the Hill sphere. A particle is defined as having exited the Hill sphere if the planetocentric distance \(r_{23}>R_{\rm H}\).
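For convenience, Eq. (8) can be transcribed directly as a scaling function; the argument names below are ours, and every argument simply mirrors the fiducial normalization quoted in the equation.

```python
# Direct transcription of the scaling relation in Eq. (8); calling the function
# with the default (fiducial) values returns the 726 yr prefactor.
def t_coll_yr(beta=0.1, m_tot_lunar=1e-2, rho_cgs=1.0, d_max_km=30.0,
              m_star_msun=1.0, m_p_mjup=1.0, l_star_lsun=1.0,
              q_rad=0.5, a_over_aj=1.0, f=0.4, f_tot=0.098):
    return (726.0
            * (beta / 0.1) ** -0.5
            * (m_tot_lunar / 1e-2) ** -1.0
            * rho_cgs ** 0.5
            * (d_max_km / 30.0) ** 0.5
            * m_star_msun ** (-5.0 / 3.0)
            * m_p_mjup ** (-2.0 / 3.0)
            * l_star_lsun ** 0.5
            * (q_rad / 0.5) ** 0.5
            * a_over_aj ** 3.5
            * (f / 0.4) ** 0.5
            * (f_tot / 0.098))

# Parameters of Fig. 6: one lunar mass of irregular satellites, rho = 3 g/cm^3
print(t_coll_yr(beta=0.18, m_tot_lunar=1.0, rho_cgs=3.0))   # of order 10 yr
```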
Fig. 6 shows the comparison of characteristic collisional and residence time-scales for the dust generated in irregular satellite collisions for a Jupiter-like planet orbiting a Sun-like star when \(M_{\rm tot}=1\;M_{\rm L}\) and \(\rho=3\;{\rm g/cm^{3}}\), with the other parameters being described by Eq. 8. At low \(\beta\) (large particles), the residence time within the Hill sphere is long, because radiation pressure is weak, but as \(\beta\) increases, the residence time falls sharply as the radiation pressure accelerates the dust. Although the collision time also gets shorter with decreasing size, the dependence is flatter. The net result is that the cascade to smaller size is truncated when \(\beta\) is large enough that the particles exit the Hill sphere before undergoing any more collisions. We identify the critical \(\beta=0.18\) from the intersection of the \(t_{\rm res}\) and \(t_{\rm coll}\) curves. This critical \(\beta\) represents the largest size dust grain that could escape from the Hill sphere. In principle, the size of an escaping particle could be smaller or larger, depending on the parameters assumed in Equation 8 to calculate the collisional time-scale. While it is possible for the residence time (blue curve in Fig. 6) and collisional time-scale (green line) to never intersect depending on the assumed parameters, particles could still escape from the Hill sphere. A short collisional time-scale just means that particles will grind down all the way to the classical "blow-out" size corresponding to approximately \(\beta=0.5\) before getting ejected out of the entire system (Krivov, 2010). We note here that the classical blowout size of \(\beta=0.5\) only applies to a dust grain on a purely circumstellar orbit, and should only be used as a general benchmark for systems like ours where there is a planet involved.
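A minimal sketch of reading off the critical \(\beta\) numerically is given below; the tabulated residence-time curve is a placeholder stand-in rather than the empirically measured one, so the recovered crossing only illustrates the method, not the quoted value of 0.18.

```python
# Sketch of the intersection procedure: the residence-time curve below is a
# placeholder (NOT the measured one); the collisional curve follows Eq. (8)
# evaluated at the Fig. 6 parameters (M_tot = 1 M_L, rho = 3 g/cm^3).
import numpy as np

betas = np.linspace(0.05, 0.40, 200)
t_coll = 12.6 * (betas / 0.1) ** -0.5        # yr; Eq. (8) scaling at Fig. 6 parameters
t_res = 1.0e4 * np.exp(-betas / 0.03)        # yr; illustrative placeholder decay

diff = np.log(t_res) - np.log(t_coll)
i = int(np.argmax(diff < 0))                 # first beta with t_res < t_coll
beta_crit = np.interp(0.0, [diff[i], diff[i - 1]], [betas[i], betas[i - 1]])
print(beta_crit)
```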
### Source Model
Our integrations yield a large number of trajectories as a function of time, for different \(\beta\). A realistic model assumes that individual dust grains are generated at a constant rate due to the collisional cascade initiated by the irregular satellite population. In practice, we achieve this continuous generation by offsetting the initial time of individual grain trajectories from our library of integrations. Working within the co-rotating frame ensures that new dust grains will always be produced in the Hill sphere of the planet. We then track the dynamical evolution of these dust grains to calculate density profiles of the dust population, as described in the next section.
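A schematic version of this time-offsetting construction, with a toy trajectory library standing in for the actual integration output, might look as follows; all array shapes and values are placeholders for illustration only.

```python
# Schematic constant-rate source model: each grain is assigned a birth time
# drawn uniformly over the simulated interval, and its present position is read
# from a pre-computed trajectory at age = t_now - t_birth.
import numpy as np

rng = np.random.default_rng(1)
n_grains, n_steps, dt = 1_000, 500, 10.0               # toy library, 10 yr sampling
library = rng.normal(size=(n_grains, n_steps, 3))      # placeholder (x, y, z) tracks

t_now = n_steps * dt
birth_times = rng.uniform(0.0, t_now, n_grains)        # constant production rate
ages = t_now - birth_times
idx = np.minimum((ages / dt).astype(int), n_steps - 1)
snapshot = library[np.arange(n_grains), idx]           # one current position per grain
print(snapshot.shape)
```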
Figure 6: Characteristic dust grain residence time in the Hill sphere (in blue) – derived from numerical integration of orbits – and catastrophic collisional time-scale (in green) – from equation (8), both as functions of \(\beta\) when \(M_{\rm tot}=1\;M_{\rm L}\) and \(\rho=3\;{\rm g/cm^{3}}\), with the other parameters being described by Eq. 8. The intersections of the curves at approximately \(\beta=0.18\) represents the critical size at which a dust grain can decouple from the collisional cascade and be expelled from the Hill sphere.
## 4 Results
The irregular satellite source model for generating dust discs is fundamentally different from the traditional planetesimal disc source population in that it has a localised source region, confined within the planetary Hill sphere, from which the material spreads out slowly. In this section we therefore wish to characterise the observational appearance of the resulting dust population, which we term an ISDD (Irregular Satellite Debris Disc).
### From circumplanetary to circumstellar orbits
Fig. 7 shows a snapshot of the positions of \(N=750,000\) particles with \(\beta=0.1\), integrated for \(10^{3}\) planetary dynamical times (\(\sim\)12,000 yr). This simulation is for a Jupiter-mass planet on a 5.2 au orbit around a solar-mass star. This simulation duration was chosen because it is the time-scale over which a steady state is reached for the shape of the radial profile of the disc. The distribution of dust grains can be divided into circumstellar material and circumplanetary material. The circumstellar material is made up of dust grains that were able to escape from the Hill sphere whereas circumplanetary material represents grains that are still trapped in the Hill sphere. Whether or not a dust grain successfully escapes from the Hill sphere is primarily determined by initial conditions. The role of the zero-velocity curves from the Jacobi formalism in shaping the distribution of the escaped material is clear. Aspects of this dust population, such as azimuthal symmetry, radial profile, ring thickness, and vertical profile will be examined in the following subsections.
It is important to note that the overdensity within the Hill sphere in Fig. 7 is artificial, because the particles that do not escape will be subject to continued collisional evolution not included in this simulation; they are shown here primarily to illustrate the nature of the dust trajectories and their evolution. As we previously saw in Fig. 6, the intersection of the residence time and collisional time-scale occurs at a very short time-scale (<100 yr). This is very short compared to the duration of the simulation (\(\sim\)12,000 yr), so we expect trapped dust grains to still be coupled to the original cascade, grinding down to even smaller sizes until they too are blown out of the Hill sphere. As a result, we would not expect to see a pile-up of material in the form of a circumplanetary disc.
### ISDDs are azimuthally symmetric
Although the dust is initially generated within the Hill sphere, once it escapes, azimuthal symmetry in an ISDD is achieved very quickly. Fig. 8 shows the azimuthal profile of the dust after 100 dynamical time-scales, separated into 20-degree bins for the case of \(\beta=0.15\). We have intentionally excluded material that remains in the planetary Hill sphere so that the global circumstellar disc features could be examined.
We quantify the baseline fluctuations in the azimuthal profile by comparing the mean value to the standard deviation. We conclude for four representative values of \(\beta\) in the range \(0.15\leq\beta\leq 0.30\) that the ISDDs are azimuthally symmetric since the standard deviations are small compared to the mean. For example, in the case of \(\beta=0.15\), the average number of dust grains per bin is 185 while the standard deviation is 15, so we conclude that the variations are small. Similar results are obtained for other values of \(\beta\). Thus, despite being fed from a single localized source region, dust rings generated in this manner retain the appearance of azimuthal symmetry.
Another way to evaluate the azimuthal symmetry of the disc is to calculate the average dust grain radius as a function of azimuthal angle \(\theta\). In Fig. 8, we calculate both the mean and the median radius for the \(\beta=0.15\) disc. We find that the mean radius (in blue) is approximately 50 au and that the median radius (in green) is approximately 25 au. It is not surprising to see that the mean radius is higher than the median radius. While the vast majority of dust grains spend their time close to the planet's orbit bouncing around the edges of the Jacobi contours, a small fraction of dust grains will slowly diffuse out of the system due to the influence of radiation pressure and stirring from the planet, biasing the mean to higher radii.
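The two diagnostics above (per-wedge counts and per-wedge mean/median radius) reduce to a few lines of array manipulation; the synthetic radii and azimuths below are placeholders for the actual escaped-grain output and are shown only to illustrate the bookkeeping.

```python
# Sketch of the azimuthal-symmetry diagnostics: 20-degree wedges, per-wedge
# counts, and per-wedge mean/median stellocentric radius.
import numpy as np

rng = np.random.default_rng(2)
r = 5.7 + rng.exponential(3.0, 20_000)             # placeholder radii (au)
theta = rng.uniform(0.0, 2.0 * np.pi, r.size)      # placeholder azimuths (rad)

edges = np.radians(np.arange(0, 361, 20))
counts, _ = np.histogram(theta, bins=edges)
wedge = np.digitize(theta, edges) - 1
mean_r = np.array([r[wedge == k].mean() for k in range(len(counts))])
median_r = np.array([np.median(r[wedge == k]) for k in range(len(counts))])

print(counts.mean(), counts.std())                 # symmetric if std << mean
print(mean_r.round(1), median_r.round(1))
```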
### ISDDs exhibit thin ring morphology
Astronomers commonly quantify ring thickness as the ratio of ring width to ring radius \(\Delta R/R\). Specifically, a ring may be characterized as 'thin' if \(\Delta R/R<0.5\)(Hughes et al., 2018). This definition takes into consideration the great diversity of size scales that debris discs are observed to span and allows us to compare systems on large and small absolute scales. However, the ratio requires us to specify how we define \(\Delta R\) and \(R\). We first fit a function to the distribution and find that several of the parameters naturally characterize the ring width and ring radius.
Specifically, we fit a piecewise function to the radial profile that contains three physically motivated regimes. Region I (\(r_{13}<r_{\rm A}\)) is simply a one-sided Gaussian that describes the sharp inner edge of the ring. This feature is to be expected since the forbidden zones from the Jacobi contours prevent dust grains from wandering any closer to the star than one planetary semi-major axis plus one Hill radius (\(a+R_{\rm H}\)). Region II (\(r_{\rm A}<r_{\rm 13}<r_{\rm B}\)) is an exponential decay function that describes the initial drop off in surface density that occurs as we move outward away from the peak. The peak tends to be just outside of the Jacobi contours because the dust grains spend a lot of time bouncing around the edges of the zero-velocity curves before they diffuse out of the system. Lastly, Region III (\(r_{\rm 13}>r_{\rm B}\)) is a continuation of Region II with an added exponential term to soften the drop-off and match the more gradual decay of the outer edge. This feature is also expected since we are investigating moderate radiation pressure strengths (\(\beta=0.15-0.30\)) that are strong enough to perturb the dust grains from a circumplanetary orbit to a circumstellar orbit, but not strong enough to immediately eject the grains from the system. The gradual tail of the radial distribution represents dust grains that are in the process of slowly diffusing out of the system. A sample fitted radial profile for \(\beta=0.15\) is shown in Fig. 9. The resulting functional form is
\[N(r_{13})=\left\{\begin{array}{ll}\frac{N_{0}}{r_{13}}\exp\left(-\frac{(r_{13}-r_{\rm A})^{2}}{2\sigma_{1}^{2}}\right),&r_{13}\leq r_{\rm A}\\ \frac{N_{0}}{r_{13}}\exp\left(-\frac{r_{13}-r_{\rm A}}{\sigma_{2}}\right),&r_{\rm A}<r_{13}<r_{\rm B}\\ \frac{N_{0}}{r_{13}}\exp\left(-\frac{r_{13}-r_{\rm A}}{\sigma_{2}}\right)+\frac{N_{1}}{r_{13}}\left[1-\exp\left(-\frac{r_{13}-r_{\rm B}}{\sigma_{3}}\right)\right],&r_{13}>r_{\rm B}\end{array}\right. \tag{9}\]
where \(N_{0}\) and \(N_{1}\) are normalization constants for their respective terms, \(r_{\rm A}\) is the peak of the distribution, \(r_{\rm B}\) is the transition point between Regime II and Regime III, \(\sigma_{1}\) is the standard deviation of the single-sided Gaussian, and \(\sigma_{2}\) and \(\sigma_{3}\) are the characteristic lengths of their respective exponential decay terms.
In order to cast this in the observational variables \(\Delta R\) and \(R\), we take \(R\equiv r_{\rm A}\) since it naturally describes the peak of the distribution,
and \(\Delta R\equiv\sigma_{1}+\sigma_{2}\) since those lengths each characterize the drop-off in either direction away from the peak. Thus, in terms of the function parameters, \(\Delta R/R\equiv(\sigma_{1}+\sigma_{2})/r_{A}\).
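Equation (9) and the associated width definition translate directly into code; the parameter values in the sketch below are placeholders loosely inspired by the \(\beta=0.15\) fit and are not the published best-fitting values.

```python
# Transcription of the three-regime profile of Eq. (9) together with the
# normalized ring width (sigma_1 + sigma_2) / r_A defined in the text.
import numpy as np

def radial_profile(r, n0, n1, r_a, r_b, s1, s2, s3):
    r = np.asarray(r, dtype=float)
    inner = (n0 / r) * np.exp(-((r - r_a) ** 2) / (2.0 * s1 ** 2))
    mid = (n0 / r) * np.exp(-(r - r_a) / s2)
    outer = mid + (n1 / r) * (1.0 - np.exp(-(r - r_b) / s3))
    return np.where(r <= r_a, inner, np.where(r < r_b, mid, outer))

def normalized_ring_width(r_a, s1, s2):
    return (s1 + s2) / r_a

pars = dict(n0=1.0, n1=0.5, r_a=5.67, r_b=5.98, s1=0.15, s2=0.70, s3=20.0)
r = np.linspace(4.0, 40.0, 400)
profile = radial_profile(r, **pars)
print(normalized_ring_width(pars["r_a"], pars["s1"], pars["s2"]))   # ~0.15, a 'thin' ring
```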
We also apply the fit to multiple homogeneous discs of different values of \(\beta\) (\(\beta=0.15-0.30\)). The same piecewise function was fitted to all simulations and, for each value of \(\beta\), it smoothly connected the surface density profile and defined a reasonable ring width. We measure their normalized ring widths and determine the uncertainty by marginalizing out the seven parameters in the function described above in favor of \(\Delta R/R\). A 3\(\sigma\) confidence interval was used to determine the upper and lower limits of the ensuing error bars. Those results are summarized in Fig. 10. The general trend is that the thickness of the ring grows with increasing \(\beta\). This occurs because higher values of \(\beta\) correspond to stronger radiation pressure, which is able to push dust grains into larger and more eccentric orbits more efficiently.
### An exponential tail
As mentioned in the previous section, the radial distribution has a gently sloped exponential tail. This tail is a generic feature of all models that take radiation pressure into account. However, our model generates larger particles than a standard source model, since the collisional cascade is truncated by the Hill sphere residence time, as shown in Fig. 6. The slope of the fiducial \(\beta=0.15\) ISDD is characterized in Fig. 11. While the data plotted span 0 to 100 au, we calculated the slope only for the portion between 20 and 100 au. The characteristic length came out to be 20.0 au, several times the planet semi-major axis, indicating a relatively slow drop-off.
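The tail measurement amounts to a straight-line fit in semilog space over 20-100 au; the synthetic profile in the sketch below is built with a 20 au e-folding length purely so that the fit recovers it, and is not the simulated profile itself.

```python
# Sketch of the tail measurement: fit a straight line to log(N) over 20-100 au.
import numpy as np

rng = np.random.default_rng(3)
r = np.linspace(5.0, 100.0, 96)
n = 1.0e3 * np.exp(-r / 20.0) * rng.lognormal(0.0, 0.05, r.size)   # toy profile

mask = (r >= 20.0) & (r <= 100.0)
slope, intercept = np.polyfit(r[mask], np.log(n[mask]), 1)
print(-1.0 / slope)   # recovered characteristic decay length, ~20 au
```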
### ISDDs exhibit a toroidal shape
We examined the vertical structure of the simulated disc in addition to the radial structure. Specifically, we plotted the dust grain abundance as a function of height \(z\) above or below the midplane of the disc, as seen in Fig. 12. In addition, we fit a standard Gaussian function to the vertical distribution since the standard deviation would naturally translate to a scale height. We find that the scale height of \(H=0.78\) au for the \(\beta=0.1\) toy model is comparable to both the ring thickness (\(\Delta R=0.64\) au) and Hill radius (\(R_{\rm H}=0.35\) au).
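The scale-height measurement is a one-parameter-width Gaussian fit to the vertical histogram; the synthetic heights in the sketch below assume the quoted \(H=0.78\) au purely for illustration.

```python
# Sketch of the scale-height measurement: histogram the heights z and fit a
# Gaussian; its standard deviation is read off as H.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
z = rng.normal(0.0, 0.78, 50_000)                     # toy heights above the midplane (au)

counts, edges = np.histogram(z, bins=80)
centres = 0.5 * (edges[:-1] + edges[1:])
gauss = lambda x, amp, sigma: amp * np.exp(-x ** 2 / (2.0 * sigma ** 2))
(amp, sigma), _ = curve_fit(gauss, centres, counts, p0=[counts.max(), 1.0])
print(abs(sigma))                                     # recovers H ~ 0.78 au
```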
We attribute the sharp inner edge of the torus to the forbidden zone predicted by the Jacobi constant. Recall from Fig. 2 that particles of a certain Jacobi constant are not allowed to exist in certain spaces in the restricted three-body problem. For physically relevant values of \(\beta\), this region takes the shape of an annulus along the orbit of the planet, approximately one Hill diameter in width. The inner edge of our ISDD represents dust grains bouncing around the edges of the forbidden zone.
## 5 Comparison to observations
One motivation for this study is the presence of narrow ring-like structures in some observed debris disc systems. The narrowness of the imaged rings implies a mechanism for confining either the dust or its parent population. In our model, this is a consequence of the orbital evolution regulated by the Jacobi constant, which sets a hard inner edge on the distribution. The outer profile is more gradual as the dust spirals out, and here we compare this theoretical expectation to the properties of the best-quantified observed systems.
Figure 7: Position of test particles moving under the influence of radiation pressure for a value of \(\beta=0.15\) after 1,000 dynamical time-scales (\(\sim\)12,000 years) of integration. This simulation is for a Jupiter-mass planet on a 5.2 au orbit around a solar-mass star. This simulation duration was chosen because it is the time-scale over which a steady state is reached for the shape of the radial profile of the disc. Once a steady state is reached in the disc, the shape of the radial profile remains the same, but the amplitude decreases. The zero-velocity curves for \(C_{J}=2.824\) are plotted in blue. The location of the star and planet are denoted by a yellow star and orange dot respectively. The right panel shows a face-on view of the disc as a whole. The left panel zooms in on the vicinity of the planet, showing an overdensity of material in the Hill sphere. These are particles whose trajectories remain bound over the course of the simulation. In reality, these particles have orbited the planet for hundreds of collisional times and will have been ground down to much smaller sizes. Therefore, it is important to note that the overdensity within the Hill sphere is not physical, and is shown here primarily to illustrate the nature of the dust trajectories and their evolution. When viewed as a synthetic image, the panel on the right should have the circumplanetary excess reduced by a factor of several hundred, at least.
### Fomalhaut
The existence of a circumstellar disc around the 440 Myr old A3V star Fomalhaut has been known for a long time due to the infrared excess in its spectrum. Fomalhaut is one of the best-studied debris discs, due to its distance from Earth being only 7.7 pc and the fact that it is one of the closest systems that is not edge on. The Fomalhaut debris disc was first directly imaged by Kalas et al. (2005) using the Hubble Space Telescope, revealing a sharp inner edge and the central star being offset from the disc geometric centre. They derived a flux profile for Fomalhaut by fitting a scattered light model of an inclined, optically thin, belt of dust to observational data, as shown in Fig. 13. The best-fitting value for the power law profile describing the inner edge of the belt is proportional to \(r^{1.09}\) whereas that of the outer belt scales with \(r^{-4.6}\). Similarly, when an exponential profile is used, the inner edge is proportional to \(\exp(0.08r)\) while the outer edge scales with \(\exp(-0.03r)\), where \(r\) is in au. They define the inner edge as 120-140 au and the outer edge as 140-158 au.
Since Kalas et al. (2005) did not explicitly measure a ring width for Fomalhaut, we extrapolate one from their distribution by defining Fomalhaut's normalized ring width to be equal to its full width at half maximum (FWHM) divided by the peak. Under this definition, Fomalhaut's flux profile has a ring width of \(\Delta R/R=0.191\), which fits the definition from Hughes et al. (2018) of a 'narrow' ring as one having a normalized ring width \(\Delta R/R<0.5\).
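This FWHM-based definition can be evaluated mechanically from any radial profile; the Gaussian toy profile in the sketch below is a stand-in for the actual Fomalhaut flux profile and its width parameter is an assumption for illustration.

```python
# Sketch of the FWHM-based width: locate the half-maximum crossings of the
# radial flux profile and divide the enclosed width by the peak radius.
import numpy as np

r = np.linspace(100.0, 180.0, 801)
flux = np.exp(-(r - 144.0) ** 2 / (2.0 * 12.0 ** 2))   # placeholder flux profile

above = np.where(flux >= 0.5 * flux.max())[0]
fwhm = r[above[-1]] - r[above[0]]
print(fwhm / r[np.argmax(flux)])                        # normalized ring width Delta R / R
```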
In order to place the Kalas observations within our paradigm, we fit our function to the observed radial distribution for Fomalhaut obtained by Kalas et al. (2005). In Fig. 13, the red
Figure 8: _Top:_ Histogram of dust grain distribution as a function of azimuthal angle \(\theta\) for \(\beta=0.15\). The spike located at \(\theta=0^{\circ}\) has been subtracted off, since leftover material in the planetary Hill sphere is not part of the circumstellar disc. The average number of dust grains per bin is 185 with a standard deviation of only 15 grains, denoted by the horizontal blue line and blue shaded region. _Bottom:_ Mean and median radius as a function of azimuthal angle \(\theta\). The mean radius is approximately 50 au at nearly all angles while the median radius is approximately 25 au at all angles, showing that the disc is very azimuthally symmetric.
Figure 10: Normalized ring widths (\(\Delta R/R\)) for various values of \(\beta\) for a system with a 1 M\({}_{\odot}\) star and a \(10^{-3}\) M\({}_{\odot}\) planet. The general trend points to broader ring widths and therefore shallower drop-offs for higher values of \(\beta\). This is not surprising since higher values of \(\beta\) correspond to stronger radiation pressure. Thus, broader ring widths show dust grains that are actively spiraling out of the system. We also note that the error bars tend to be larger for larger values of \(\beta\). This is also not surprising since these discs tend to have smaller sample sizes by the end of a controlled simulation, since the stronger radiation pressure ejects a higher percentage of grains from the system.
Figure 9: Surface density profile for a system with a 1 M\({}_{\odot}\) star and a \(10^{-3}\) M\({}_{\odot}\) planet as a function of radius for \(\beta=0.15\). Function fit (in red) to the distribution. The radii that define the boundaries between the three regimes are \(r_{A}=5.67\) au and \(r_{B}=5.98\) au. There is a sharp inner edge, indicating the existence of a gap in the disc created by the forbidden zone predicted from the restricted three-body problem. There is a more shallow decline at large distances caused by dust grains taking their time spiraling out of the system due to stellar radiation pressure. The characteristic lengths of the Gaussian and exponential fits allow us to quantify the width of the ring. These width measurements can be compared to observations since only the brightest, densest regions would be observable. For this particular value of \(\beta=0.15\), we find the normalized ring width to be \(\frac{\Delta R}{R}=0.148\,^{+0.023}_{-0.025}\), thus meeting the criterion for a thin ring.
data points are the raw data obtained by Hubble Space Telescope observations, the blue curve is the fit performed by Kalas et al. (2005), and the black curve is our function's fit to the same data. If the Fomalhaut debris disc were formed by a hypothetical planet's irregular satellite cloud, we can estimate that planet's semi-major axis by scaling up our simulated system. Specifically, we scale up the simulated system from Fig. 9 so that the peak of the simulated disc's radial profile (5.675 au) matches the peak of the Fomalhaut debris disc's radial profile (144 au). Our model predicts that the planet feeding this disc would have a semi-major axis of 133 au. We fit the piecewise function to the radial distribution to characterize both the inner edge and outer edge. However, our inner edge is best described by a single-sided Gaussian as opposed to either a power law or an exponential tail. We find that the inner edge has a characteristic length of 2.23 au and that the outer edge has an exponential decay scale of 20.6 au. The inner edge behavior is very similar to what Kalas et al. (2005) found in their fit for Fomalhaut, but our function has a more gently-sloped outer edge than their fit. If we measure the ring width of the Fomalhaut disc, we get \(\Delta R/R=0.215\), not significantly different from that of Kalas et al. (2005). Thus, the observed profile of the Fomalhaut debris disc is well fit by that expected for an ISDD.
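For concreteness, the scaling is simply the ratio of the fitted peak radii applied to the simulated planet's 5.2 au orbit,

\[a_{\rm pl,\,Fom}\approx a_{\rm pl,\,sim}\,\frac{r_{\rm peak,\,Fom}}{r_{\rm peak,\,sim}}=5.2~{\rm au}\times\frac{144~{\rm au}}{5.675~{\rm au}}\approx 132~{\rm au},\]

which agrees with the quoted 133 au to within the rounding of the fitted peak radius.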
As a reminder, the thickness of the forbidden zone from the restricted three-body problem governs not only the thickness of any ring gaps that form, but also defines the size of the offset between the peak of the radial distribution and the location of the planet (e.g., Chiang et al., 2009; Wisdom, 1980). Such a prediction can be made because the planet is located approximately halfway between the inner and outer edge of the forbidden zone, as shown in Fig. 5, for example. In the case of a finite \(\beta\), we expect the peak of the radial distribution to occur just outside of the outer edge of the forbidden zone. Such a phenomenon should occur because the outer edge of the forbidden zone is also a zero-velocity curve upon which a dust grain decelerates and comes to rest in the co-rotating frame, thus statistically spending more time near the forbidden zone. Therefore, we expect the planet to be a distance of one-half of the forbidden zone thickness interior to the location of the peak.
In order to physically interpret our model fit to the Fomalhaut observations, we first estimate which value of \(\beta\), for a single uniform-\(\beta\) debris disc, best corresponds to the parameters derived from our fit. We start by plotting two key parameters, \(\sigma_{1}\) and \(\sigma_{2}\), as functions
Figure 11: Semilog radial profile of fiducial ISDD for \(\beta=0.15\) showing the entire disc from 0 to 100 au. An exponential decay function (blue line) was fit to the portion of the data from 20 to 100 au to determine the characteristic length of the decay. We find that the profile has a characteristic decay scale of 20.0 au.
Figure 12: Distribution of dust grains as a function of height \(z\). A Gaussian function (in red) has been fit to the distribution. The gray horizontal lines represent one scale height (\(H=0.78\) au) above and below the midplane.
Figure 13: Fomalhaut flux profile as a function of radius (Kalas et al., 2005). The red points are raw observational data. The fit of Kalas et al. (2005) is in blue, and our functional fit is in black. They showed that the inner edge of the belt can be modeled as either a power law fit with an index of \(\alpha=10.9\) or an exponential growth proportional to \(\exp(0.08r)\), where \(r\) is in units of au. Additionally, they showed that the outer edge of the belt can be modeled as either a power law fit with an index of \(\alpha=-4.6\) or an exponential decay proportional to \(\exp(-0.03r)\), where \(r\) is in units of au. Their model predicts that the planet sculpting this disc will have a semi-major axis of 133 au. By scaling up our simulated disc from its peak of 5.675 au to Fomalhaut’s peak of 144 au, we also predict the location of the underlying planet to be 133 au.
of \(\beta\), as shown in Fig. 14. These two parameters were chosen because they directly determine the measured width of the ring. As one can see, both the inner edge characteristic length and outer edge characteristic length become larger with increasing \(\beta\). The relatively flat distributions show that our model is quite robust and can replicate Fomalhaut observations for a wide variety of radiation pressure strengths, specifically \(\beta\leq 0.3\). This is possible because while there is a very weak dependence of \(\sigma_{2}\) on \(\beta\), \(\sigma_{1}\) gets much larger at larger \(\beta\).
#### 5.1.1 Fomalhaut b
In addition to the debris disc ring, Kalas et al. (2008) also detected a point source that was proposed as Fomalhaut b, a Saturn-mass planet responsible for sculpting the inner edge of the debris disc. In this scenario (Chiang et al., 2009), the planet would have a semi-major axis \(\sim 115\) au and the inner edge of the dust ring would trace the edge of the chaotic region surrounding the planetary orbit. This claim was controversial because the colours of Fomalhaut b showed little evidence for thermal emission from a giant planet, and were far more consistent with pure scattered light from the star. The reality of the source detection itself has been independently confirmed (Currie et al., 2012; Galicher et al., 2013) but further observations by Kalas et al. (2013) reveal several orbital features that make the sculpting planet hypothesis unlikely. The orbit of Fomalhaut b appears to be highly eccentric (\(e\sim 0.8\)), especially compared to the eccentricity of the disc (\(e=0.12\pm 0.01\)) (MacGregor et al., 2017), so that it would pass through the debris disc if it were not inclined at \(\sim 17^{\circ}\) to the disc. A planet on such an orbit would be unlikely to gravitationally sculpt the observed structure, as the high eccentricity and nonzero inclination do not correspond to the correct orbital geometry to maintain the original model. Nor is an object on this orbit likely to be the source of an irregular satellite debris disc, at least according to our model, due to the disparities in both eccentricity and semi-major axis. Kalas et al. (2013) estimate the semi-major axis of Fomalhaut b to be 177 au, much larger than that of the planet we propose, which would be located at 133 au (though their margin of error is quite large at \(\pm 68\) au).
In order to explain the colours of the original Fomalhaut b hypothetical planet, Kennedy & Wyatt (2011) developed a model starting from a similar hypothesis as ours. They constructed a collisional cascade of irregular satellites within a fraction of the Hill sphere of a giant planet, taking into account the strength versus self-gravity of the satellite. They took into account both radiation pressure and Poynting-Robertson (PR) drag for the resulting dust grains. Kennedy & Wyatt (2011) focussed on the appearance of dust confined within the Hill sphere, as a source population for the scattered light observed from Fomalhaut b. In our model, we focus on the dust that has escaped into heliocentric orbit, as the origin of the debris disc itself - not the point source.
### HR 4796A
HR 4796A is an 8 Myr old A0V star that hosts a well-studied debris disc at a distance of 72.8 pc from Earth. The disc has an exceptionally high infrared excess of \(f=L_{\rm IR}/L_{*}=4.2\times 10^{-3}\)(Jura, 1991). HR 4796A has been imaged in multiple wavelengths including the sub-mm, the mm, mid-infrared, near-infrared, and visible. Combining these different wavelength regimes permits extensive modelling of the spectral energy distribution (SED) of the system. A complete understanding of the SED leads to understanding of the underlying dust composition of the disc. Previous studies have resolved a circular disc structure with a radius of \(\sim 77\) au, with a sharply peaked radial profile, and a \(\sim 1\) au offset from the location of the star. We can learn more about the dynamics of the system from detailed modeling of the exact geometry.
#### 5.2.1 HR 4796A ring width
In 2009, the Hubble Space Telescope resolved the debris disc around HR 4796A and found that it has a ring width of 18.5 au and a radius of 76 au (Schneider et al., 2009). Thus, its normalized ring thickness is \(\Delta R/R=0.25\), comparable to that of Fomalhaut and our simulated disc. All three are well within the definition of Hughes et al. (2018) for a narrow ring.
We compare our model to observations of HR 4796A made by Schneider et al. (2009) using the Hubble Space Telescope Imaging Spectrograph. Specifically, we fit our three-regime piecewise function to the intensity profile for a direct one-to-one comparison and extract a normalized ring width, as shown in Fig. 15. If the HR 4796A debris disc were formed by a hypothetical planet's irregular satellite cloud, we can estimate that planet's semi-major axis by scaling up our simulated system. Specifically, we scale up the simulated system from Fig. 9 so that the peak of the simulated disc's radial profile (5.62 au) matches the peak of the HR 4796A debris disc's radial profile (76.5 au). Our model predicts that the planet feeding this disc will have a semi-major axis of 70.8 au. We find that our function does an overall good job of fitting the inner edge of the disc, but falls to zero more quickly than the HST data. Mathematically, our function drops to zero quickly since the inner edge is defined by a single-sided Gaussian. The discrepancy may be due to background noise in the HST data. As for the outer edge, our function initially drops off a little bit more quickly than the HST data. As a result, the normalized ring width is slightly lower than that derived from the HST observations, but there is not a significant difference. We once again used the full
Figure 14: Inner edge characteristic length and outer edge characteristic length for various values of \(\beta\) for a system with a \(1\) M\({}_{\odot}\) star and a \(10^{-3}\) M\({}_{\odot}\) planet. These simulations have the same initial conditions as the simulations shown in Fig. 10. The inner edge, \(\sigma_{1}\), generally has greater lengths with increasing \(\beta\). Interestingly, the outer edge, \(\sigma_{2}\), generally has relatively constant length as a function of \(\beta\). However, the sum of \(\sigma_{1}\) and \(\sigma_{2}\) shows that ring width increases as a function of \(\beta\). The data begin to become unreliable and noisy at \(\beta=0.3\) due to a small surviving sample size.
width at half maximum (FWHM) of the radial profile to determine the ring width, and find that the ring width for HR 4796A is \(\Delta R/R=18.3\) per cent. We note that HR 4796A, Fomalhaut, and our simulated disc all fall within the definition of a 'thin ring' as defined by Hughes et al. (2018).
#### 5.2.2 HR 4796A blowout size
We compare the particle sizes predicted by our model to those predicted by other models for the HR 4796A system. We can calculate the dust grain size corresponding to \(\beta=0.1\) for the stellar parameters of HR 4796A, namely the luminosity of 23 L\({}_{\odot}\) and mass of 2.18 M\({}_{\odot}\), using Equation 3. This specific value of \(\beta\) was chosen because it fulfills the criterion laid out in Appendix B to ensure overflow through the L\({}_{2}\) Lagrange point. Rearranging Equation 3 to solve for the blowout size \(D_{\rm bl}\), we obtain
\[D_{\rm bl}\approx(21.4~{}\mu\rm{m})\left(\frac{\beta}{0.1}\right)^{-1}\left( \frac{L_{*}}{23~{}L_{\odot}}\right)\left(\frac{M_{*}}{2.18~{}M_{\odot}}\right) ^{-1}. \tag{10}\]
The result gives us a dust grain diameter of \(D\approx 21.4~{}\mu\rm{m}\). Chen et al. (2020) derived a similar grain size of 25 \(\mu\rm{m}\) by using MCFOST on SPHERE SPF data. Milli et al. (2017) found the grain size range 17.8-30 \(\mu\rm{m}\) fit the data depending on the exact scattering model used.
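Equation (10) can be evaluated for a range of \(\beta\) to connect grain size to the strength of radiation pressure; the sample values below (including the critical \(\beta=0.18\) and the classical blowout \(\beta=0.5\)) are chosen purely as a numerical check of the scaling.

```python
# Numerical check of Eq. (10) for the stellar parameters of HR 4796A.
def grain_size_um(beta, l_star_lsun=23.0, m_star_msun=2.18):
    return 21.4 * (0.1 / beta) * (l_star_lsun / 23.0) / (m_star_msun / 2.18)

for beta in (0.10, 0.18, 0.30, 0.50):
    print(f"beta = {beta:.2f} -> D ~ {grain_size_um(beta):5.1f} micron")
```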
A general rule of thumb states that dust grains are best observed in electromagnetic radiation at wavelengths that are approximately equal to their size (e.g., Hughes et al., 2018). This phenomenon can be explained as a balance between two opposing processes. We first consider the fact that the smallest grains dominate the grain size distribution and thus contribute the most to the total cross section. However, there is a competing effect where grains can only efficiently emit at wavelengths that are smaller than their actual size. A sharp cutoff in this emission efficiency occurs at larger wavelengths. All in all, the total light emitted will receive contributions from the smallest grains that are able to efficiently emit at that wavelength. The observing wavelengths versus \(\beta\) are highlighted in Table 1. Although debris discs were traditionally detected using IR excesses, we are interested in comparing our simulated discs to scattered light images since IR excesses do not give us a geometric picture. Imaging capabilities can vary strongly amongst the different bands in Table 1.
### Dependence on planet mass
Generally speaking, we expect the morphologies of ISDDs to depend on the ratio of planet mass to stellar mass \(M_{\rm p}/M_{*}\). This dependence arises because the Jacobi contours of the restricted three-body problem depend only on the ratio of the masses of the secondary body to the primary body \(M_{2}/M_{1}\), both with and without radiation pressure. For example, a Saturn-like planet orbiting around an M dwarf could have the same normalized Jacobi contours as a Jupiter-like planet orbiting around a G-type star.
Since the thickness of the Jacobi forbidden zone is roughly the same size as the diameter of the Hill sphere, we expect any ensuing ring gaps to be a similar size to the Hill diameter as well. This effect would likely only be noticeable in low radiation pressure scenarios, since those are the only cases where a significant number of dust grains escape interior to the planet's orbit and would therefore produce an observable ring gap in the radial distribution. Appendix C describes in greater detail the theoretical foundation of exactly how the mass ratio correlates with the critical \(\beta\).
### General Predictions for Other Systems
#### 5.4.1 Wavelength Dependence
Different observing wavelengths will be able to probe different structures of potentially the same debris disc. However, in any given system, grain size is just a guideline for observing wavelength. Crudely, wavelength corresponds to grain size, so we do have broad predictions about how things should appear. For example, since observing wavelength is expected to be directly proportional to grain size and therefore inversely proportional to \(\beta\), we predict that the long-wavelength infrared (\(\lambda=8-15~{}\mu\rm{m}\)) will find singular thin rings. Due to the stronger influence of radiation pressure, mid-wavelength infrared (\(\lambda=3-8~{}\mu\rm{m}\)) will detect comparatively broader rings than found in the long-wavelength infrared, but still one singular ring. However, in the far infrared (\(\lambda=15~{}\mu\rm{m}-1~{}mm\)), which can be affected by values of \(\beta\) as low as 0.002, we expect there to be a gap in the ring since Roche lobe overflow through the L\({}_{1}\) Lagrange point becomes almost equally energetically favorable. In this instance, dust escapes both inward and outward through L\({}_{1}\) and L\({}_{2}\), but the Jacobi forbidden zone prevents the two populations from mixing, giving rise to the gap in the ring.
We calculate in Appendix A that an initial irregular satellite population mass on the order of 1 M\({}_{\oplus}\) is necessary to ensure that the Hill sphere collisional time-scale (\(\sim\)135 Myr) is less than the age of the
\begin{table}
\begin{tabular}{c c c} \hline \hline Division name & Wavelength & \(\beta\) \\ \hline \hline Mid-wavelength infrared & 3 - 8 \(\mu\rm{m}\) & 0.21 - 0.55 \\ \hline Long-wavelength infrared & 8 - 15 \(\mu\rm{m}\) & 0.11 - 0.21 \\ \hline Far infrared & 15 \(\mu\rm{m}\) - 1 mm & 0.002 - 0.11 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of correspondence between observing wavelength and relative strength of radiation pressure \(\beta\).
Figure 15: The best fit of our three-regime piecewise function fitted to an HR 4796A flux profile from Schneider et al. (2009). A dashed red line is plotted to indicate 50 per cent of the peak value to help visualize the FWHM. We find that the normalized ring width of the system is \(\Delta R/R=18.3\) per cent. Our model predicts that the planet sculpting this disc will have a semi-major axis of 70.8 au.
system, 440 Myr in the case of Fomalhaut. While 1 M\({}_{\oplus}\) of irregular satellites does seem quite large compared to the \(10^{-3}\) M\({}_{\rm L}\) worth of irregular satellites estimated to have been found around each giant planet in our Solar System (Bottke et al., 2010), a Jupiter-mass planet found at a semi-major axis consistent with the scale of the Fomalhaut system (\(\sim\)140 au) orbiting around a 2 M\({}_{\odot}\) star would have a Hill sphere \(\sim 4\times 10^{4}\) times more voluminous than that of Jupiter and a commensurate cross section.
#### 5.4.2 Evolutionary Implications
At first glance, irregular satellite collisions would appear to not explain the debris discs found around such aged systems as Fomalhaut (440 Myr), due to how quickly they grind down to dust and dissipate (\(\sim\)tens to hundreds of thousands of years). Thus, irregular satellite debris discs would only be bright enough to be detectable in their infancy. However, irregular satellites are not formed in the same way as regular satellites, and thus do not have the same age as their host planet. In our own Solar System, they are thought to be the result of dynamical capture during late-stage rearrangement of the giant planet orbits (Nesvorny et al., 2007), hundreds of Myr after the formation of the Solar System. This delay may help us explain the age of some older debris disc systems. Our model is not intended to explain every debris disc, but is focussed on the curious geometry of the thin ring systems. Within the proposed context, the observed thin rings indicate systems that have recently emerged from a period of dynamical excitation which resulted in the capture of irregular satellites around giant planets.
The size of the dust in such discs may also be a function of time, because the \(\beta\) of the escaping material is set by the balance between residence and collision times. The latter will increase as the mass reservoir in the source population grinds down, moving the characteristic \(\beta\) of the escaping particles to lower values, and therefore increasing the size of the particles in the disc.
## 6 Conclusions
In this paper, we explored the effects of including radiation pressure in the classical restricted three-body problem. We found that the traditional Roche lobe overflow can be replaced by Lagrange point L\({}_{2}\) overflow for a sufficiently high \(\beta\) at a given planet-to-star mass ratio \(M_{2}/M_{1}\). Sample orbital integrations reveal that individual dust grains typically trace out 'flower petal' orbits, coming to rest on the zero-velocity curves for some time.
We assumed that the dust grains in our model originate from collisions between the giant planet's irregular satellites. We motivated our initial conditions based on observations of the Solar System giant planets' irregular satellites today, as well as on what previous studies determined about their dynamical history. We describe the size distribution of bodies ensuing from irregular satellite collisions as a collisional cascade power-law distribution. We calculate the catastrophic collisional time-scale and compare it to an empirically determined residence time-scale to determine the critical \(\beta\) at which ground-down dust grains can escape the Hill sphere.
Our N-body simulations show that dust grains with a \(\beta\) above \(\beta_{\rm crit}\) quickly escape from the Hill sphere and transition from a circumplanetary orbit to a circumstellar orbit. After a short time, a large population of dust grains achieves an azimuthally symmetric disc appearance. We evaluated this azimuthal symmetry by comparing the fluctuations in the azimuthal profile to the average column density and found that they were low. We also calculated the average radius along a given azimuthal angle \(\theta\) and found that the mean and median radii are consistent across all azimuthal angles.
We fit a piecewise function with a Gaussian inner edge and an exponential outer edge to the radial profile. These functions naturally allowed us to quantify the ring width for various values of \(\beta\). We normalize the ring width by the ring radius, as is standard in the literature (\(\Delta R/R\)), and find that the normalized ring width broadens as a function of \(\beta\). We explain this finding as stronger radiation pressure being able to excite dust grains to more eccentric orbits and therefore broadening the overall distribution. Since the vertical profile of the disc resembles a typical Gaussian, we conclude that the overall shape of the disc is a torus.
We compared our results to observations for the specific systems of Fomalhaut and HR 4796A, but also make general predictions for all systems. For the assumption of uniform density spherical dust grains, there is an inverse relationship between observing wavelength and \(\beta\). We find that the topology of the debris disc is dictated by the original Jacobi forbidden zone contours, so the fundamental parameter is the planet-to-star mass ratio \(M_{2}/M_{1}\).
We test the validity of our radial profile fitting function by applying it to the raw Hubble Space Telescope data of Fomalhaut from Kalas et al. (2005). We obtain very similar results to their fit in terms of inner and outer edge slopes. By defining a ring width for Fomalhaut as its full width at half maximum, we measure its normalized ring width to be 0.191, comparable to our model's \(\Delta R/R=0.13\), both of which are within the 'thin ring' definition of Hughes et al. (2018) of \(\Delta R/R<0.5\). We note that there is an ongoing debate about whether Fomalhaut b is a planet or a transient dust cloud and clarify that, due to its inclined orbital plane with respect to the disc plane, we do not assume Fomalhaut b is the source of the debris disc in our model, but rather some other underlying hidden planet.
For the assumption of a Sun-like star, we make general predictions about distinctions between observing wavelengths in the mid-wavelength infrared, long-wavelength infrared, and far infrared. We address the fact that while Solar System irregular satellite swarms tend to grind down very quickly, on time-scales of tens to hundreds of thousands of years, they can still explain very old, large systems such as Fomalhaut once properly scaled up using Kepler's Third Law, since irregular satellites are not expected to have been captured until the Late Heavy Bombardment period in our Solar System.
## Acknowledgements
This research has made use of NASA's Astrophysics Data System. This research was supported by NASA Grant 443820-HN-21577.
## Data Availability Statement
The data underlying this article will be shared on reasonable request to the corresponding author.
| 不規則衛星は、太陽系の大気惑星を回る小規模の天体で、大Semi-major軸、Eccentricities、およびinclinationsを持つ。過去の研究によると、太陽系における不規則衛星は、現在非常に衝突的に進化した集団であり、数百Myrの経過とともに、初期の質量の約99%を失っている。このような進化は、不規則衛星の過去に、塵の衝突的デブリの集団が形成されたことを示唆しており、その結果、恒星の光が再処理されるため、観測可能な可能性がある。この論文では、このプロセスに対する外太陽系類似体のデブリ盤の痕跡を調べます。放射圧力は、塵粒子の惑星近傍球から解放されるために使用されるパラメータ$\beta$であり、惑星近傍球の周囲に小石の帯がない場合でも、Circumstellar塵環の形成を引き起こします。私たちのシミュレーションされた盤 |
2305.04106 | On the Usage of Continual Learning for Out-of-Distribution
Generalization in Pre-trained Language Models of Code | Pre-trained language models (PLMs) have become a prevalent technique in deep
learning for code, utilizing a two-stage pre-training and fine-tuning procedure
to acquire general knowledge about code and specialize in a variety of
downstream tasks. However, the dynamic nature of software codebases poses a
challenge to the effectiveness and robustness of PLMs. In particular,
world-realistic scenarios potentially lead to significant differences between
the distribution of the pre-training and test data, i.e., distribution shift,
resulting in a degradation of the PLM's performance on downstream tasks. In
this paper, we stress the need for adapting PLMs of code to software data whose
distribution changes over time, a crucial problem that has been overlooked in
previous works. The motivation of this work is to consider the PLM in a
non-stationary environment, where fine-tuning data evolves over time according
to a software evolution scenario. Specifically, we design a scenario where the
model needs to learn from a stream of programs containing new, unseen APIs over
time. We study two widely used PLM architectures, i.e., a GPT2 decoder and a
RoBERTa encoder, on two downstream tasks, API call and API usage prediction. We
demonstrate that the most commonly used fine-tuning technique from prior work
is not robust enough to handle the dynamic nature of APIs, leading to the loss
of previously acquired knowledge i.e., catastrophic forgetting. To address
these issues, we implement five continual learning approaches, including
replay-based and regularization-based methods. Our findings demonstrate that
utilizing these straightforward methods effectively mitigates catastrophic
forgetting in PLMs across both downstream tasks while achieving comparable or
superior performance. | Martin Weyssow, Xin Zhou, Kisub Kim, David Lo, Houari Sahraoui | 2023-05-06T18:00:21 | http://arxiv.org/abs/2305.04106v2 | On the Usage of Continual Learning for Out-of-Distribution Generalization in Pre-trained Language Models of Code
###### Abstract.
Pre-trained language models (PLMs) have become a prevalent technique in deep learning for code, utilizing a two-stage pre-training and fine-tuning procedure to acquire general knowledge about code and specialize in a variety of downstream tasks. However, the dynamic nature of software codebases poses a challenge to the effectiveness and robustness of PLMs. In particular, world-realistic scenarios potentially lead to significant differences between the distribution of the pre-training and test data, _i.e._, distribution shift, resulting in a degradation of the PLM's performance on downstream tasks. In this paper, we stress the need for adapting PLMs of code to software data whose distribution changes over time, a crucial problem that has been overlooked in previous works. The motivation of this work is to consider the PLM in a non-stationary environment, where fine-tuning data evolves over time according to a software evolution scenario. Specifically, we design a scenario where the model needs to learn from a stream of programs containing new, unseen APIs over time. We study two widely used PLM architectures, _i.e._, a GPT2 decoder and a RoBERTa encoder, on two downstream tasks, API call and API usage prediction. We demonstrate that the most commonly used fine-tuning technique from prior work is not robust enough to handle the dynamic nature of APIs, leading to the loss of previously acquired knowledge _i.e._, catastrophic forgetting. To address these issues, we implement five continual learning approaches, including replay-based and regularization-based methods. Our findings demonstrate that utilizing these straightforward methods effectively mitigates catastrophic forgetting in PLMs across both downstream tasks while achieving comparable or superior performance.
## 1. Introduction
Prior research (Krishnan et al., 2017; Krizhevsky et al., 2014; Krizhevsky et al., 2014) on code representation learning leverages a ubiquitous two-stage procedure to effectively train and specialize pre-trained language models (PLMs) for code-related downstream tasks. The first stage, _i.e._, the pre-training, involves optimizing the model using self-supervised learning on a large dataset to acquire general knowledge about code. This pre-training phase allows the model to adapt to downstream tasks in the second stage, _i.e._, the fine-tuning. Previous studies (Krishnan et al., 2017; Krizhevsky et al., 2014; Krizhevsky et al., 2014) typically leverage classical transfer learning methods, which consist of "transferring" the pre-trained knowledge to the target task by fine-tuning the model on a task-specific loss function and data. This approach has been successful in the fields of natural language processing (NLP) (Krizhevsky et al., 2014; Krizhevsky et al., 2014) and deep learning for code (Krizhevsky et al., 2014; Krizhevsky et al., 2014).
In this perspective, previous works (Krizhevsky et al., 2014; Krizhevsky et al., 2014) have primarily focused on stationary settings, neglecting the practical need for models to adapt to changing environments and data over time. Most prior research (Krishnan et al., 2017; Krizhevsky et al., 2014; Krizhevsky et al., 2014) has suggested using transfer learning to fine-tune the model in static environments rather than addressing the dynamic nature of real-world scenarios. In practice, programming languages, software libraries and APIs are prone to change
Figure 1. Continual fine-tuning of a pre-trained language model of code. After pre-training, the model needs to adapt to new out-of-distribution (OOD) program data over time.
and evolution [25, 43, 45], leading to shifts in the distribution of the underlying software data over time, a phenomenon also known as concept drift [37, 60]. By ignoring the actual evolution of software codebases, existing studies [10, 59] have focused on fine-tuning and testing pre-trained models of code using stationary datasets. In practice, the software evolution potentially leads to a noticeable difference between training and test data, _i.e._, distribution shift, that is often not present in these stationary datasets. This phenomenon also occurs when the model is put into production and has to deal with real-world data [4, 23]. We argue that creating datasets that reflect real-world software evolution scenarios and distribution shifts is crucial in order to properly evaluate the **out-of-distribution (OOD) generalization** capability of code models [50]. OOD generalization measures a model's ability to generalize to new, unseen data with a significantly different distribution from the training data. Therefore, evaluating how PLMs of code generalize to OOD software data in software evolution scenarios emerges as a prime issue.
Existing works on OOD generalization designed the datasets based on various distribution shifts in source code data [22, 27]. However, they did not address the problem of continually adapting a pre-trained model of code to streams of OOD data. The prime goal of our study is to explore methods for a model to better adapt to software evolution scenarios. In this context, we ask: _how to effectively continually fine-tune a pre-trained model of code to adapt to new data while still considering the past data?_ (see Fig. 1). Over the past years, **continual learning (CL)**[44, 60] has emerged to address this problem, which is relevant to a wide range of research areas, including computer vision [5, 31, 35, 52] and NLP [6, 8, 53]. Although transfer learning methods are not tailored for continual learning scenarios, they can still operate to fine-tune a model on streams of data. However, these methods lack robustness, leading to unwanted phenomena such as forgetting past information, known as catastrophic forgetting [18, 40]. There exist other strategies, such as retraining the model from scratch using new data, which are also impractical due to the tremendous computational intensity in the pre-training phase. Motivated by these issues of the existing models, we attempt to investigate more robust and scalable fine-tuning techniques. We hypothesize that continual learning techniques may provide significant benefits over classical transfer learning in this context.
In this paper, we delve into the behavior of PLMs of code in a continual fine-tuning scenario, as depicted in Fig 1. Our objective is twofold: (1) to assess the out-of-distribution generalization capability of PLMs of code and (2) to investigate effective continual fine-tuning strategies to fine-tune the models in the presence of a stream of OOD data. Specifically, we address these challenges in a scenario reflecting how typical software codebases may evolve in practice. To this end, we create five OOD domain datasets, each introducing new, unseen APIs by the models during their pre-training phase. These OOD datasets intend to simulate a stream of data for continual fine-tuning, and each dataset entails a significant distribution shift with respect to the pre-training data. As such, our setting establishes an OOD generalization problem. We consider two widely used model architectures: a GPT2-like [46] decoder and a RoBERTa-like [34] encoder pre-trained on code. To eliminate any data leakage between the pre-training and fine-tuning data, we decided to pre-train our models from scratch. We do not study the popular existing PLMs like CodeBERT [17] or CodeT5 [58] because they may be prone to potential data leakage, _i.e._, seeing the OOD data in pre-training, that we cannot precisely control. We evaluate the models on two downstream tasks: API call prediction and API usage prediction. In the first task, the model attempts to predict API calls resulting in a single code token, given code tokens appearing before the call site. On the other hand, the second task involves the generation of the whole API usage resulting in a sequence of code tokens with the same input format as the prior task. Together, these two tasks provide a comprehensive evaluation of the model's performance in different code generation scenarios.
We start by investigating the impact of OOD data on the performance of the GPT2-like decoder on both downstream tasks in a zero-shot setting, _i.e._, without fine-tuning the model on the new OOD data. We find that the model consistently fails to generalize to OOD data by highlighting significant gaps in performance compared to in-distribution data across six evaluation metrics (_e.g._, up to \(75\%\) drop in BLEU score). This finding strongly suggests that pre-training itself is not sufficient and cannot solve OOD generalization in PLMs of code. We then evaluate the models' performance in the continual fine-tuning scenario using classical transfer learning and observe notable catastrophic forgetting. To address this issue, we implement a straightforward yet computationally inefficient cumulative fine-tuning approach by utilizing a replay buffer of infinite size. The results show that the approach drastically mitigates forgetting. Finally, we compare the performance of classical transfer learning to that of replay-based and regularization-based continual learning methods. Replay methods are considered tough-to-beat strategies for continual learning and consist of maintaining a small replay buffer containing samples from previously seen data. During fine-tuning, we use the replay buffer in conjunction with the current OOD training set to fine-tune the PLM. We explore regularization-based methods, including EWC [31], SI [66] and RWalk [9], which add regularization terms to the loss function at fine-tuning to prevent extensive changes in important parameters of the PLM. We chose those methods as they are computationally efficient, well-known, and considered strong baselines in the continual learning literature. We discover that those continual learning methods significantly reduce forgetting while achieving similar or superior effectiveness on both tasks.
To the best of our knowledge, this work constitutes the first initiative to study continual fine-tuning for OOD generalization of PLMs of code. We believe that the impact of continual learning in this research area has the potential to be far-reaching, particularly due to the inherent evolution of software data over time, and we discuss this aspect in more detail in the discussion section of the paper (see Section 5). Our contributions can be summarized as follows:
1. We demonstrate that PLMs of code fail to generalize to OOD data and highlight the need for further investigation in this area.
2. We conduct a study on the behavior of two pre-trained model architectures of code in a continual learning environment, showing that classical transfer learning lacks robustness and is prone to catastrophic forgetting.
3. We compare five continual learning methods, including replay-based and regularization-based approaches, in our continual fine-tuning scenario. We show the superiority of continual learning over classical transfer learning.
4. We provide a large-scale dataset of Java code snippets and their API usage sequences, including pre-training data and a procedure for extracting OOD data.
**Organization.** In Section 2, we discuss preliminaries on continual learning. In Section 3, we go through our experimental design. We present the results of our experiments in Section 4. In Section 5, we discuss the threats to the validity of our study, as well as potential broader impact and future research directions. We introduce the related work on out-of-distribution generalization and continual learning for pre-trained language models in Section 6. Finally, we discuss some future work and conclude this work in Section 7.
## 2. Preliminaries on continual learning
Existing PLMs such as BERT (He et al., 2019) or GPT (Beng et al., 2019) typically operate in transfer learning settings. By using a two-stage pre-training/fine-tuning procedure, these models can be specialized for a wide range of downstream tasks. However, in this setting, the data used for pre-training or fine-tuning are often assumed to be stationary, which is not reflective of real-world situations. In practice, transfer learning methods can still be applied to non-stationary data, such as a stream of data, but this technique is prone to catastrophic forgetting (Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2016).
To address the above issues, prior works (Krizhevsky et al., 2015; Krizhevsky et al., 2016; Krizhevsky et al., 2016; Krizhevsky et al., 2017; Krizhevsky et al., 2018) introduced the concept of _continual learning_ and designed specific techniques to mitigate catastrophic forgetting. The primary assumption for continual learning is that the neural network should possess the ability to adapt to new data or tasks while maintaining stability on previous data or tasks, often referred to as the plasticity-stability dilemma. Continual learning is particularly interesting for OOD generalization problems, as continual learning methods focus on keeping a good plasticity-stability trade-off. Altogether, it has the potential to enhance the generalizability of PLMs to a broader range of data. Continual learning methods often operate in constrained scenarios, and Hadsell et al. (Hadsell et al., 2019) outline a comprehensive list of objectives to balance in continual learning scenarios. There exist three main categories of methods for continual learning as defined in a previous study (He et al., 2019). _Replay-based methods_ store samples from previous experiences, _i.e._, previous streams of data, in a replay buffer or use generative approaches to generate examples similar to those of previous experiences. The replay buffer is used in conjunction with the current experience data to train the model. Replay-based methods help the network gain stability by enabling the network to train on previous samples stored in the replay buffer while adapting to new data. _Regularization-based methods_ add a regularization term to the loss function to prevent catastrophic forgetting by penalizing changes to important neural network parameters. Examples of regularization-based methods include EWC (Krizhevsky et al., 2016), SI (Zhu et al., 2017) and RWalk (Beng et al., 2019). Finally, _parameter isolation methods_ use dynamic architectures to incorporate knowledge from previous experiences to mitigate interference (Zhu et al., 2017).
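As a concrete illustration of the replay-based family, the following minimal sketch (not taken from the paper or any specific prior work; the model, optimizer, and datasets are placeholders in the Hugging Face/PyTorch style) mixes a small buffer of past samples into fine-tuning on a stream of experiences:

```python
import random
import torch
from torch.utils.data import DataLoader

def replay_fine_tune(model, optimizer, experiences, buffer_capacity=200, epochs=1):
    """Sequentially fine-tune `model` on a stream of datasets ("experiences"),
    mixing a small buffer of past samples into each step (replay-based CL).

    Each dataset item is assumed to be a dict of fixed-length tensors accepted
    by the model, e.g. {"input_ids": ..., "attention_mask": ..., "labels": ...}.
    """
    buffer = []  # flat list of samples kept from earlier experiences
    device = next(model.parameters()).device

    for experience in experiences:
        current = list(experience)      # materialize current samples
        mixed = current + buffer        # replay: old samples join the new ones
        loader = DataLoader(mixed, batch_size=8, shuffle=True)

        model.train()
        for _ in range(epochs):
            for batch in loader:
                batch = {k: v.to(device) for k, v in batch.items()}
                loss = model(**batch).loss   # HF-style models return .loss
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()

        # Refresh the buffer with a random subset of everything seen so far.
        buffer = random.sample(mixed, min(buffer_capacity, len(mixed)))
    return model
```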
## 3. Experimental design
In this section, we describe the experimental setup of our study. We carefully control our data and model setup to implement our out-of-distribution scenario. We first outline the construction of our dataset and the generation of OOD data for continual fine-tuning. Next, we discuss the pre-training procedure of our models, the target downstream tasks and evaluation metrics. We present the results of our experiments in Section 4.
### Dataset construction
Pre-training language models from scratch requires a large amount of data for the loss of the model to converge. With that in mind, we constructed our large dataset using programs crawled from GitHub using Google BigQuery1. Specifically, we focused on Java programs and began by collecting all Java files stored in GitHub repositories. Next, we used Group (Zhu et al., 2017) to extract all methods defined in the Java files along with their API usage sequences. We extracted the API usage sequences to facilitate our data splitting and obtain the position of each API site inside the methods to implement our downstream tasks. Each sample consists of all the tokens of a method. To avoid duplication bias in our experiments (Beng et al., 2019), we deduplicated the dataset by comparing the hash of each method. The resulting dataset contains more than 68M Java methods. For our experiments, we shuffled these 68M methods and randomly selected 10M methods to constitute our initial dataset. Fig. 2 illustrates how we further split the data for our experiments. Because we chose to pre-train our PLMs from scratch, we have to split our data into in-distribution (ID) data, used for model pre-training, and OOD data, used for continual fine-tuning. We also need to properly extract the OOD data to align with our scenario of introducing new, unseen APIs over time to the PLM during fine-tuning.
Footnote 1: [https://cloud.google.com/bigquery](https://cloud.google.com/bigquery)
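The hash-based deduplication step can be sketched as follows (illustrative only; the exact normalization and hash function used by the authors are not specified here):

```python
import hashlib

def deduplicate_methods(methods):
    """Keep one copy of each Java method body, comparing hashes of the
    whitespace-normalized source (a simple stand-in for the paper's hashing)."""
    seen, unique = set(), []
    for source in methods:
        normalized = " ".join(source.split())               # collapse whitespace
        digest = hashlib.md5(normalized.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(source)
    return unique
```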
_Out-of-distribution dataset - \(\mathcal{D}_{OOD}\)._ We create five OOD datasets, \(\mathcal{D}^{1}_{OOD},...,\mathcal{D}^{5}_{OOD}\). Each OOD dataset represents a unique domain that encompasses a high-level functionality of APIs. For example, we have a domain _Security_ that comprises APIs related to programming security-related code and a domain _Guava_ that includes only APIs from the Guava2 library. To create each OOD dataset, we randomly select 10 interfaces from packages/libraries related to their domain. Finally, we associate to each domain dataset all APIs
Figure 2. Procedure to extract the ID data used for model pre-training, and the OOD data used for continual fine-tuning.
within the selected interfaces, excluding class construction methods. Table 1 summarizes the dataset \(\mathcal{D}_{OOD}\), which contains 147,245 samples in total.
To form each OOD dataset, we select samples from the pool of 10 million Java methods that manipulate at least one of their associated APIs. In our experiments, we perform continual fine-tuning on the training sets associated with the OOD datasets \(\mathcal{D}_{OOD}^{1},...,\mathcal{D}_{OOD}^{5}\) sequentially. Therefore, to prevent data leakage, we exclude samples that manipulate APIs from multiple domains. This elimination of samples removes a significant threat to the validity of our OOD scenario and ensures that APIs are introduced as intended during the fine-tuning process. To obtain representative test sets, we randomly select 10% of samples that manipulate each API within each OOD dataset and use the selected samples to form the corresponding domain test set.
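The selection and splitting procedure for one OOD domain can be sketched as follows (illustrative; the sample schema and field names are assumptions, not the actual data format):

```python
import random
from collections import defaultdict

def build_ood_domain(samples, domain_apis, all_domain_apis, test_ratio=0.10, seed=0):
    """Select methods that use at least one API of `domain_apis` and none from
    the other domains, then hold out ~10% of the users of each API for testing.

    `samples` is a list of dicts like {"code": ..., "apis": set_of_api_names};
    the field names are placeholders, not the paper's actual schema.
    """
    other_apis = all_domain_apis - domain_apis
    domain_samples = [s for s in samples
                      if s["apis"] & domain_apis and not (s["apis"] & other_apis)]

    rng = random.Random(seed)
    by_api = defaultdict(list)
    for idx, s in enumerate(domain_samples):
        for api in s["apis"] & domain_apis:
            by_api[api].append(idx)

    test_ids = set()
    for api, idxs in by_api.items():
        k = max(1, int(test_ratio * len(idxs)))
        test_ids.update(rng.sample(idxs, k))

    test = [domain_samples[i] for i in sorted(test_ids)]
    train = [s for i, s in enumerate(domain_samples) if i not in test_ids]
    return train, test
```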
_In-distribution dataset_ - \(\mathcal{D}_{ID}\). We obtain \(\mathcal{D}_{ID}\) by removing the samples in \(\mathcal{D}_{OOD}\) from the initial data. Then, we shuffle \(\mathcal{D}_{ID}\) and randomly select 50,000 samples for test (\(\mathcal{D}_{ID\_test}\)). \(\mathcal{D}_{ID\_PT}\) contains the remaining samples for pre-training, and we randomly select 100,000 for model validation (\(\mathcal{D}_{ID\_PT\_valid}\)). In particular, those samples allow us to monitor the evolution of the loss of the model on an independent validation set to avoid overfitting the pre-training data. In total, the pre-training set \(\mathcal{D}_{ID\_PT\_train}\) contains more than 9M samples to pre-train the models.
### Models and tasks setup
In this work, we consider two widely-used deep learning architectures for code: a RoBERTa-like encoder (Zhu et al., 2017) and a GPT2-like decoder (Zhu et al., 2018).
_Decoder_ - \(\mathcal{M}_{dec}\). The decoder model is based on the GPT-2 architecture, with the same hyperparameters, and is pre-trained using a causal language modeling objective, _i.e._, left-to-right next token prediction. As we conducted our experiments under limited resources, we implemented a small version of GPT-2 with 110 million trainable parameters and pre-train the model for 100,000 steps. We use early stopping to select the best model checkpoint, based on the loss on the validation set \(\mathcal{D}_{ID\_PT\_valid}\).
_Encoder_ - \(\mathcal{M}_{enc}\). The encoder model is based on the RoBERTa architecture, with the same hyperparameters, and is pre-trained using a masked language modeling objective. We implemented a base version of RoBERTa. The model has 125 million trainable parameters and is pre-trained similarly to the decoder model, with early stopping used to select the best checkpoint.
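For concreteness, instantiating the two architectures with the Hugging Face transformers library could look as follows (the vocabulary size is a placeholder; with default GPT-2-small and RoBERTa-base sizes, the parameter counts land in the range reported above):

```python
from transformers import (GPT2Config, GPT2LMHeadModel,
                          RobertaConfig, RobertaForMaskedLM)

# Decoder: GPT-2-small-style causal LM (default depth/width of GPT2Config).
dec_config = GPT2Config(vocab_size=50_000)       # vocabulary size is a placeholder
decoder = GPT2LMHeadModel(dec_config)

# Encoder: RoBERTa-base-style masked LM (default depth/width of RobertaConfig).
enc_config = RobertaConfig(vocab_size=50_000)
encoder = RobertaForMaskedLM(enc_config)

print(sum(p.numel() for p in decoder.parameters()) / 1e6, "M parameters (decoder)")
print(sum(p.numel() for p in encoder.parameters()) / 1e6, "M parameters (encoder)")

# Pre-training would then use e.g. transformers.Trainer with an
# EarlyStoppingCallback monitoring the validation loss, as described above.
```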
_Downstream tasks_. We employ two downstream tasks to evaluate the ability of our PLMs of code to learn and adapt to new software data that introduce new, unseen APIs over time. Fig. 3 illustrates both tasks. For API call prediction, the model takes as
\begin{table}
\begin{tabular}{l l l l r r} \hline \hline
Dataset & Domain & Package & Interfaces & \# train & \# test \\ \hline
\(\mathcal{D}^{1}_{OOD}\) & General & java.util.concurrent & BlockingQueue, ThreadPoolExecutor & 47,213 & 5,239 \\
 & & java.math & BigInteger & & \\
 & & java.util & Based, TreeSet & & \\
 & & java.net & ForkJoinPool, Proxy, ServerSocket, SocketAddress, URLEncoder & & \\ \hline
\(\mathcal{D}^{2}_{OOD}\) & Security & java.security & Cipher, CodeSource, Identity, KeyFlatancy, KeyFlatancy, KeyFlatancy, Provider, Security, Timestamp & 27,189 & 3,017 \\ \hline
\(\mathcal{D}^{3}_{OOD}\) & Android & android.view & Display, InputEvent, Window & 28,400 & 3,150 \\
 & & android.widget & CheckBox, GridLayout & & \\
 & & android.media & AudioFormat, ImageReader & & \\
 & & android.hardware & Camera, Sensor & & \\
 & & android.database & DatabaseUtils & & \\ \hline
\(\mathcal{D}^{4}_{OOD}\) & Web & org.springframework & CacheManager, ClassPathResource, DataBuffer, HttpMessage, Hipfile-quote, JdbcTemplate, MessageChannel, MessageHandler, TaskExecutor & 16,295 & 1,805 \\ \hline
\(\mathcal{D}^{5}_{OOD}\) & Guava & com.google.common.graph & GraphBuilder, Network & 13,448 & 1,489 \\
 & & com.google.common.io & ByteSource, ByteStreams & & \\
 & & com.google.common.cache & CacheBuilder, LoadingCache & & \\
 & & com.google.common.collect & ListMultimap, Multimap & & \\
 & & com.google.common.base & CharMatcher, Splitter & & \\ \hline \hline
\end{tabular}
\end{table}
Table 1. Out-of-distribution dataset details.
Figure 3. Overview of the downstream tasks. In the API call prediction task, the model outputs a list of top-\(k\) candidates to predict the API call token (_i.e._, min). In the API usage prediction task, the model attempts to predict all the tokens constituting the API usage (_interface name, method name, parameters and syntactical tokens_). The models only leverage left-context tokens to generate a prediction.
input all the tokens of the method preceding the call site of the API and generates top-\(k\) candidates. For API usage prediction, the model takes as input the same tokens as for the API call prediction task, but attempts to generate the whole API usage (interface name, method name, parameters and syntactical tokens), which constitutes a more challenging task. Note that conversely to \(\mathcal{M}_{dec}\), the encoder's architecture is not suitable for generation tasks. Therefore, we add a randomly initialized language modeling head on top of it for fine-tuning using the OOD datasets. As a result, we expect \(\mathcal{M}_{enc}\) to be less stable than \(\mathcal{M}_{dec}\) and more prone to catastrophic forgetting since the language modeling head is not pre-trained. This comparison provides valuable insights into the robustness of two different architectures.
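As an illustration of how these two tasks can be run with the decoder at inference time, the following sketch uses Hugging Face-style calls (the paper's exact decoding settings are not given here; the function names and constants are illustrative):

```python
import torch

def predict_api_call(model, tokenizer, left_context, k=10):
    """API call prediction: rank the k most likely next tokens after the call
    site, given the tokens preceding it (single-token prediction)."""
    inputs = tokenizer(left_context, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits          # causal LM logits
    next_token_logits = logits[0, -1]            # distribution over the next token
    top_k = torch.topk(next_token_logits, k).indices
    return [tokenizer.decode(int(t)) for t in top_k]

def predict_api_usage(model, tokenizer, left_context, max_new_tokens=32):
    """API usage prediction: greedily generate the whole API usage
    (interface, method, parameters and syntax) after the left context."""
    inputs = tokenizer(left_context, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens,
                            do_sample=False,
                            pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(output[0, inputs["input_ids"].shape[1]:])
```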
_Evaluation metrics._ We measure the performance of the models on both downstream tasks with metrics used in prior works. For API call prediction, we report the Pass@k (Pasz and Karimiredan, 2018), which gives the percentage of correct predictions when considering lists of \(k\) candidates. For API usage prediction, we report BLEU score, Accuracy (exact match), and CodeBLEU (Zhu et al., 2019).
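For reference, the two task-level metrics that do not come from standard libraries can be computed as in this small sketch (illustrative; Pass@k here denotes the top-\(k\) hit rate described above, and EM is an exact string match):

```python
def pass_at_k(ranked_candidates, references, k):
    """Percentage of examples whose reference API call appears in the top-k list."""
    hits = sum(ref in cands[:k] for cands, ref in zip(ranked_candidates, references))
    return 100.0 * hits / len(references)

def exact_match(predictions, references):
    """Accuracy (EM): percentage of generated API usages identical to the reference."""
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)
```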
To measure how the models perform in a continual learning environment, we use two meta-metrics adapted from prior works (Golovolovolov et al., 2013; Golovolovolov and Levesque, 2015): the _Average (A)_ and _Forgetting (F)_ metrics. We define the average \(A_{M}\) of a metric \(M\) on a test dataset \(\mathcal{D}^{i}_{OOD}\) as:
\[A_{M}=\frac{1}{T}\sum_{j=i}^{T}M_{j}(\mathcal{D}^{i}_{OOD})\,,\]
where \(j\) refers to the next incremental learning steps after the \(i\)-th included. \(M_{j}\) denotes an evaluation metric, _e.g._, Pass@k, computed at time step \(j\) on the test set and \(T\) denotes the maximum number of fine-tuning steps, _i.e._, five in our case. The Average metric only gives information on how accurate the model is but does not provide any insight into its ability to mitigate catastrophic forgetting. We define the forgetting \(F^{k}_{M}\) of a metric \(M\) on a test dataset \(\mathcal{D}^{i}_{OOD}\) at time step \(k\) as:
\[F^{k}_{M}=M_{i}(\mathcal{D}^{i}_{OOD})\,-\,M_{k}(\mathcal{D}^{i}_{OOD})\,,\,\, i<k\;.\]
This is the difference between the first time the metric is computed, _i.e._, after fine-tuning the model on \(\mathcal{D}^{i}_{OOD}\) at time step \(i\), and the metric computed at time step \(k\). \(F^{k}_{M}\) gives information on the stability of the model, _i.e._, its capability not to forget past data. Therefore, the lower \(F^{k}_{M}\), the better.
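A small sketch of how the two meta-metrics can be computed from a matrix of per-step results (following the formulas above; the matrix layout is an assumption about bookkeeping, not the authors' code):

```python
import numpy as np

def meta_metrics(R, i):
    """Compute the Average (A) and Forgetting (F) meta-metrics for domain i.

    R is a (T x T) array where R[j, i] holds the task metric (e.g. Pass@1) on
    test set i measured after fine-tuning step j; entries with j < i are unused.
    Indices are 0-based, and F is taken at the final step, as in the experiments.
    """
    R = np.asarray(R, dtype=float)
    T = R.shape[0]
    A = R[i:, i].sum() / T            # follows the definition of A_M above
    F = R[i, i] - R[-1, i]            # first minus last measurement on domain i
    return A, F
```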
_Implementation details._ To pre-train \(\mathcal{M}_{dec}\) and \(\mathcal{M}_{enc}\), we used four Tesla V100-SXM2-32GB GPUs. It took about 7 days to pre-train \(\mathcal{M}_{dec}\), and 2 days to pre-train \(\mathcal{M}_{enc}\). For fine-tuning and inference, we used a single Tesla V100-SXM2-32GB GPU. We used Huggingface's libraries (Zhu et al., 2019) to implement the models and store the datasets. To implement the continual learning approaches, we used Avalanche (Pasz and Karimiredan, 2018). We provide all the implementation details of our experiments and release our data publicly in our replication package (see Data Availability section).
## 4. Experimental Results
### How does \(\mathcal{M}_{dec}\) generalize to ID and OOD data in zero-shot?
In this experiment, we evaluate the performance of the model \(\mathcal{M}_{dec}\) on the ID and OOD test data in a zero-shot setting for both downstream tasks. We do not experiment with \(\mathcal{M}_{enc}\) as the model is not capable of generating code before fine-tuning and, therefore, cannot operate in a zero-shot setting. The purpose of this experiment is twofold. First, it aims to validate the experimental setup of our study. If we observe significant differences in the evaluation metrics obtained on the ID and OOD datasets, it would suggest that our OOD scenario is well-formed and reasonable. Secondly, significant gaps between the ID and OOD test data imply that PLMs such as \(\mathcal{M}_{dec}\) still require the use of robust transfer learning or continual learning techniques to generalize to new data without forgetting about past data.
_API call prediction._ Table 2 reports the _Pass@1_, _Pass@5_ and _Pass@10_ on the ID and OOD test datasets. The results show that the model performs well on ID data, reaching almost 73% in _Pass@1_. However, when tested on OOD data, the performance drops significantly. The decline in performance is less severe when considering more API call candidates, but it remains a significant issue. Furthermore, variations in the performance decline are observed across different OOD datasets. For example, the model performs better on the Security domain (\(\mathcal{D}^{2}_{OOD}\)) than domains such as Android (\(\mathcal{D}^{3}_{OOD}\)) or Web (\(\mathcal{D}^{4}_{OOD}\)), which likely contain more domain-specific API calls.
_API usage prediction._ Table 3 reports the _BLEU_ score, Accuracy (_EM_) and _CodeBLEU_ score on both ID and OOD test datasets. The results indicate that the model performs poorly on OOD data in comparison to ID data, with significant decreases in all evaluation metrics. Additionally, we notice that the _EM_ and _CodeBLEU_ metrics vary similarly to the _Pass@k_ metrics on the API call prediction task. The Android and Web domains experience the most severe drops, whereas the Security domain experiences the least severe drop.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & \multicolumn{3}{c}{Metrics} \\ \cline{2-4} Dataset & Pass@1 & Pass@5 & Pass@10 \\ \hline \(\mathcal{D}_{ID\_test}\) & 72.88 & 83.30 & 85.60 \\ \hline \(\mathcal{D}_{OOD}\) & 40.82 (44\% ) & 51.19 (38.5\% ) & 54.17 (36.7\% ) \\ \hline \(\mathcal{D}^{1}_{OOD}\) & 49.91 (31.6\% ) & 62.0 (25.6\% ) & 64.46 (24.6\% ) \\ \(\mathcal{D}^{2}_{OOD}\) & 53.72 (26.3\% ) & 62.59 (24.8\% ) & 64.93 (24.2\% ) \\ \(\mathcal{D}^{3}_{OOD}\) & 23.78 (67.4\% ) & 32.64 (60.8\% ) & 36.33 (57.6\% ) \\ \(\mathcal{D}^{4}_{OOD}\) & 30.72 (57.9\% ) & 43.67 (47.3\% ) & 47.89 (44\% ) \\ \(\mathcal{D}^{5}_{OOD}\) & 37.54 (48.6\% ) & 49.53 (40.6\% ) & 53.22 (47.9\% ) \\ \hline \hline \end{tabular}
\end{table}
Table 2. API call prediction results in zero-shot using \(\mathcal{M}_{dec}\).
Our results demonstrate that the model \(\mathcal{M}_{dec}\) (without fine-tuning) is unable to generalize to OOD data while showing strong performance on ID data. Our findings also support the validity of our OOD dataset as a realistic and meaningful test of the model's ability to adapt to new data in a continuous environment.
### Do models forget about past data using classical transfer learning?
In this section, we evaluate how classical transfer learning, _i.e._, using fine-tuning as in prior work, performs in the continual learning scenario. We fine-tune the models \(\mathcal{M}_{dec}\) and \(\mathcal{M}_{enc}\) sequentially on the stream of OOD datasets \(\mathcal{D}^{1}_{OOD},...,\mathcal{D}^{5}_{OOD}\). We refer to this approach as "naive fine-tuning", a common term used in the continual learning literature to refer to classical transfer learning, as it does not utilize mechanisms to address catastrophic forgetting. We report the results in terms of _Pass@1_ for API call prediction and _Accuracy (EM)_ for API usage prediction. Fig. 4 illustrates the evolution of the _Pass@1_ and _EM_ metrics on the OOD test sets throughout the fine-tuning steps for both models. Each column of a heatmap refers to the evolution of the performance of the model on a particular test set, and each row refers to a new incremental fine-tuning step. Note that we do not compute the metric on a test set whose corresponding training set has not been seen yet by the model. To quantify catastrophic forgetting, we report the Forgetting (\(F\)) metrics of the _Pass@1_ and _EM_ metrics in Table 4. We do not report all the values for every previously introduced metric as we have a strict page limit, and report them in our replication package.
The heatmaps in Fig. 4 also support our intuition stated in Section 3.2 that \(\mathcal{M}_{enc}\) may be less stable than \(\mathcal{M}_{dec}\) due to the additional language modeling head randomly initialized.
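For concreteness, the naive sequential fine-tuning and evaluation procedure described above can be sketched as follows (illustrative Python; `fine_tune` and `evaluate` are placeholder routines, not the authors' implementation):

```python
def naive_sequential_fine_tuning(model, optimizer, ood_train_sets, ood_test_sets,
                                 fine_tune, evaluate):
    """Classical ("naive") transfer learning on a stream of OOD domains:
    fine-tune on each training set in turn, then measure the task metric on
    every test set seen so far. The returned lower-triangular matrix is what
    the heatmaps visualize (rows = fine-tuning steps, columns = domains).

    `fine_tune(model, optimizer, dataset)` and `evaluate(model, dataset) -> float`
    stand in for the training and evaluation routines.
    """
    T = len(ood_train_sets)
    results = [[None] * T for _ in range(T)]
    for step, train_set in enumerate(ood_train_sets):
        fine_tune(model, optimizer, train_set)
        for seen in range(step + 1):
            results[step][seen] = evaluate(model, ood_test_sets[seen])
    return results
```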
_Forgetting metrics_. In Table 4, we calculate the Forgetting metric for the _Pass@1_ and \(EM\) metrics and for both models. Note that we calculate the \(F\) metric at the final time step of the continual fine-tuning. According to the heatmaps of Fig. 4, the \(F^{5}\) metric of a domain is the difference between the first and last value of its corresponding column. This difference represents the amount of forgetting that has occurred on each OOD domain during fine-tuning. The \(\Delta t\) in the table indicates how recently the model was fine-tuned on a particular domain dataset. We notice that for the decoder \(\mathcal{M}_{dec}\), the forgetting is less severe for the _Pass@1_ (used in the API call prediction) than for the \(EM\) (used in the API usage prediction). The difference can be attributed to the fact that the API call prediction task is substantially easier than the API usage prediction task. In general, we observe more severe forgetting for the encoder, which further confirms our intuition about the lack of stability of \(\mathcal{M}_{enc}\).
Our results and observations illustrate that the problem of forgetting about past data is a major issue for both studied models and significantly more severe for the model \(\mathcal{M}_{enc}\). Even with a low number of fine-tuning steps, catastrophic forgetting is already prominent. By considering more fine-tuning steps, we can expect the problem to exacerbate.
We conclude that classical transfer learning, the most commonly used fine-tuning method in prior work, is not sufficient and robust enough to allow the model to adapt to new data while retaining knowledge of past data.
### How do continual learning approaches compare to classical transfer learning?
To tackle the problem of catastrophic forgetting highlighted in our previous experiments, we propose to leverage some commonly used continual learning approaches from the literature. In this experiment, the naive fine-tuning approach is the lower-bound baseline, as it has no designed mechanism to mitigate catastrophic forgetting. We begin by introducing an upper-bound approach, referred to as "cumulative fine-tuning", which involves storing all training samples from each OOD training set cumulatively. With this approach, we perform continual fine-tuning using all samples from previous fine-tuning steps in addition to the current ones. This approach is usually upper-bound in continual learning settings as by storing all samples from previous data, the model can optimize its learning to generalize better to the whole stream of data. However, the cumulative fine-tuning approach is not usable in practice for a couple of reasons: (1) we may not always have access to all previous data at any given time, and (2) it requires storing all previous samples and significantly more computations during fine-tuning. This upper-bound approach aims to minimize forgetting while achieving the best overall performance. We compare the cumulative and naive approaches in Fig. 5 and Fig. 6. Next, we introduce additional CL methods, including a replay-based method and three regularization-based methods: EWC [31], SI [66], and RWalk [9]. One advantage of these three methods over the replay method is that they do not require storing samples from previous data while fine-tuning. We report the Average (\(A\)) and Forgetting (\(F\)) metrics for both tasks and models on the _Pass@1_ and \(EM\) metrics in Table 5 and Table 6. Note that there is no Forgetting metric for Guava as it is the last domain the PLMs are fine-tuned on.
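As a concrete illustration of the regularization-based family used here, the following is a minimal EWC-style sketch (not the authors' implementation, which relies on Avalanche): after each experience, a diagonal Fisher estimate is stored and a quadratic penalty discourages changes to important parameters during the next experience.

```python
import torch

class EWCPenalty:
    """Stores a parameter snapshot and a diagonal Fisher estimate after an
    experience, and returns the quadratic EWC penalty during the next one."""

    def __init__(self, lam=100.0):
        self.lam = lam
        self.anchors = None   # parameter values after the previous experience
        self.fisher = None    # diagonal Fisher information estimate

    def consolidate(self, model, data_loader, device="cpu"):
        """Approximate the diagonal Fisher from squared gradients of the LM loss."""
        fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()
                  if p.requires_grad}
        model.eval()
        for batch in data_loader:
            model.zero_grad()
            batch = {k: v.to(device) for k, v in batch.items()}
            model(**batch).loss.backward()
            for n, p in model.named_parameters():
                if n in fisher and p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
        self.fisher = {n: f / max(len(data_loader), 1) for n, f in fisher.items()}
        self.anchors = {n: p.detach().clone() for n, p in model.named_parameters()}

    def penalty(self, model):
        """Quadratic penalty added to the fine-tuning loss on the next experience."""
        if self.anchors is None:
            return torch.tensor(0.0)
        loss = 0.0
        for n, p in model.named_parameters():
            if n in self.fisher:
                loss = loss + (self.fisher[n] * (p - self.anchors[n]) ** 2).sum()
        return self.lam / 2.0 * loss

# During fine-tuning on a new experience:
#   total_loss = lm_loss + ewc.penalty(model)
```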
_Fine-tuning details_. We use the same fine-tuning procedure as in the previous experiment. For the replay baseline, we set the buffer size to 200, _i.e._, the number of samples stored from past OOD training sets. We provide all our hyperparameters and further details about the implementations in our replication package.
_Cumulative fine-tuning_. In Fig. 5, we compare the naive and cumulative approaches for the API call prediction task (_Pass@1_) on both decoder and encoder models. Each curve illustrates the evolution of the _Pass@1_ on a particular OOD test set. The figure further demonstrates how the naive approach (bottom-left part of the figure) with the encoder leads to significantly more forgetting than for the decoder, as previously discussed. At the left of Fig. 5, we observe that the cumulative fine-tuning approach effectively eliminates the catastrophic forgetting issue for both models. Specifically, the _Pass@1_ does not decrease over time and even increases
Figure 5. Comparison of naive and cumulative fine-tuning settings for both models on API call prediction.
Figure 6. Comparison of naive and cumulative fine-tuning settings for both models on API usage prediction.
throughout the fine-tuning, indicating improvement during continual fine-tuning, also known as positive transfer. In Fig. 6, we make the same observations for the API usage prediction task (_EM_).
_Continual learning approaches._ Table 5 reports the Average and Forgetting metrics of the _Pass@1_ on each OOD test set for \(\mathcal{M}_{dec}\) and \(\mathcal{M}_{enc}\), with the naive fine-tuning approach as baseline. Similarly to Section 4.2, we compute the \(F\) metric at the end of the continual fine-tuning. Firstly, we observe that for both models, the cumulative fine-tuning approach is the best option to mitigate catastrophic forgetting and generally leads to the best \(A_{Pass@1}\). With the cumulative approach, the \(F^{5}_{Pass@1}\) metric is always negative, which indicates a positive transfer (an increase in the _Pass@1_). For instance, we get \(-8.02\) in \(F^{5}_{Pass@1}\) for \(\mathcal{M}_{dec}\) in the Security domain, _i.e._, an increase of +8.02 in the metric through fine-tuning. However, we observe large gaps between the \(A_{Pass@1}\) obtained using the cumulative approach and the naive approach on the Guava dataset (last fine-tuning step). We hypothesize that with an ever-increasing replay buffer, the models can no longer learn from new data and thus lose their ability to adapt with time. In addition to being computationally intensive, the cumulative fine-tuning approach is neither scalable nor robust, as previously mentioned. Overall, all other CL approaches, except EWC, greatly reduce forgetting and show a superior average _Pass@1_ compared to the naive approach. The Replay approach generally produces the best or second best \(A_{Pass@1}\). Without the cumulative approach, RWalk is the best method to mitigate forgetting for \(\mathcal{M}_{dec}\), whereas SI is better for \(\mathcal{M}_{enc}\). In Table 6, we report the results for the API usage prediction task. We observe similar trends, except that the Replay approach is less effective for both models. However, RWalk and SI are the best methods for \(\mathcal{M}_{dec}\) and \(\mathcal{M}_{enc}\), respectively.
In this final experiment, we demonstrate that continual learning methods, including two replay-based methods (Replay and Cumulative) and two regularization-based methods (SI and RWalk), effectively reduce catastrophic forgetting while achieving similar or superior effectiveness compared to classical transfer learning on both tasks.
## 5. Discussion
In this section, we address some threats to the validity of our study. We then discuss the broader impact of our study and various opportunities for future work.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline & & \multicolumn{2}{c}{General} & \multicolumn{2}{c}{Security} & \multicolumn{2}{c}{Android} & \multicolumn{2}{c}{Web} & \multicolumn{2}{c}{Guava} \\ \cline{3-11} Model & Method & \(\mathcal{A}_{EM}\uparrow\) & \(F^{5}_{EM}\downarrow\) & \(\mathcal{A}_{EM}\) & \(F^{5}_{EM}\) & \(\mathcal{A}_{EM}\) & \(F^{5}_{EM}\) & \(\mathcal{A}_{EM}\) & \(F^{5}_{EM}\) & \(\mathcal{A}_{EM}\) & \(F^{5}_{EM}\) \\ \hline \multirow{8}{*}{\(\mathcal{M}_{dec}\)} & Naive & 37.32 & 13.00 & 44.96 & 13.55 & 32.31 & 10.68 & 41.90 & 5.09 & 44.87 & – \\ & EWC [(31)] & 36.88 & 12.95 & 44.84 & 13.08 & 33.92 & 9.46 & 39.00 & 6.73 & 45.71 & – \\ & SI [(66)] & 40.36 & 8.26 & 49.88 & 6.89 & 30.01 & 3.24 & 36.95 & 16.65 & 43.14 & – \\ & RWalk [(9)] & 40.43 & 6.23 & 47.11 & 40.4 & 33.34 & 2.63 & 36.54 & 2.13 & 41.22 & – \\ & Replay & 39.49 & 11.11 & 46.88 & 8.21 & 33.39 & 7.63 & 39.49 & 6.08 & 43.65 & – \\ & Cumulative & **43.28** & **2.02** & **47.26** & **13.33** & **56.09** & **-2.28** & 27.92 & **-4.59** & 31.35 & – \\ \hline \hline \multirow{8}{*}{\(\mathcal{M}_{enc}\)} & Naive & 21.41 & 11.80 & 24.09 & 22.74 & 19.30 & 11.91 & 26.32 & 7.23 & 25.71 & – \\ & EWC [(31)] & 21.32 & 11.53 & 26.36 & 21.02 & 19.43 & 11.96 & 25.74 & 8.38 & 28.74 & – \\ \cline{1-1} & SI [(66)] & 27.22 & 50.08 & 30.85 & 8.28 & 18.57 & 22.00 & 23.03 & 16.5 & 21.26 & – \\ \cline{1-1} & RWalk [(9)] & 25.21 & 8.80 & 29.25 & 12.23 & 19.10 & 7.62 & 25.00 & 4.28 & 24.23 & – \\ \cline{1-1} & Replay & 25.48 & 13.54 & 29.94 & 13.96 & 18.09 & 11.88 & 24.51 & 5.92 & 26.48 & – \\ \cline{1-1} & Cumulative & **30.50** & **35.89** & **6.88** & **24.81** & **-4.88** & 21.88 & **-1.97** & 18.43 & – \\ \hline \hline \end{tabular}
\end{table}
Table 6. Continual learning approaches results for API usage prediction using the Accuracy (EM) metric.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline & & \multicolumn{2}{c}{General} & \multicolumn{2}{c}{Security} & \multicolumn{2}{c}{Android} & \multicolumn{2}{c}{Web} & \multicolumn{2}{c}{Guava} \\ \cline{3-11} Model & Method & \(\mathcal{A}_{\text{Pnas1}}\uparrow\) & \(F^{5}_{\text{Pnas1}}\downarrow\) & \(\mathcal{A}_{\text{Pnas1}}\uparrow\) & \(\mathcal{B}^{5}_{\text{Pnas1}}\downarrow\) & \(\mathcal{A}_{\text{Pnas1}}\uparrow\) & \(\mathcal{B}^{5}_{\text{Pnas1}}\downarrow\) & \(\mathcal{A}_{\text{Pnas1}}\uparrow\) & \(\mathcal{B}^{5}_{\text{Pnas1}}\downarrow\) & \(\mathcal{A}_{\text{Pnas1}}\uparrow\) & \(\mathcal{B}^{5}_{\text{Pnas1}}\downarrow\) \\ \hline \multirow{8}{*}{\(\mathcal{M}_{dec}\)} & Naive & 53.49 & 5.64 & 57.21 & 6.71 & 32.75 & 6.77 & 40.06 & 1.80 & 50.47 & – \\ & EWC [(31)] & 53.22 & 7.02 & 57.16 & 7.49 & 33.73 & 5.72 & 60.14 & 3.77 & 49.59 & – \\ & SI [(66)] & 54.65 & 3.57 & **59.26** & 3.45 & 34.04 & 2.39 & 38.93 & 13.60 & 48.16 & – \\ & RWalk [(9)] & 54.38 & 2.39 & 57.39 & 2.80 & 31.64 & 1.97 & 38.19 & 1.65 & 45.28 & – \\ & Replay & **35.66** & 4.41 & 58.87 & 2.98 & 34.60 & 2.01 & **11.12** & 2.41 & 80.72 & – \\ & Cumulative & 55.63 & **0.31** & 58.44 & **8.02** & **35.74** & **9.73** & 32.99 & **-3.01** & 42.79 & – \\ \hline \hline \multirow{8}{*}{\(\mathcal{M}_{enc}\)} & Naive & 38.78 & 10.99 & 40.49 & 23.38 & 24.01 & 11.15 & 30.05 & 10.99 & 38.55 & – \\ & EWC [(31)] & 39.38 & 9.84 & 44.10 & 22.15 & 23.93 & 10.58 & 29.22 & 7.53 & 40.66 & – \\ \cline{1-1} & SI [(66)] & 44.29 & 5.94 & 50.05 & 13.10 & 21.39 & 60.22 & 27.79 & 25.05 & 35.87 & – \\ \cline{1-1} & RWalk [(9)] & 43.42 & 6.07 & 48.05 & 14.74 & 22.23 & 7.10 & 29.75 & 4.37 & 36.10 & – \\ \cline{1-1} & Replay & 45.15 & 5.48 & 51.56 & 10.56 & 24.31 & 8.27 & 28.58 & 3.92 & 40.22 & – \\ \cline{1-1} & Cumulative & **48.06** & **0.92** & **56.04** & **34.35** & **
### Threats to validity
_Threats to external validity_. We identified a main threat regarding the monolingual aspect of our dataset. Our OOD scenario requires extracting API usage sequences from the source code. Therefore, integrating more programming languages demands substantial additional effort, which we deliberately leave for future work. In addition, the construction of our dataset does not include any programming language-specific design and avoids any data leakage between the ID and OOD data. Consequently, it is highly likely that our results are not affected by the programming language of the data.
Another threat related to the data is the choice of the OOD domains and APIs. To mitigate this threat, we selected five domains covering different types of programs. Specifically, we selected 10 random interfaces per domain. Our results show that catastrophic forgetting is observed consistently for all domains, and the selection of different interfaces would result in different intensities in forgetting. We leave the study of this qualitative aspect for future work.
The choice of the downstream tasks presents another external threat to validity of our study. We employed two generation tasks, API call and API usage prediction. We focus on APIs-related tasks because APIs are an important part of the distribution of code tokens in programs and give lots of information about the semantics of programs. We observe significant catastrophic forgetting in these two API-related tasks and hypothesize that catastrophic forgetting could appear in other SE tasks because of the importance of APIs in code. For instance, previous work found that APIs play important roles in writing the summarization of code (Krishnam et al., 2017), detecting code clones (Krishnam et al., 2017), retrieving code given a query (Krishnam et al., 2017), etc. We leave the investigation of the OOD phenomenon in other tasks as future work.
We identified an external threat to validity related to the limited number of fine-tuning steps in our continual fine-tuning settings. In practice, a PLM deployed to a real production environment would potentially face a larger number of fine-tuning steps throughout its lifetime. In this paper, we showed that both PLMs suffer from severe catastrophic forgetting, although we only consider five fine-tuning steps. We also demonstrated that more steps generally result in more forgetting about past data.
Finally, the selection of the size of the PLMs, in terms of the number of trainable parameters, constitutes a potential threat to the validity of our study. While increasing the number of parameters may still result in OOD generalization issues due to the design of our datasets, it is uncertain whether catastrophic forgetting would occur with the same magnitude for larger models. Our experiments were performed under limited computational resources, which required us to consider architectures with a limited number of parameters. To mitigate this threat, we maximized the size of the models considering our limited resources. We pre-train PLMs with 110M and 125M parameters which are within the range of PLMs such as CodeBERT (Dosov et al., 2017), CodeT5 (Zhu et al., 2018) or CodeGPT (Zhu et al., 2018).
_Threats to internal validity_. The hyperparameter choices for our CL approaches constitute the main threat to internal validity. We selected our hyperparameters based on values used in prior works about continual learning (Krishnam et al., 2017; Krishnam et al., 2017; Krishnam et al., 2018; Krishnam et al., 2018). These hyperparameters can be optimized for our scenario by using search methods, which tend to have a high computational cost. However, this aspect is not critical to the study as we have already shown the advantages of incorporating continual learning techniques with reasonable hyperparameter values.
_Threats to construct validity_. We identified one threat to construct validity related to the choice of our evaluation metrics. We mitigate this threat by selecting metrics widely used in prior works to evaluate code generation tasks (Zhu et al., 2018; Krishnam et al., 2018). Additionally, we adapted continual learning metrics from prior works (Krishnam et al., 2017; Krishnam et al., 2018) to evaluate our continual fine-tuning scenario.
### Broader impact and opportunities
Our study sheds light on the performance of PLMs of code in a continual learning setting for out-of-distribution generalization. We believe that this initial exploration of continual learning for code (_CL4Code_) will inspire further investigation in this important area. Our findings highlight two potential areas for future research: improving dataset and benchmark creation, and expanding the application of CL4Code to a wider range of use cases.
_Datasets and benchmarks_. Our findings in Section 4.1 highlight a substantial disparity in the performance of a PLM between ID and OOD data. Our results, along with a previous work (Zhu et al., 2018), indicate that evaluating PLMs on ID data often leads to inflated metrics and results in overly optimistic conclusions in terms of the performance. Therefore, it is crucial to develop OOD datasets for code in order to evaluate the real-world generalizability of PLMs, as previously emphasized (Zhu et al., 2018; Zhu et al., 2018). Moreover, aligning dataset designs with continual learning scenarios offers the potential to evaluate the PLM's ability to adapt to changing environments, which is crucial for practical deployment.
Improving benchmarks for PLMs of code is another promising direction for future research. Benchmarks such as CodeXGlue (Zhu et al., 2018) play a crucial role by providing standardized evaluations of models of code and enabling reproducible experimental results. However, as such researches progress at a rapid pace, widely used benchmarks often become outdated quickly. In particular, Kiela et al. (2018) showed that benchmarks such as GLUE (Zhu et al., 2018) in NLP saturate, meaning the milestones set by the benchmark are reached. Thus, continued efforts to enhance benchmarks in deep learning for code are vital in establishing concrete goals and driving research to enhance the performance of the models being evaluated. Recently, Yang et al. (2018) proposed GLUE-X, a comprehensive benchmark consisting of 13 datasets to test PLMs on OOD data across eight NLP tasks. The benchmark includes OOD datasets that are distinct from those in the original GLUE benchmark. Developing OOD benchmarks for code similar to GLUE-X (Zhu et al., 2018) would greatly contribute to the growth of research on OOD generalization for PLMs of code. One potential approach is to compile a new set of OOD datasets that are not included in the existing CodeXGlue benchmark, and use them to test PLMs of code. Furthermore, exploring the design of OOD scenarios specific to software changes, as demonstrated in the present study, can provide a valuable foundation for future code benchmark initiatives. Our dataset and methodology for extracting
OOD samples for API evolution scenarios can serve as a starting point for these endeavors.
_Continual learning for code_. Our findings in Section 4.2 highlight the challenge of catastrophic forgetting that PLMs of code encounter in a continual fine-tuning scenario with OOD data. Our study serves as a starting point for exploring the adaptability of PLMs of code in a variety of continual learning scenarios. For instance, these scenarios can be based on domain adaptation, where PLMs must adapt to new kinds of data such as new, unseen programming languages or code repositories as discussed in prior studies (Zang et al., 2019; Zhang et al., 2020; Zhang et al., 2021). Additionally, incorporating continual learning into a multi-task learning framework is highly relevant to software engineering, given the multitude of downstream tasks involved.
In Section 4.3, our results demonstrate the effectiveness of continual learning methods in mitigating catastrophic forgetting in PLMs of code. We chose to explore these widely used methods as a first step in the research on continual learning for code. In the future, more sophisticated techniques from NLP, as discussed in Section 6.2, can be evaluated. Furthermore, the creation of continual learning methods specifically tailored to source code has the potential to further reduce catastrophic forgetting in PLMs of code.
## 6. Related Work
### Out-of-distribution generalization
_Natural language processing_. Recent studies have revealed that PLMs are susceptible to generating inaccurate predictions when encountering OOD data (Zang et al., 2019; Zhang et al., 2020). In NLP, this issue can manifest itself in situations where the domain of the test data differs from the pre-training data (Zang et al., 2019). One approach to addressing this problem is to fine-tune PLMs on domain-specific datasets using efficient transfer learning techniques. For example, (Zang et al., 2019; Zhang et al., 2020) demonstrated that such approaches help PLMs in learning domain-specific knowledge and improve their generalization to unseen domains. Additionally, new datasets and benchmarks allow for further research on PLM domain adaptation. For instance, Williams et al. (Williams et al., 2020) introduced the MultiNLI dataset, containing text data from a variety of domains for PLM domain adaptation. Conneau et al. (Conneau et al., 2017) proposed a cross-lingual NLI dataset for evaluating the cross-lingual transferability of PLMs. Recently, Yang et al. (Yang et al., 2020) introduced GLUE-X, a benchmark for evaluating PLMs' ability to generalize to OOD data.
_Deep learning for code_. The study of OOD generalization of PLMs of code is an emerging research area. Assessing their generalizability and designing efficient techniques to improve their robustness to OOD scenarios is essential for the practical usability of PLMs of code (Zang et al., 2020). Previous work in this field has focused on designing OOD datasets that simulate specific distribution shifts of program data. Koh et al. (Koh et al., 2020) presented PY150-Wilds, a Python dataset in which the test data consists of code repositories not appearing in the training data. The authors demonstrated performance gaps between the model on ID and OOD data. However, it is important to note that while the design choice is sound, it may not reflect strong OOD phenomena as the distribution of code tokens across different repositories may still be highly similar. More recently, Hu et al. (Hu et al., 2020) proposed a benchmark to evaluate the performance of code models under different distribution shift scenarios, including programmer, time, or token distribution shifts. In their study, the authors found that PLMs such as CodeBERT were robust against distribution shifts. However, they demonstrated that on a simple classification task with small datasets. In addition, the authors did not control the pre-training data of the studied PLMs, which can result in important data leakage between the pre-training and OOD test data. This problem of data leakage is critical as some of the test data may have been seen by the model during pre-training. Overall, this is a prime threat to the validity of the OOD scenario that may lead to obtaining inflated metrics on the OOD test data. Finally, Hajipour et al. (Hajipour et al., 2020) analyzed the performance of PLMs of code on a syntax-based, semantic-based and complexity-based OOD scenario and highlighted that the models exhibit poor generalizability when faced with OOD samples. However, it is important to point out that the OOD scenarios used in this study may be too artificial. For instance, in the syntax-based scenario, some language-specific tokens are masked at training to study how the model generalizes to unseen language tokens. Such a scenario is unrealistic as it does not reflect the nature of OOD data that a PLM of code is likely to encounter in the real world. Additionally, there is no practical motivation for masking specific tokens while training the model.
In this study, we propose an OOD dataset that accurately represents the dynamic nature of software codebases in the real world. Specifically, we focus on the scenario where a PLM must adapt to new, unseen APIs over time, a well-established problem in the literature (Zang et al., 2019; Zhang et al., 2020). To ensure the validity of our experiments, we thoroughly control our PLM setup to prevent any data leakage between the pre-training, fine-tuning, and test data. This allows us to create an OOD generalization scenario that is as close to reality as possible, an aspect that has been overlooked in previous works.
### Continual learning for pre-trained language models
Continual learning has been studied to adapt pre-trained language models based on the Transformer architecture (Zang et al., 2020) to new domains or tasks in NLP. For example, Cao et al. (Cao et al., 2020) proposed a method to continually learn from new classes of events in textual data to detect them without degradation of the accuracy over time. Douillard et al. (Doillard et al., 2019) introduced DyTox, a method that utilizes an encoder-decoder transformer for multiple tasks by expanding the network with task-specific special tokens, allowing for continual learning of new tasks with a low computational and memory footprint. Ermis et al. (Ermis et al., 2020) proposed a memory-efficient approach for transformers to continually learn new tasks by sharing information across tasks and expanding the network with task-specific modules. Similarly, Vladymyrov et al. (Vladymyrov et al., 2020) proposed the HyperTransformer architecture to continually learn new tasks by generating task-specific convolutional neural network weights in a few-shot learning setting and updating the task-specific weights to avoid catastrophic forgetting. Lastly, Jie et al. (Jie et al., 2020) leverage continual learning to avoid representational shifts in PLMs by proposing a new hierarchical fine-tuning method that prevents excessive changes in the representation spaces of the neural network in a continual fine-tuning setting.
Recent advances in NLP highlight the crucial need for PLMs to adapt to changing environments and maintain their performance
on new data and tasks. In the field of software engineering, the application of continual learning to PLMs of code is essential for developing methods that enable the model to robustly adapt to new codebases and tasks over time. To the best of our knowledge, there are no existing studies that employ continual learning in this context. Our work breaks new ground by introducing the first continual learning scenario for PLMs of code to continuously learn from new out-of-distribution APIs over time.
## 7. Conclusion and Future Work
Our study exposes the limitations of pre-trained language models of code in handling out-of-distribution data in a continual fine-tuning scenario. Our results reveal that OOD data significantly decreases the PLMs' effectiveness in two API-related downstream tasks compared to ID data. Our findings indicate that classical transfer learning fails to adapt the PLMs to new, unseen APIs in this evolution scenario. Additionally, we observe instances of catastrophic forgetting, prompting us to explore methods that address this issue. In our final experiments, we demonstrate that replay-based and regularization-based continual learning techniques can effectively mitigate catastrophic forgetting while retaining or enhancing the performance of the PLMs in both downstream tasks. In future work, we intend to explore more OOD scenarios to further evaluate the generalizability of PLMs of code and develop relevant OOD generalization benchmarks for code. Additionally, we plan to implement more advanced continual learning methods tailored to source code to enhance the adaptability of PLMs of code. Finally, we aim to investigate OOD detection methods to automatically identify OOD data in PLMs, thereby improving their performance.
## Data Availability
We publicly release all the code, data and models to reproduce the experiments of our study. The following repository contains instructions on how to acquire the data and pre-train, fine-tune and test the PLMs: [https://anonymous.4open.science/r/cl4code-ood-apdis-2490/](https://anonymous.4open.science/r/cl4code-ood-apdis-2490/)
| Pre-trained language models (PLMs) have become a prevalent technique in deep learning for code, using a pre-training and fine-tuning procedure to acquire general knowledge about code and specialize in a variety of downstream tasks. However, the dynamic nature of software codebases poses a challenge to the effectiveness and robustness of PLMs. In particular, realistic real-world scenarios can lead to significant differences between the distribution of the pre-training and test data, i.e., a distribution shift, which results in degraded performance of PLMs on downstream tasks. In this paper, we stress the need to adapt PLMs of code to software data whose distribution changes over time, a crucial problem that has been overlooked in previous works. The motivation |
2307.05461 | Strictly $k$-colorable graphs | Xuding Zhu introduced a refined scale of choosability in 2020 and observed
that the four color theorem is tight on this scale. We formalize and explore
this idea of tightness in what we call strictly colorable graphs. We then
characterize all strictly colorable complete multipartite graphs. | Evan Leonard | 2023-07-11T17:48:42 | http://arxiv.org/abs/2307.05461v1 | # Strictly \(k\)-colorable graphs
###### Abstract
Zhu [7] introduced a refined scale of choosability in 2020 and observed that the four color theorem is tight on this scale. We formalize and explore this idea of tightness in what we call strictly colorable graphs. We then characterize all strictly colorable complete multipartite graphs.
## 1 Introduction
Vertex coloring is a widely studied area that comes in many variations. A proper vertex coloring of a graph \(G\) assigns colors to the vertices of \(G\) so that no two adjacent vertices have the same color. A graph is \(k\)-colorable if it can be properly colored with \(k\) colors. The chromatic number \(\chi(G)\) is the minimum \(k\) for which \(G\) is \(k\)-colorable. If \(\chi(G)=k\), we say \(G\) is \(k\)-chromatic.
List coloring (choosability) is a popular variation of proper vertex coloring introduced in 1976 by Vizing [5] and independently in 1979 by Erdos, Rubin, and Taylor [1]. A \(k\)-list-assignment (\(k\)-assignment) \(L\) of \(G\) assigns sets of \(k\) colors to the vertices of \(G\). \(G\) is \(L\)-colorable if \(L\) exhibits a proper coloring; that is, \(G\) can be properly colored where each vertex is assigned a color from its list in \(L\). \(G\) is \(k\)-choosable if it is \(L\)-colorable for all \(k\)-assignments \(L\). The choice number \(\operatorname{ch}(G)\) is the minimum \(k\) for which \(G\) is \(k\)-choosable.
Zhu [7] introduced another variation in 2020 which refines choosability into a hierarchy of integer partitions. In that initial paper, he uses this new system (summarized in the next section) to extend some list coloring results as well as make connections to the List Coloring Conjecture and to signed graph coloring problems.
One observation Zhu made was based on a result of Kemnitz and Voigt [6] which implies that the four color theorem is tight on his refined scale of list coloring. Without getting very technical yet, they found a planar graph which is only properly colorable with list assignments that are essentially equivalent to the setup of a normal proper vertex coloring. An example of such a list assignment would be every vertex having the same list of four colors.
This idea of tightness was not further explored. Here we formalize it as graphs which are "strictly colorable". We'll explore some general observations of the idea and ultimately characterize all strictly colorable complete multipartite graphs.
Summary of Zhu's Refinement
Zhu's refinement of choosability is built using integer partitions. An integer partition \(\lambda\) of a positive integer \(k\) is a multiset of positive integers whose sum is \(k\). For example, \(\lambda=\{1,1,2,3\}\) is an integer partition of \(k=7\). For the integer partition \(\lambda=\{k_{1},k_{2},\ldots,k_{t}\}\) of \(k\), a \(\lambda\)-assignment of a graph \(G\) is a \(k\)-assignment \(L\) of \(G\) where the colors in \(\bigcup_{v\in V(G)}L(v)\) can be partitioned into sets \(C_{1},C_{2},\ldots,C_{t}\) so that for each \(v\in V(G)\) and each \(i\in\{1,2,\ldots,t\}\), \(|L(v)\cap C_{i}|=k_{i}\). \(G\) is \(\lambda\)-choosable if every \(\lambda\)-assignment of \(G\) exhibits a proper coloring.
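As a small illustration (my own example, not one from Zhu's paper): take \(\lambda=\{1,2\}\) and \(G=K_{3}\) with lists \(L(v_{1})=\{1,2,5\}\), \(L(v_{2})=\{1,3,5\}\), and \(L(v_{3})=\{2,3,6\}\). Partitioning the colors into \(C_{1}=\{1,2,3\}\) and \(C_{2}=\{5,6\}\) gives \(|L(v)\cap C_{1}|=2\) and \(|L(v)\cap C_{2}|=1\) for every vertex \(v\), so \(L\) is a \(\lambda\)-assignment.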
Let \(\lambda\) and \(\lambda^{\prime}\) be integer partitions of \(k\). We say \(\lambda^{\prime}\) is a refinement of \(\lambda\) if \(\lambda^{\prime}\) is obtained by subdividing parts of \(\lambda\); e.g. \(\{1,1,3\}\) is a refinement of \(\{2,3\}\). It follows that if \(\lambda^{\prime}\) is a refinement of \(\lambda\), then every \(\lambda^{\prime}\)-assignment of a graph \(G\) is also a \(\lambda\)-assignment of \(G\). So, every \(\lambda\)-choosable graph is \(\lambda^{\prime}\)-choosable.
A note on notation: as mentioned earlier, we'll get to results about complete multipartite graphs. It's common notation in the literature to use \(K_{a*b}\) as the complete \(b\)-partite graph with parts of size \(a\). For example \(K_{3*5}=K_{3,3,3,3,3}\) and \(K_{5*3}=K_{5,5,5}\). I introduce this now because it is also very useful for integer partitions; e.g. we say \(\lambda=\{1*4,2\}=\{1,1,1,1,2\}\) is an integer partition of \(6\). While there are other conventions to convey multiplicity in integer partitions, we'll stick with this one for the sake of consistency with complete multipartite graphs.
Zhu pointed out the trivial fact that being \(\{k\}\)-choosable is equivalent to being \(k\)-choosable. He then proved the less obvious fact that being \(\{1*k\}\)-choosable is equivalent to being \(k\)-colorable. Thus \(\lambda\)-choosability conveniently houses \(k\)-colorability and \(k\)-choosability within the same framework. The integer partitions of \(k\), which are \(\{k\}\) and all refinements down to \(\{1*k\}\), reveal a complicated hierarchy of colorability.
Refinements allow us to compare partitions of the same integer. Zhu introduced a partial ordering of integer partitions which allows us to compare partitions of different integers; it goes as follows. Let \(\lambda\) and \(\lambda^{\prime}\) be integer partitions of \(k\) and \(k^{\prime}\) respectively where \(k\leq k^{\prime}\). We say \(\lambda\leq\lambda^{\prime}\) if \(\lambda^{\prime}\) is a refinement of an integer partition \(\lambda^{\prime\prime}\) of \(k^{\prime}\) obtained from \(\lambda\) by increasing parts of \(\lambda\). For example, \(\{3,3\}\leq\{1,1,2,4\}\); to see this, use the intermediate integer partition \(\{3,5\}\). Zhu then proved this important theorem.
**Theorem 2.1** (Zhu).: _Every \(\lambda\)-choosable graph is \(\lambda^{\prime}\)-choosable if and only if \(\lambda\leq\lambda^{\prime}\)._
Continuing our example, every \(\{3,3\}\)-choosable graph is \(\{1,1,2,4\}\)-choosable.
## 3 Strictly \(k\)-Colorable Graphs
Let's now give better context to Kemnitz and Voigt's result. They showed that there are planar graphs which are not \(\{1,1,2\}\)-choosable. That is to say, by the four color theorem there are planar graphs for which \(\lambda=\{1,1,1,1\}\) is the only integer partition of \(4\) for which they are \(\lambda\)-choosable (this is what was meant
by the phrase, "essentially equivalent to the setup of a normal proper vertex coloring," in the introduction).
Again, Zhu points out that this makes the four color theorem tight on his refined scale of choosability. This idea of graphs being \(\lambda\)-choosable strictly for \(\lambda=\{1*k\}\) and no other integer partitions of \(k\) was not further explored by Zhu beyond this example. Here we formalize the idea.
**Definition 1**.: A graph \(G\) is strictly \(k\)-colorable if the only integer partition \(\lambda\) of \(k\) for which \(G\) is \(\lambda\)-choosable is \(\lambda=\{1*k\}\).
Here is the motivation behind the chosen terminology. We say "strictly \(k\)-_colorable_" because being \(\{1*k\}\)-choosable is equivalent to being \(k\)-colorable. We say "_strictly_ \(k\)-colorable" because \(\{1*k\}\) is the only partition \(\lambda\) of \(k\) for which the graph is \(\lambda\)-choosable. The following observation provides a nice alternate definition of strict \(k\)-colorability.
**Observation 3.1**.: _A graph \(G\) is strictly \(k\)-colorable if and only if \(G\) is \(k\)-colorable and not \(\{1*(k-2),2\}\)-choosable._
Proof.: The forward implication follows from Definition 1. Let \(\lambda=\{1*(k-2),2\}\). Suppose \(G\) is \(k\)-colorable and not \(\lambda\)-choosable. Let \(\lambda^{\prime}\neq\{1*k\}\) be an integer partition of \(k\). Then \(\lambda\) is a refinement of \(\lambda^{\prime}\). Thus every \(\lambda\)-assignment of \(G\) is a \(\lambda^{\prime}\)-assignment of \(G\). Because \(G\) is not \(\lambda\)-choosable, \(G\) is not \(\lambda^{\prime}\)-choosable. Therefore, \(G\) is strictly \(k\)-colorable.
So to show that any given graph \(G\) is strictly \(k\)-colorable, it suffices to show that \(G\) is \(k\)-colorable and then find a \(\{1*(k-2),2\}\)-assignment for which \(G\) is not properly colorable. Here are two more interesting and helpful observations.
**Observation 3.2**.: _If \(G\) is strictly \(k\)-colorable, then \(\chi(G)=k\)._
Proof.: If \(\chi(G)>k\), then \(G\) is not \(k\)-colorable. Let \(\lambda=\{1*(k-1)\}\). Suppose \(G\) is \(\lambda\)-choosable [i.e. \(\chi(G)<k\)]. Let \(\lambda^{\prime}=\{1*(k-2),2\}\). Note \(\lambda\leq\lambda^{\prime}\). So \(G\) is \(\lambda^{\prime}\)-choosable, and therefore not strictly \(k\)-colorable.
This means there is at most one positive integer \(k\) for which a graph \(G\) can be strictly \(k\)-colorable, that is \(k=\chi(G)\). If \(G\) is not strictly \(\chi(G)\)-colorable, then \(G\) is not strictly \(k\)-colorable for any \(k\in\mathbb{N}\).
**Observation 3.3**.: _Suppose \(H\) is strictly \(k\)-colorable and \(H\subseteq G\). Then \(G\) is strictly \(k\)-colorable if and only if \(\chi(G)=k\)._
Proof.: The forward is already proven. Suppose \(\chi(G)=k\). Because \(H\) is not \(\{1*(k-2),2\}\)-choosable, neither is \(G\).
With this, if you'd like to show that some graph \(G\) is strictly \(k\)-colorable, it suffices to show that a subgraph \(H\subseteq G\) with \(\chi(H)=\chi(G)\) is strictly \(k\)-colorable. Put another way, if you show that some graph \(H\) is strictly \(k\)-colorable, then you get every \(k\)-chromatic graph containing \(H\) for free.
Some more can be said in general about the lowest values of \(k\). If \(G\) is strictly \(1\)-colorable, then \(G\) is an independent set. Because the only integer partition of \(1\) is \(\{1\}\), all independent sets are strictly \(1\)-colorable. If \(G\) is strictly \(2\)-colorable, then \(G\) is bipartite. The only two integer partitions of \(2\) are \(\{1,1\}\) and \(\{2\}\). Thus, a bipartite graph is strictly \(2\)-colorable if and only if it is not \(2\)-choosable. Erdos, Rubin, and Taylor [1] characterized all \(2\)-choosable graphs, so all strictly \(2\)-colorable graphs are characterized. As with many problems, the fun starts with \(k\geq 3\), so from here on, we will consider strict \(k\)-colorability only for \(k\geq 3\).
## 4 Strictly \(k\)-Colorable Complete \(k\)-Partite Graphs
Complete multipartite graphs are typically of interest when studying choosability. Their nice structure can lend to tidy results. Erdos, Rubin, and Taylor [1] proved that \(\operatorname{ch}(K_{2*k})=k\). \(K_{2*k}\) is called a chromatic-choosable graph since \(\chi(K_{2*k})=\operatorname{ch}(K_{2*k})\). This, of course, disqualifies it from being strictly \(k\)-colorable. In this context, strictly \(k\)-colorable graphs can be thought of as one end of the refinement spectrum and chromatic-choosable graphs as the opposite end.
Kierstead proved in 2000 [3] that \(\operatorname{ch}(K_{3*k})=\lceil(4k-1)/3\rceil\) and proved with Salmon and Wang in 2016 [4] that \(\operatorname{ch}(K_{4*k})=\lceil(3k-1)/2\rceil\). These graphs with tidy choice numbers are not chromatic-choosable. Might they be strictly \(k\)-colorable? Yes.
**Lemma 4.1**.: _Let \(k\geq 3\). \(K_{3*k}\) is strictly \(k\)-colorable._
Proof.: \(K_{3*k}\) is certainly \(k\)-colorable. Let \(\lambda_{k}=\{1*(k-2),2\}\). It suffices to show that \(K_{3*k}\) is not \(\lambda_{k}\)-choosable. Let \(V_{1},V_{2},\ldots,V_{k}\) be the partite sets of \(K_{3*k}\). Define \(L_{k}\) to be the following \(k\)-assignment:
\[\begin{array}{llll}L_{k}(V_{1})&L_{k}(V_{2})&&L_{k}(V_{k})\\ \{0,1\}\cup A&\{0,1\}\cup A&&\{0,1\}\cup A\\ \{0,2\}\cup A&\{0,2\}\cup A&\cdots&\{0,2\}\cup A\\ \{1,2\}\cup A&\{1,2\}\cup A&&\{1,2\}\cup A\\ \end{array}\]
Where \(A=\{3,\ldots,k\}\). Note the dots \((\dots)\) mean count up by \(1\) between \(3\) and \(k\). For example, if \(k=7\), the second vertex in \(V_{1}\) has the list assignment \(\{0,2,3,4,5,6,7\}\). Let \(C_{1}=\{0,1,2\}\) and \(C_{i}=\{i+1\}\) for \(2\leq i\leq k-1\). Then \(|L_{k}(v)\cap C_{1}|=2\) and \(|L_{k}(v)\cap C_{i}|=1\) for all \(v\in V(K_{3*k})\) and \(2\leq i\leq k-1\). Thus, \(L_{k}\) is a \(\lambda_{k}\)-assignment to \(K_{3*k}\).
There are \(k\) partite sets and \(k-1\) color groups. For \(2\leq i\leq k-1\), the colors of \(C_{i}\) can appear on at most \(1\) partite set of \(K_{3*k}\). The colors of \(C_{1}\) cannot fully color \(2\) partite sets. Hence, all the color groups together can fully color at most \(k-1\) partite sets simultaneously. So, \(K_{3*k}\) is not \(L_{k}\)-colorable, and therefore not \(\lambda_{k}\)-choosable. Therefore, \(K_{3*k}\) is strictly \(k\)-colorable.
By Observation 3.3, so is \(K_{4*k}\). It's worth noting that this lemma is also true for \(k=1,2\), but since independent sets and bipartite graphs are solved, we only care for \(k\geq 3\).
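To sanity-check constructions like the one in Lemma 4.1, it can help to verify the smallest case by computer. The snippet below is my own illustrative Python sketch (not part of Zhu's framework or of any proof here): it brute-forces list colorability of a complete multipartite graph, using the fact that two vertices conflict only when they lie in different partite sets, and confirms that the \(k=3\) instance of \(L_{k}\) admits no proper coloring.

```python
def multipartite_list_colorable(part_lists):
    """part_lists[i] holds the color lists of the vertices in partite set i.
    Returns True iff some proper coloring picks each vertex's color from its list.
    In a complete multipartite graph only vertices in *different* parts are adjacent,
    so a color may be reused freely within a part but never across two parts."""
    vertices = [(i, lst) for i, part in enumerate(part_lists) for lst in part]
    owner = {}  # color -> partite set currently using it

    def extend(idx):
        if idx == len(vertices):
            return True
        part, lst = vertices[idx]
        for c in lst:
            if c not in owner or owner[c] == part:
                newly_claimed = c not in owner
                owner[c] = part
                if extend(idx + 1):
                    return True
                if newly_claimed:
                    del owner[c]
        return False

    return extend(0)


# L_3 from the proof of Lemma 4.1: every partite set of K_{3*3} receives the
# lists {0,1} ∪ A, {0,2} ∪ A, {1,2} ∪ A with A = {3}.
part = [[0, 1, 3], [0, 2, 3], [1, 2, 3]]
print(multipartite_list_colorable([part, part, part]))  # False, as the proof shows
```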
It turns out that we can characterize all strictly \(k\)-colorable complete \(k\)-partite graphs. The strategy is to find a set of subgraphs which are strictly \(k\)-colorable. Then we will show it is necessary and sufficient for any strictly \(k\)-colorable complete \(k\)-partite graph to contain one of these subgraphs. There are three such subgraphs in total. Our first is the previously mentioned \(K_{3*k}\).
To introduce the remaining two, we'll make use of a result by Hoffman and Johnson [2]. They showed that there is a unique uncolorable \(m\)-assignment (up to relabeling) of \(K_{m,n}\) when \(n=m^{m}\). For \(K_{2,4}\) with partite sets \(V_{1}\) and \(V_{2}\), that unique assignment is \(L(V_{1})=\{\{1,2\},\{3,4\}\},L(V_{2})=\{\{1,3\},\{1,4\},\{2,3\},\{2,4\}\}\). We'll call this the "unique bad 2-assignment of \(K_{2,4}\)."
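Reusing the `multipartite_list_colorable` helper sketched after Lemma 4.1 (again, purely as an illustrative check), one can confirm by brute force that this assignment of \(K_{2,4}\) admits no proper coloring:

```python
V1 = [[1, 2], [3, 4]]
V2 = [[1, 3], [1, 4], [2, 3], [2, 4]]
print(multipartite_list_colorable([V1, V2]))  # False: the unique bad 2-assignment of K_{2,4}
```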
**Lemma 4.2**.: _Let \(k\geq 3\). \(K_{2,4,6*(k-2)}\) is strictly \(k\)-colorable._
Proof.: Let \(G_{k}=K_{2,4,6*(k-2)}\). \(G_{k}\) is certainly \(k\)-colorable. Let \(\lambda_{k}=\{1*(k-2),2\}\). It suffices to show that \(G_{k}\) is not \(\lambda_{k}\)-choosable. Let \(V_{1},V_{2},\ldots,V_{k}\) be the partite sets of \(G_{k}\) such that \(|V_{1}|=2\), \(|V_{2}|=4\), and \(|V_{i}|=6\) for \(3\leq i\leq k\). Define \(L_{k}\) to be the following \(k\)-assignment on \(G_{k}\):
\[\begin{array}{lllll}L_{k}(V_{1})&L_{k}(V_{2})&L_{k}(V_{3})&&L_{k}(V_{k})\\ \{1,2\}\cup A&\{1,3\}\cup A&\{1,3\}\cup A&&\{1,3\}\cup A\\ \{3,4\}\cup A&\{1,4\}\cup A&\{1,4\}\cup A&&\{1,4\}\cup A\\ &\{2,3\}\cup A&\{2,3\}\cup A&\cdots&\{2,3\}\cup A\\ &\{2,4\}\cup A&\{2,4\}\cup A&&\{2,4\}\cup A\\ &&\{1,2\}\cup A&&\{1,2\}\cup A\\ &&\{3,4\}\cup A&&\{3,4\}\cup A\\ \end{array}\]
Where \(A=\{5,\ldots,k+2\}\). Let \(C_{1}=\{1,2,3,4\}\) and let \(C_{i}=\{i+3\}\) for \(2\leq i\leq k-1\). Then \(|L_{k}(v)\cap C_{1}|=2\) and \(|L_{k}(v)\cap C_{i}|=1\) for all \(v\in V(G_{k})\) and \(2\leq i\leq k-1\). Thus, \(L_{k}\) is a \(\lambda_{k}\)-assignment to \(G_{k}\).
Notice there are \(k\) partite sets and \(k-1\) color groups. In a proper \(L_{k}\)-coloring of \(G_{k}\), each color group \(C_{i}\) where \(2\leq i\leq k-1\) can be seen on at most one partite set. This means in a proper \(L_{k}\)-coloring of \(G_{k}\), at least two partite sets must only see colors from \(C_{1}\). But between every pair of partite sets, their colors from \(C_{1}\) contains the unique bad 2-assignment of \(K_{2,4}\). Thus you can't completely color any pair of partite sets using only \(C_{1}\), so \(G_{k}\) is not \(L_{k}\)-colorable. Hence, \(G_{k}\) is not \(\lambda_{k}\)-choosable. Therefore, \(G_{k}\) is strictly \(k\)-colorable.
**Lemma 4.3**.: _Let \(k\geq 3\). \(K_{2,5*(k-1)}\) is strictly \(k\)-colorable._
Proof.: Let \(G_{k}=K_{2,5*(k-1)}\). \(G_{k}\) is certainly \(k\)-colorable. Let \(\lambda_{k}=\{1*(k-2),2\}\). It suffices to show that \(G_{k}\) is not \(\lambda_{k}\)-choosable. Let \(V_{1},V_{2},\ldots,V_{k}\) be the partite sets of \(G_{k}\) such that \(|V_{1}|=2\), and \(|V_{i}|=5\) for \(2\leq i\leq k\). Define \(L_{k}\) to be the
following \(k\)-assignment on \(G_{k}\):
\[\begin{array}{llll}L_{k}(V_{1})&L_{k}(V_{2})&&L_{k}(V_{k})\\ \{1,2\}\cup A&\{1,3\}\cup A&&\{1,3\}\cup A\\ \{3,4\}\cup A&\{1,4\}\cup A&&\{1,4\}\cup A\\ &\{2,3\}\cup A&\cdots&\{2,3\}\cup A\\ &\{2,4\}\cup A&&\{2,4\}\cup A\\ &\{1,2\}\cup A&&\{1,2\}\cup A\\ \end{array}\]
Where \(A=\{5,\ldots,k+2\}\). Let \(C_{1}=\{1,2,3,4\}\) and let \(C_{i}=\{i+3\}\) for \(2\leq i\leq k-1\). Then \(|L_{k}(v)\cap C_{1}|=2\) and \(|L_{k}(v)\cap C_{i}|=1\) for all \(v\in V(G_{k})\) and \(2\leq i\leq k-1\). Thus, \(L_{k}\) is a \(\lambda_{k}\)-assignment to \(G_{k}\).
Just as in the previous proofs, there are \(k\) partite sets and \(k-1\) color groups. The color groups of size \(1\) can together color at most \(k-2\) partite sets, leaving at least \(2\) partite sets left to be colored by \(C_{1}\). However, notice between any pair of partite sets, it is impossible for \(C_{1}\) to be used alone. Therefore, \(G_{k}\) is strictly \(k\)-colorable.
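As with Lemma 4.1, the smallest instances of these two constructions can be checked mechanically. The snippet below is again my own illustrative check, reusing the `multipartite_list_colorable` helper sketched after Lemma 4.1; it verifies the \(k=3\) assignments from the proofs of Lemmas 4.2 and 4.3, where \(A=\{5\}\).

```python
A = [5]
# Lemma 4.2 with k = 3: K_{2,4,6}, color groups C_1 = {1,2,3,4} and C_2 = {5}.
V1 = [[1, 2] + A, [3, 4] + A]
V2 = [[1, 3] + A, [1, 4] + A, [2, 3] + A, [2, 4] + A]
V3 = V2 + [[1, 2] + A, [3, 4] + A]
print(multipartite_list_colorable([V1, V2, V3]))  # False

# Lemma 4.3 with k = 3: K_{2,5,5}, same color groups.
W = V2 + [[1, 2] + A]
print(multipartite_list_colorable([V1, W, W]))    # False
```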
We'll use these three strictly \(k\)-colorable complete \(k\)-partite graphs to characterize all such graphs. Before getting to it, we'll make use of another way that colorability can be thought of in terms of integer partitions and how that relates to our current notion of \(\lambda\)-choosability.
**Definition 2**.: Let \(\lambda=\{k_{1},k_{2},\ldots,k_{t}\}\) be an integer partition of \(k\). A graph \(G\) is \(\lambda\)-partitionable if there exists a partition \(V_{1},V_{2},\ldots,V_{t}\) of the vertex set \(V(G)\) such that \(G[V_{i}]\) is \(k_{i}\)-choosable for \(1\leq i\leq t\). Such a partition of \(V(G)\) is called a \(\lambda\)-partition.
This idea of distinguishing \(\lambda\)-partitionability from \(\lambda\)-choosability was introduced to me by Greg Puleo in unpublished work he did along with Dan Cranston. Here is an observation of how the two ideas compare.
**Observation 4.4**.: _If \(G\) is \(\lambda\)-partitionable, then \(G\) is \(\lambda\)-choosable._
Proof.: Let \(\lambda=\{k_{1},\ldots,k_{t}\}\) and let \(V_{1},\ldots,V_{t}\) be a \(\lambda\)-partition of \(V(G)\). Let \(L\) be a \(\lambda\)-assignment of \(G\) with color groups \(C_{1},\ldots,C_{t}\). Because \(G[V_{i}]\) is \(k_{i}\)-choosable, the vertices of \(V_{i}\) can be properly colored with the colors assigned to it from \(C_{i}\). Because all \(C_{i}\) are disjoint, \(G\) is \(L\)-colorable and therefore \(\lambda\)-choosable since \(L\) is an arbitrary \(\lambda\)-assignment.
**Corollary 4.4.1**.: _If \(G\) is \(\{1*(k-2),2\}\)-partitionable, then \(G\) is not strictly \(k\)-colorable._
It's nice to note that this characterizes complete graphs for free.
**Corollary 4.4.2**.: _For \(n\geq 2\), \(K_{n}\) is not strictly \(n\)-colorable._
This idea of \(\lambda\)-partitionability is a quick way to show certain graphs are not strictly \(k\)-colorable. This will be utilized for our final theorem.
**Theorem 4.5**.: _Let \(k\geq 3\) and \(G_{k}\) be a complete \(k\)-partite graph. \(G_{k}\) is strictly \(k\)-colorable if and only if \(G_{k}\) contains at least one of \(K_{3*k}\), \(K_{2,4,6*(k-2)}\), or \(K_{2,5*(k-1)}\) as a subgraph._
Proof.: The backwards implication follows from Observation 3.3 and Lemmas 4.1, 4.2, and 4.3.
Let \(G_{k}=K_{a_{1},a_{2},\ldots,a_{k}}\) such that \(a_{1}\leq a_{2}\leq\cdots\leq a_{k}\). Let \(V_{1},\ldots,V_{k}\) be the partite sets of \(G_{k}\) such that \(|V_{i}|=a_{i}\) for \(1\leq i\leq k\). Let \(\lambda_{k}=\{1*(k-2),2\}\). Suppose \(G_{k}\) contains none of \(K_{3*k}\), \(K_{2,4,6*(k-2)}\), and \(K_{2,5*(k-1)}\) as a subgraph. Then we have the following two cases:
**Case 1:**\(a_{1}=1\), or \(a_{1}=2\) and \(a_{2}\leq 3\). In this case, \(G_{k}[V_{1}\cup V_{2}]\) is 2-choosable. So \(G_{k}\) is \(\lambda_{k}\)-partitionable, and hence not strictly \(k\)-colorable.
**Case 2:**\(a_{1}=2\), \(a_{2}=4\), and \(a_{3}\leq 5\). It suffices to only consider \(a_{3}=5\). Let \(L_{k}\) be a \(\lambda_{k}\)-assignment of \(G_{k}\) with color groups \(C_{1},C_{2},\ldots,C_{k-1}\) such that \(|L_{k}(v)\cap C_{1}|=2\) and \(|L_{k}(v)\cap C_{i}|=1\) for all \(v\in V(G_{k})\) and for \(2\leq i\leq(k-1)\). Color \(V_{i}\) with its colors from \(C_{i-1}\) for \(3\leq i\leq k\). If \(V_{1}\) and \(V_{2}\) can be properly colored with \(C_{1}\), then we're done. If not, \(C_{1}\) on \(V_{1}\) and \(V_{2}\) is the unique bad 2-assignment on \(K_{2,4}\).
\[\begin{array}{ll}L_{k}(V_{1})\cap C_{1}&L_{k}(V_{2})\cap C_{1}\\ \{1,2\}&\{1,3\}\\ \{3,4\}&\{1,4\}\\ &\{2,3\}\\ &\{2,4\}\\ \end{array}\]
Uncolor \(V_{3}\) with \(C_{2}\) and color \(V_{2}\) with \(C_{2}\). If \(V_{1}\) and \(V_{3}\) can't be colored with \(C_{1}\), then \(C_{1}\) on \(V_{1}\) and \(V_{3}\) contains the unique bad 2-assignment on \(K_{2,4}\).
\[\begin{array}{lll}L_{k}(V_{1})\cap C_{1}&L_{k}(V_{2})\cap C_{1}&L_{k}(V_{3})\cap C_{1}\\ \{1,2\}&\{1,3\}&\{1,3\}\\ \{3,4\}&\{1,4\}&\{1,4\}\\ &\{2,3\}&\{2,3\}\\ &\{2,4\}&\{2,4\}\\ &&\{a,b\}\\ \end{array}\]
Note that \(a\) and \(b\) are unknown colors from \(C_{1}\). Uncolor \(V_{2}\) with \(C_{2}\) and color \(V_{1}\) with \(C_{2}\). If \(1\in\{a,b\}\), then we can color \(V_{2}\) with \(3,4\) and \(V_{3}\) with \(1,2\). We can do the same if \(2\in\{a,b\}\). Otherwise, we can color \(V_{2}\) with \(1,2\) and \(V_{3}\) with \(3,4,a\) since \(a\neq 1,2\). Hence, \(G_{k}\) is \(\lambda_{k}\)-choosable, and therefore not strictly \(k\)-colorable.
## Acknowledgements
I thank my former advisor Greg Puleo for introducing me to Zhu's paper on \(\lambda\)-choosability. I thank my current advisor Pete Johnson for his guidance and helpful discussions. | |
2303.17592 | Learning Human-to-Robot Handovers from Point Clouds | We propose the first framework to learn control policies for vision-based
human-to-robot handovers, a critical task for human-robot interaction. While
research in Embodied AI has made significant progress in training robot agents
in simulated environments, interacting with humans remains challenging due to
the difficulties of simulating humans. Fortunately, recent research has
developed realistic simulated environments for human-to-robot handovers.
Leveraging this result, we introduce a method that is trained with a
human-in-the-loop via a two-stage teacher-student framework that uses motion
and grasp planning, reinforcement learning, and self-supervision. We show
significant performance gains over baselines on a simulation benchmark,
sim-to-sim transfer and sim-to-real transfer. | Sammy Christen, Wei Yang, Claudia Pérez-D'Arpino, Otmar Hilliges, Dieter Fox, Yu-Wei Chao | 2023-03-30T17:58:36 | http://arxiv.org/abs/2303.17592v1 | # Learning Human-to-Robot Handovers from Point Clouds
###### Abstract
We propose the first framework to learn control policies for vision-based human-to-robot handovers, a critical task for human-robot interaction. While research in Embodied AI has made significant progress in training robot agents in simulated environments, interacting with humans remains challenging due to the difficulties of simulating humans. Fortunately, recent research has developed realistic simulated environments for human-to-robot handovers. Leveraging this result, we introduce a method that is trained with a human-in-the-loop via a two-stage teacher-student framework that uses motion and grasp planning, reinforcement learning, and self-supervision. We show significant performance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer. Video and code are available at [https://handover-sim2real.github.io](https://handover-sim2real.github.io).
## 1 Introduction
Handing over objects between humans and robots is an important task for human-robot interaction (HRI) [41]. It allows robots to assist humans in daily collaborative activities, such as helping to prepare a meal, or to exchange tools and parts with human collaborators in manufacturing settings. To complete these tasks successfully and safely, intricate coordination between human and robot is required. This is challenging, because the robot has to react to human behavior, while only having access to sparse sensory inputs such as a single camera with limited field of view. Therefore, a need for methods that solve interactive tasks such as handovers purely from vision input arises.
Bootstrapping robot training in the real world can be unsafe and time-consuming. Therefore, recent trends in Embodied AI have focused on training agents to act and interact in simulated (sim) environments [13, 14, 54, 51, 53, 54, 61]. With advances in rendering and physics simulation, models have been trained to map raw sensory input to action output, and can even be directly transferred from simulation to the real world [2, 50]. Many successes have been achieved particularly around the suite of tasks of robot navigation, manipulation, or a combination of both. In contrast to these areas, little progress has been made around tasks pertaining to HRI. This is largely hindered by the challenges in embedding realistic human agents in these environments, since
modeling and simulating realistic humans is challenging.
Despite the challenges, an increasing number of works have attempted to embed realistic human agents in simulated environments [18, 42, 43, 7, 58, 44, 11, 19]. Notably, a recent work has introduced a simulation environment ("HandoverSim") for human-to-robot (H2R) handovers [7]. To ensure a realistic human handover motion, they use a large motion capture dataset [8] to drive the movements of a virtual human in simulation. However, despite the great potential for training robots, the work of [7] only evaluates off-the-shelf models from prior work, and has not explored any policy training with humans in the loop in their environment.
We aim to close this gap by introducing a vision-based learning framework for H2R handovers that is trained with a human-in-the-loop (see Fig. 1). In particular, we propose a novel mixed imitation learning (IL) and reinforcement learning (RL) based approach, trained by interacting with the humans in HandoverSim. Our approach draws inspiration from a recent method for learning polices for grasping static objects from point clouds [60], but proposes several key changes to address the challenges in H2R handovers. In contrast to static object grasping, where the policy only requires object information, we additionally encode human hand information in the policy's input. Also, compared to static grasping without a human, we explicitly take human collisions into account in the supervision of training. Finally, the key distinction between static object grasping and handovers is the dynamic nature of the hand and object during handover. To excel on the task, the robot needs to react to dynamic human behavior. Prior work typically relies on open-loop motion planners [59] to generate expert demonstrations, which may result in suboptimal supervision for dynamic cases. To this end, we propose a two-stage training framework. In the first stage, we fix the humans to be stationary and train an RL policy that is partially guided by expert demonstrations obtained from a motion and grasp planner. In the second stage, we finetune the RL policy in the original dynamic setting where the human and robot move simultaneously. Instead of relying on a planner, we propose a self-supervision scheme, where the pre-trained RL policy serves as a teacher to the downstream policy.
We evaluate our method in three "worlds" (see Fig. 1). First, we evaluate on the "native" test scenes in HandoverSim [7], which use the same backend physics simulator (Bullet [12]) as training but unseen handover motions from the simulated humans. Next, we perform sim-to-sim evaluation on the test scenes implemented with a different physics simulator (Isaac Gym [35]). Lastly, we investigate sim-to-real transfer by evaluating policies on a real robotic system and demonstrate the benefits of our method.
We contribute: i) the first framework to train human-to-robot handover tasks from vision input with a human-in-the-loop, ii) a novel teacher-student method to train in the setting of a jointly moving human and robot, iii) an empirical evaluation showing that our approach outperforms baselines on the HandoverSim benchmark, iv) transfer experiments indicating that our method leads to more robust sim-to-sim and sim-to-real transfer compared to baselines.
## 2 Related Work
**Human-to-Robot Handovers** Encouraging progress in hand and object pose estimation [32, 33, 26] has been achieved, aided by the introduction of large hand-object interaction datasets [6, 24, 23, 20, 34, 38, 55, 66, 8]. These developments enable applying model-based grasp planning [4, 5, 37], a well-studied approach in which full pose estimation and tracking are needed, to H2R handovers [8, 49]. However, these methods require the 3D shape models of the object and cannot handle unseen objects. Alternatively, some recent works [63, 64, 47, 15, 44] achieve H2R handover by employing learning-based grasp planners to generate grasps for novel objects from raw vision inputs such as images or point clouds [39, 40]. While promising results have been shown, these methods work only in an open-loop sequential setting in which the human hand has to stay still once the robot starts to move [47], or need complex hand-designed cost functions for grasp selection [63] and robot motion planning [36, 64] for reactive handovers, which requires expertise in robot motion and control. Hence, these methods are difficult to reproduce and deploy to new environments. Progress towards dynamic simultaneous motion has been shown by a learning-based method [58], using state inputs, leaving an open challenge for training policies that receive visual input directly. In contrast, we propose to learn control policies together with grasp prediction for handovers in an end-to-end manner from segmented point clouds with a deep neural net. To facilitate easy and fair comparisons among different handover methods, [7] propose a physics-simulated environment with diverse objects and realistic human handover behavior collected by a mocap system [8]. They provide benchmark results of several previous handover systems, including a learning-based grasping policy trained with static objects [60]. However, learning a safe and efficient handover policy is not trivial with a human-in-the-loop, which we address in this work.
**Policy Learning for Grasping** Object grasping is an essential skill for many robot tasks, including handovers. Prior works usually generate grasp poses given a known 3D object geometry such as object shape or pose [4, 5, 37], which is nontrivial to obtain from real-world sensory input such as images or point clouds. To overcome this, recent works train deep neural networks to predict grasps from sensor data [30] and compute trajectories to reach the predicted grasp pose. Though 3D object geometry is no longer needed, the feasibility is not guaranteed since the grasp
prediction and trajectory planning are computed separately. Some recent works directly learn grasping policies given raw sensor data. [28] propose a self-supervised RL framework based on RGB images to learn a deep Q-function from real-world grasps. To improve data efficiency, [52] use a low-cost handheld device to collect grasping demonstrations with a wrist-mounted camera. They train an RL-based 6-DoF closed-loop grasping policy with these demonstrations. [60] combines imitation learning from expert data with RL to learn a control policy for object grasping from point clouds. Although this method performs well in HandoverSim [7] when the human hand is not moving, it has difficulty coordinating with a dynamic human hand since the policy is learned with static objects. Instead, our policy is directly learned from large-scale dynamic hand-object trajectories obtained from the real world. To facilitate the training for the dynamic case, we propose a two-stage teacher-student framework, that is conceptually inspired by [9], which has been proven critical through experiments.
## 3 Background
### Reinforcement Learning
**MDP** We formalize RL as a Markov Decision Process (MDP) that consists of a 5-tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{T},\gamma)\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) the action space, \(\mathcal{R}\) a scalar reward function, \(\mathcal{T}\) a transition function that maps state-action pairs to distributions over states, and \(\gamma\) a discount factor. The goal is to find a policy that maximizes the long-term reward: \(\boldsymbol{\pi}^{*}=\operatorname*{arg\,max}_{\boldsymbol{\pi}}\mathbb{E}\sum_{t=0}^{t=T}\gamma^{t}\mathcal{R}(\mathbf{s}_{t})\), with \(\mathbf{s}_{t}\sim\mathcal{T}(\mathbf{s}_{t-1},\mathbf{a}_{t-1})\) and \(\mathbf{a}_{t-1}\sim\boldsymbol{\pi}(\mathbf{s}_{t-1})\).
**Learning Algorithm** In this work, we use TD3 [21], a common algorithm for continuous control. It is an actor-critic method, which consists of a policy \(\boldsymbol{\pi}_{\theta}(\mathbf{s})\) (actor) and a Q-function approximator \(Q_{\phi}(\mathbf{s},\mathbf{a})\) (critic) that predicts the expected return from a state-action pair. Both are represented by neural networks with parameters \(\theta\) and \(\phi\). TD3 is off-policy, and hence there is a replay buffer in which training transitions are stored. During training, both the actor and critic are updated using samples from the buffer. To update the critic, we minimize the Bellman error:
\[L_{\text{BE}}(\phi)=\mathbb{E}_{\mathcal{M}}\Big{[}\big{(}Q_{\phi}(\mathbf{s}_{t},\mathbf{a}_{t})-\big{(}r(\mathbf{s}_{t},\mathbf{a}_{t})+\gamma Q_{\phi}(\mathbf{s}_{t+1},\mathbf{a}_{t+1})\big{)}\big{)}^{2}\Big{]} \tag{1}\]
For the actor network, the policy parameters are trained to maximize the Q-values:
\[L_{\text{DDPG}}(\theta)=\mathbb{E}_{\boldsymbol{\pi}}\left[Q_{\phi}(\mathbf{s }_{t},\mathbf{a}_{t})|\mathbf{s}_{t},\mathbf{a}_{t}=\boldsymbol{\pi}_{\theta} (\mathbf{s}_{t})\right] \tag{2}\]
For more details, we refer the reader to [21].
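As an illustrative sketch only (the variable names are assumptions, and TD3-specific details such as the twin critics, target networks, and target-policy smoothing of [21] are omitted), the critic update of Eq. (1) can be written as:

```python
import torch
import torch.nn.functional as F

def bellman_error_loss(q_net, q_target_net, target_policy, batch, gamma=0.99):
    """One critic step: regress Q(s_t, a_t) onto r + gamma * Q(s_{t+1}, a_{t+1});
    a terminal mask is included, as is standard practice."""
    s, a, r, s_next, done = batch          # tensors sampled from the replay buffer
    with torch.no_grad():
        a_next = target_policy(s_next)
        td_target = r + gamma * (1.0 - done) * q_target_net(s_next, a_next)
    return F.mse_loss(q_net(s, a), td_target)
```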
### HandoverSim Benchmark
HandoverSim [7] is a benchmark for evaluating H2R handover policies in simulation. The task setting consists of a tabletop with different objects, a Panda 7DoF robotic arm with a gripper and a wrist-mounted RGB-D camera, and a simulated human hand. The task starts with the human grasping an object and moving it to a handover pose. The robot should move to the object and grasp it. The task is successful if the object has been grasped from the human without collision and brought to a designated position without dropping. To accurately model the human, trajectories from the DexYCB dataset [8], which comprises a large amount of human-object interaction sequences, are replayed in simulation. Several baselines [59, 60, 63] are provided for comparison. The setup in HandoverSim has only been used for handover performance evaluation purposes, whereas in this work we utilize it as a learning environment.
## 4 Method
The overall pipeline is depicted in Fig. 2 and consists of three different modules: perception, vision-based control, and the handover environment. The perception module receives egocentric visual information from the handover environment and processes it into segmented point clouds. The vision-based control module receives the point clouds and predicts the next action for the robot and whether to approach or to grasp the object. This information is passed to the handover environment, which updates the robot state and sends the new visual information to the perception module. Note that the input to our method comes from the wrist-mounted camera, i.e., there is no explicit information, such as object or hand pose, provided to the agent. We will now explain each of the modules of our method in more detail.
### Handover Environment
We split the handover task into two distinct phases (see Fig. 2). First, during the _approaching phase_, the robot moves to a pre-grasp pose that is close to the object by running the learned control policy \(\boldsymbol{\pi}\). A learned grasp predictor \(\boldsymbol{\sigma}\) continuously computes a grasp probability to determine when the system can proceed to the second phase. Once the pre-grasp pose is reached and the grasp prediction is confident to take over the object from the human, the task will switch to the _grasping phase_, in which the end-effector moves forward to the final grasp pose in open-loop fashion and closes the gripper to grasp the object. Finally, after object grasping, the robot follows a predetermined trajectory to retract to a base position and complete the episode. This task logic is used in both our simulation environment and the real robot deployment. Sequencing based on a pre-grasp pose is widely used in literature for dynamic grasping [1].
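A compact sketch of this two-phase task logic is given below; the interfaces (`env`, `encoder`, etc.) are hypothetical placeholders rather than the actual HandoverSim API, and the threshold value is an assumption.

```python
def run_handover_episode(env, encoder, policy, grasp_predictor, threshold=0.9):
    """Approach with the closed-loop policy, then switch to open-loop grasping
    once a pre-grasp pose is reached and the grasp predictor is confident."""
    obs = env.reset()
    while not obs.done:
        psi = encoder(obs.hand_object_point_cloud)      # PointNet++ embedding
        if obs.at_pregrasp_pose and grasp_predictor(psi) > threshold:
            env.move_forward_and_close_gripper()         # grasping phase (open loop)
            env.retract_to_base()                        # predetermined retract motion
            break
        obs = env.step(policy(psi))                      # approaching phase action
    return env.task_success()
```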
We follow the HandoverSim task setup [7], where the human hand and objects are simulated by replaying data from the DexYCB dataset [8] (see Sec. 3.2). First, actions \(\mathbf{a}\) in the form of the next 6DoF end-effector pose (translation and rotation) are received from the policy \(\boldsymbol{\pi}(\mathbf{a}|\mathbf{s})\). We then convert the end-effector pose into a target robot configuration using inverse kinematics. Thereafter, we use PD-controllers to compute torques, which are applied to the robot. Finally, the visual information is rendered from the robot's wrist-mounted RGB-D camera and sent to the perception module.
### Perception
Our policy network takes a segmented hand and object point cloud as input. In the handover environment, we first render an egocentric RGB-D image from the wrist camera. Then we obtain the object point cloud \(\mathbf{p}_{o}\) and hand point cloud \(\mathbf{p}_{h}\) by overlaying the ground-truth segmentation mask with the RGB-D image. Since the hand and object may not always be visible from the current egocentric view, we keep track of the last available point clouds. The latest available point clouds are then sent to the control module.
### Vision-Based Control
**Input Representation** Depending on the number of points contained in the hand point cloud \(\mathbf{p}_{h}\) and object point cloud \(\mathbf{p}_{o}\), we down- or upsample them to a constant size. Next, we concatenate the two point clouds into a single point cloud \(\mathbf{p}\) and add two one-hot-encoded vectors to indicate the locations of object and hand points within \(\mathbf{p}\). We then encode the point cloud into a lower dimensional representation \(\psi(\mathbf{p})\) by passing it through PointNet++ [45]. Finally, the lower dimensional encoding \(\psi(\mathbf{p})\) is passed on to the control policy \(\boldsymbol{\pi}\) and the grasp prediction network \(\boldsymbol{\sigma}\).
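The following NumPy sketch illustrates this input construction; the point counts and the use of random resampling are assumptions for illustration, not values taken from our implementation.

```python
import numpy as np

N_OBJ, N_HAND = 512, 256   # assumed target sizes

def resample(points, n):
    """Down- or upsample an (m, 3) point cloud to exactly n points."""
    if len(points) == 0:
        return np.zeros((n, 3))
    idx = np.random.choice(len(points), size=n, replace=len(points) < n)
    return points[idx]

def build_policy_input(p_obj, p_hand):
    """Concatenate object and hand points and append two one-hot columns that
    mark which rows belong to the object and which to the hand."""
    p_obj = resample(np.asarray(p_obj), N_OBJ)
    p_hand = resample(np.asarray(p_hand), N_HAND)
    xyz = np.concatenate([p_obj, p_hand], axis=0)        # (N_OBJ + N_HAND, 3)
    onehot = np.zeros((N_OBJ + N_HAND, 2))
    onehot[:N_OBJ, 0] = 1.0                              # object indicator
    onehot[N_OBJ:, 1] = 1.0                              # hand indicator
    return np.concatenate([xyz, onehot], axis=1)         # (N, 5), fed to PointNet++
```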
**Control Policy** The policy network \(\boldsymbol{\pi}(\mathbf{a}|\psi(\mathbf{p}))\) is a small, two-layered MLP that takes the PointNet++ embedding as input state (\(\mathbf{s}=\psi(\mathbf{p})\)) and predicts actions \(\mathbf{a}\) that correspond to the change in 6DoF end-effector pose. These are passed on to the handover environment.
**Grasp Prediction** We introduce a grasp prediction network \(\boldsymbol{\sigma}(\psi(\mathbf{p}))\) that predicts when the robot should switch from approaching to executing the grasping motion (cf. Fig. 2). We model grasp prediction as a binary classification task. The input corresponds to the PointNet++ embedding \(\psi(\mathbf{p})\), which is fed through a 3-layered MLP. The output is a probability that indicates the likelihood of a successful grasp given the current point cloud feature. If the probability is above a tunable threshold, we execute an open-loop grasping motion. The model is trained offline with pre-grasp poses obtained from [17]. We augment the dataset by adding random noise to the pre-grasp poses. To determine the labels, we initialize the robot with the pre-grasp poses in the physics simulation and execute the forward grasping motion. The label is one if the grasp is successful, and zero otherwise. We use a binary cross-entropy loss for training.
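A possible PyTorch sketch of such a grasp-prediction head and its BCE training step is shown below; the layer sizes and learning rate are assumptions, not values from our implementation.

```python
import torch
import torch.nn as nn

class GraspPredictor(nn.Module):
    """3-layer MLP mapping the PointNet++ embedding to a grasp-success probability."""
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, psi_p):                    # psi_p: (B, feat_dim)
        return torch.sigmoid(self.net(psi_p)).squeeze(-1)

model = GraspPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCELoss()

def train_step(psi_batch, success_labels):
    """One offline training step on (embedding, grasp-success label) pairs."""
    prob = model(psi_batch)
    loss = criterion(prob, success_labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```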
### Two-Stage Teacher-Student Training
We aim at training a handover policy capable of moving simultaneously with the human. Training this policy directly in the setting of dynamic motion is challenging because expert demonstrations with open-loop planners to guide training can only be obtained when the human is stationary. A key contribution of our work is a two-stage training scheme for handovers that incrementally trains the policy to alleviate this challenge. In the first stage, we pretrain
Figure 2: **Method Overview**. The **Perception** module takes egocentric RGB-D and segmentation images from the environment and outputs a hand/object segmented point cloud. Next, the segmented point cloud is passed to the the **Vision-based Control** module and processed by PointNet++ [45] to obtain a lower-dimensional representation. This embedding is used as input to both the control policy and the grasp predictor. Each task episode in the **Handover Environment** follows two phases: during the approaching phase, the robot moves towards a pre-grasp pose, driven by the control policy \(\boldsymbol{\pi}\) that outputs end-effector actions \(\mathbf{a}\). A learned grasp predictor monitors the motion and determines when the robot should switch into the grasping phase, which follows the steps: 1. moving the gripper forward from a pre-grasp to a grasping pose 2. closing the gripper 3. retracting the object to a designated location, after which the episode ends.
in a setting where the robot only starts moving once the human has stopped (**sequential**). This pretrained policy is further finetuned in the second stage in which the human and robot move simultaneously (**simultaneous**).
**Pretraining in Sequential Setting** In the sequential setting, the robot starts moving once the human has come to a stop (see Fig. 3, top left). To grasp the object from the stationary human hand, we leverage motion planning to provide expert demonstrations. During data collection, we alternate between motion planning and RL-based exploration. In both cases, we store the transitions \(\mathbf{d}_{t}=\{\mathbf{p}_{t},\mathbf{a}_{t},\mathbf{g}_{t},\mathbf{r}_{t},\mathbf{p}_{t+1},\mathbf{e}_{t}\}\) in a replay buffer \(\mathcal{D}\), from which we sample during network training. The terms \(\mathbf{p}_{t}\) and \(\mathbf{p}_{t+1}\) denote the point cloud and the next point cloud, \(\mathbf{a}_{t}\) the action, \(\mathbf{g}_{t}\) the pre-grasp goal pose, \(\mathbf{r}_{t}\) the reward, and \(\mathbf{e}_{t}\) an indicator of whether the transition is from the expert.
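The transition layout above maps directly onto a simple replay buffer; the sketch below is an illustrative minimal implementation, not our actual code.

```python
import random
from collections import deque, namedtuple

# d_t = {p_t, a_t, g_t, r_t, p_{t+1}, e_t}: point cloud, action, pre-grasp goal pose,
# reward, next point cloud, and a flag marking transitions that come from the expert.
Transition = namedtuple("Transition", ["p", "a", "g", "r", "p_next", "is_expert"])

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, p, a, g, r, p_next, is_expert):
        self.buffer.append(Transition(p, a, g, r, p_next, is_expert))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)
```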
Inspired by [60], we collect expert trajectories with the OMG planner [59] that leverages ground-truth states. Note that some expert trajectories generated by the planner result in collision with the hand, which is why we introduce an offline pre-filtering scheme. We first parse the ACRONYM dataset [16] for potential grasps. We then run collision checking to filter out grasps where the robot and human hand collide. For the set of remaining collision-free grasps, we plan trajectories to grasp the object and execute them in open-loop fashion. On the other hand, the RL policy \(\boldsymbol{\pi}_{\text{pre}}\) explores the environment and receives a sparse reward, i.e., the reward is one if the task is completed successfully, otherwise zero. Hence, collisions with the human will get implicitly penalized by not receiving any positive reward.
**Finetuning in Simultaneous Setting** In this setting, the human and robot move at the same time. Hence, we cannot rely on motion and grasp planning to guide the policy. On the other hand, simply taking the pre-trained policy \(\boldsymbol{\pi}_{\text{pre}}\) from the sequential setting and continuing to train it without an expert leads to an immediate drop in performance. Hence, we introduce a self-supervision scheme for stability reasons, i.e., we want to keep the finetuning policy close to the pre-trained policy. To this end, we replace the expert planner from the sequential setting by an expert policy \(\boldsymbol{\pi}_{\text{exp}}\), which is initialized with the weights of the pre-trained policy \(\boldsymbol{\pi}_{\text{pre}}\) and thus already provides a reasonable prior policy (see Fig. 3, bottom left). Therefore, we have two policies: i) the expert policy \(\boldsymbol{\pi}_{\text{exp}}\), which serves as a proxy for the motion and grasp planner and whose network weights are frozen, and ii) the finetuning policy \(\boldsymbol{\pi}_{\star}\) and critic \(\boldsymbol{Q}_{\star}\), which are initialized with the weights of the pre-trained policy \(\boldsymbol{\pi}_{\text{pre}}\) and critic \(\boldsymbol{Q}_{\text{pre}}\), respectively. We proceed to train these two networks using the loss functions described next.
**Network Training** During training, we sample a batch of random transitions from the replay buffer \(\mathcal{D}\). The policy network is trained using a combination of behavior cloning, RL-based losses and an auxiliary objective. In particular, the policy is updated using the following loss function:
\[L(\theta)=\lambda L_{\text{BC}}+(1-\lambda)L_{\text{DDPG}}+L_{\text{AUX}}, \tag{3}\]
Figure 3: **Training Procedure**. In the **pretraining stage** (top left box), the human hand is stationary. We alternate between collecting expert demonstrations via motion planning and exploration data with the RL policy \(\boldsymbol{\pi}_{\text{pre}}\). Transitions \(\mathbf{d}\) are stored in a replay buffer \(\mathcal{D}\). During training (green box, right), a batch of randomly sampled transitions from the replay buffer is passed through PointNet++ and the actor and critic networks. In the **finetuning stage** (bottom left box), the human and robot move concurrently. The expert motion planner is replaced by the expert policy \(\boldsymbol{\pi}_{\text{exp}}\), which shares the weights of the pretrained policy \(\boldsymbol{\pi}_{\text{pre}}\). This policy network will be kept frozen for the rest of training and serves as a regularizer for the RL agent. The RL agent’s actor network \(\boldsymbol{\pi}_{\star}\) and critic network \(\boldsymbol{Q}_{\star}\) are also initialized with the weights of pretrained agent’s networks, but the model will be updated during finetuning. In this stage, transitions are stored in a new replay buffer \(\mathcal{D}_{\star}\). Data is sampled solely from this buffer during finetuning.
where \(L_{\text{BC}}\) is a behavior cloning loss that keeps the policy close to the expert policy, \(L_{\text{DDPG}}\) is the standard actor-critic loss described in Eq. 2, and \(L_{\text{AUX}}\) is an auxiliary objective that predicts the grasping goal pose of the end-effector. The coefficient \(\lambda\) balances the behavior cloning and the RL objective. The critic loss is defined as:
\[L(\phi)=L_{\text{BE}}+L_{\text{AUX}}, \tag{4}\]
where \(L_{\text{BE}}\) indicates the Bellman error from Eq. 1 and \(L_{\text{AUX}}\) is the same auxiliary loss used in Eq. 3. We refer the reader to supplementary material or [60] for more details.
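For concreteness, the composite objectives of Eqs. (3) and (4) can be sketched as below; the concrete forms chosen for \(L_{\text{BC}}\) and \(L_{\text{AUX}}\) (mean-squared errors) and the value of \(\lambda\) are simplifying assumptions in the spirit of GA-DDPG [60], not our exact implementation.

```python
import torch.nn.functional as F

def actor_loss(pi_action, expert_action, q_of_pi, goal_pred, goal_gt,
               expert_mask, lam=0.5):
    """Eq. (3): lambda * L_BC + (1 - lambda) * L_DDPG + L_AUX."""
    per_sample_bc = F.mse_loss(pi_action, expert_action, reduction="none").mean(dim=-1)
    l_bc = (per_sample_bc * expert_mask).sum() / expert_mask.sum().clamp(min=1.0)
    l_ddpg = -q_of_pi.mean()                  # maximize Q by minimizing -Q
    l_aux = F.mse_loss(goal_pred, goal_gt)    # auxiliary grasp-goal prediction
    return lam * l_bc + (1.0 - lam) * l_ddpg + l_aux

def critic_loss(q_pred, td_target, goal_pred, goal_gt):
    """Eq. (4): Bellman error plus the same auxiliary objective."""
    return F.mse_loss(q_pred, td_target) + F.mse_loss(goal_pred, goal_gt)
```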
## 5 Experiments
We first evaluate our approach in simulation using the HandoverSim benchmark (Sec. 5.1). Next, we investigate the performance of sim-to-sim transfer by evaluating the trained models on the test environments powered by a different physics engine (Sec. 5.2). Finally, we apply the trained model to a real-world robotic system and analyze the performance of sim-to-real transfer (Sec. 5.3).
### Simulation Evaluation
**Setup** HandoverSim [7] contains 1,000 unique H2R handover scenes divided into train, val, and test splits. Each scene contains a unique human handover motion. We evaluate on the "s0" setup which contains 720 training and 144 testing scenes. See the supp. material for evaluations on unseen objects, subjects, and handedness. Following the evaluation of GA-DDPG [60] in [7], we consider two settings: (1) the "sequential" setting where the robot is allowed to move only after the human hand reaches the handover location and remains static there (i.e., "hold" in [7]), and (2) the "simultaneous" setting where the robot is allowed to move from the beginning of the episode (i.e., "w/o hold" in [7]).
**Metrics** We follow the evaluation protocol in HandoverSim [7]. A handover is considered successful if the robot grasps the object from the human hand and moves it to a designated location. A failure is claimed and the episode is terminated if any of the following three conditions occur: (1) the robot collides with the hand (_contact_), (2) the robot drops the object (_drop_), or (3) a maximum time limit is reached (_timeout_). Besides efficacy, the benchmark also reports efficiency in time. The time metric is further broken down into (1) the execution time (_exec_), i.e., the time to physically move the robot, and (2) the planning time (_plan_), i.e., the time spent on running the policy. All reported metrics are averaged over the rollouts on the test scenes.
**Baselines** Our primary baseline is GA-DDPG [60]. Besides comparing with the original model (i.e., trained in [60] for table-top grasping and evaluated in [7]), we additionally compare with a variant finetuned on HandoverSim ("GA-DDPG [60] finetuned"). For completeness, we also include two other baselines from [7]: "OMG Planner [59]" and "Yang et al. [63]". However, both of them are evaluated with ground-truth state input in [7] and thus are not directly comparable with our method.
**Results** Tab. 1 reports the evaluation results on the test scenes. In the sequential setting, our method significantly outperforms all the baselines in terms of success rate, even compared to methods that use state-based input. Our method is slightly slower on average than GA-DDPG in terms of total time needed for handovers. In the simultaneous setting, our method clearly outperforms GA-DDPG, which has low success rates. Qualitatively, we observe that GA-DDPG directly tries to grasp the object from the user while it is still moving, whereas our method follows the hand and finds a feasible grasp once the hand has come to a stop, resulting in a trade-off in the overall execution time. We provide a qualitative example of this behavior in Fig. 4 (a) and in the supplementary video. We also refer to the supp. material for a discussion of limitations and a robustness analysis of our pipeline under noisy observations.
**Ablations** We evaluate our design choices in an ablation study and report the results in Tab. 2. We analyze the vision backbone by replacing PointNet++ with a ResNet18 [27]
\begin{table}
\begin{tabular}{l|l|c|c c c|c c c c} \hline \hline & & success (\%) & \multicolumn{3}{c|}{mean accum time (s)} & \multicolumn{4}{c}{failure (\%)} \\ & & & exec & plan & total & contact & drop & timeout & total \\ \hline \multirow{5}{*}{Sequential} & OMG Planner [59]\(\dagger\) & 62.50 & 8.309 & 1.414 & 9.722 & 27.78 & 8.33 & 1.39 & 37.50 \\ & Yang et al. [63]\(\ddagger\) & 64.58 & 4.864 & 0.036 & 4.900 & 17.36 & 11.81 & 6.25 & 35.42 \\ \cline{2-10} & GA-DDPG [60] & 50.00 & 7.139 & 0.142 & 7.281 & **4.86** & 19.44 & 25.69 & 50.00 \\ & GA-DDPG [60] finetuned & 57.18 & **6.324** & **0.086** & **6.411** & 6.48 & 27.08 & 9.26 & 42.82 \\ & Ours & **75.23** & 7.743 & 0.177 & 7.922 & 9.26 & **13.43** & **2.08** & **24.77** \\ \hline \multirow{3}{*}{Simultaneous} & GA-DDPG [60] & 36.81 & **4.664** & 0.132 & **4.796** & 9.03 & 25.00 & 29.17 & 63.19 \\ & GA-DDPG [60] finetuned & 54.86 & 4.832 & **0.082** & 4.914 & **6.71** & 26.39 & 12.04 & 45.14 \\ & Ours & **68.75** & 6.232 & 0.178 & 6.411 & 8.80 & **17.82** & **4.63** & **31.25** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **HandoverSim Benchmark Evaluation.** Comparison of our method against various baselines from the HandoverSim benchmark [7]. In the sequential setting, we find that our method achieves better overall success rates than the baselines. In the simultaneous setting, we outperform the applicable baselines by large margins. The results for our method are averaged across 3 random seeds. \(\dagger\), \(\ddagger\): both methods [63, 59] are evaluated with ground-truth states in [7] and thus are not directly comparable with ours.
that processes the RGB and depth/segmentation (DM) images. Similar to the findings in GA-DDPG, the PointNet++ backbone performs better. Next, we train our method from a third-person view instead of the egocentric view (_w/ third person view_), as well as without active hand segmentation (_w/o hand point cloud_), i.e., the policy only perceives the object point cloud but not the hand point cloud. We also ablate the auxiliary prediction (_w/o aux prediction_) and evaluate a variant that directly learns to approach and grasp the object instead of using the two task phases of approaching and grasping (_w/o standoff_). Lastly, we compare against our pretrained model, which was only trained in the sequential setting without finetuning (_w/o finetuning_). We find that the ablated components comprise important elements of our method.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline \multicolumn{5}{c}{Ablation Study} \\ \hline & success (\%) & \multicolumn{3}{c}{failure (\%)} \\ & & contact & drop & timeout \\ \hline w/ RGBDM + ResNet18 & 34.10 & **6.20** & 45.80 & 13.90 \\ \hline w/ third person view & 60.42 & 9.95 & 25.69 & 3.94 \\ \hline w/o hand point cloud & 59.03 & 24.07 & **11.58** & 5.32 \\ \hline w/o aux prediction & 70.60 & 10.65 & 16.20 & 2.54 \\ \hline w/o standoff & 52.55 & 7.87 & 36.80 & 2.78 \\ \hline w/o finetuning & 73.38 & 9.03 & 13.89 & 3.70 \\ \hline Ours & 75.23 & 9.26 & 13.43 & **2.08** \\ \hline w/o finetuning simult. & 62.27 & 11.81 & 20.37 & 5.56 \\ \hline Ours simult. & **68.75** & **8.8** & **17.82** & **4.63** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Ablation.** We ablate the vision backbone, hand perception, and egocentric view. We also study the effect of finetuning, the auxiliary prediction, and splitting the task into two phases. All design choices are crucial aspects of our method with regards to overall performance. Results are averaged over 3 random seeds.
Figure 4: **Qualitative results.** We provide a comparison to show our method's advantages over GA-DDPG [60]. (a) Our method reacts to the moving human, while the baseline tries to go for a grasp directly, which leads to collision. (b) In the sim-to-sim transfer, we often find that the baseline does not find a grasp on the object. (c) In the sim-to-real experiment, GA-DDPG usually tries to get to a grasp directly, while our method adjusts the gripper into a stable grasping pose first. See the video in the supp. material for more qualitative examples.
The results indicate an increased amount of hand collision or object drop in all ablations. A closer analysis in the simultaneous setting shows that our finetuned model outperforms the pretrained model.
### Sim-to-Sim Transfer
Instead of directly transferring to the real world, we first evaluate the robustness of the models by transferring them to a different physics simulator. We re-implement the HandoverSim environment following the mechanism presented in [7], except that we replace the backend physics engine, Bullet [12], with Isaac Gym [35]. We then evaluate the models trained on the original Bullet-based environment on the test scenes powered by Isaac Gym. The results are presented in Tab. 3. We observe a significant drop in the success rates for GA-DDPG (i.e., to below 20%) in both settings. Qualitatively, we see that objects are often either missed completely or only partially grasped (see Fig. 4 (b)). On the other hand, our method is able to retain higher success rates. Expectedly, it also suffers from a loss in performance. We analyze the influence of our grasp predictor on transfer performance and compare against a variant where we execute the grasping motion after a fixed amount of time (_Ours w/o grasp pred._), which leaves the robot enough time to find a pre-grasp pose. Part of the performance drop is caused by the grasp predictor initiating the grasping phase at the wrong time, which can be improved upon in future work.
### Sim-to-Real Transfer
Finally, we deploy the models trained in HandoverSim on a real robotic platform. We follow the perception pipeline used in [60, 63] to generate segmented hand and object point clouds for the policy, and use the output to update the end effector's target position. We compare our method against GA-DDPG [60] with two sets of experiments: (1) a pilot study with controlled handover poses and (2) a user evaluation with free-form handovers. For experimental details and the full results, please see the supp. material.
Pilot Study We first conduct a pilot study with two subjects. The subjects are instructed to handover 10 objects from HandoverSim by grasping and presenting the objects in controlled poses. For each object, we test with 6 poses (3 poses for each hand) with varying object orientation and varying amount of hand occlusion, resulting in 60 poses per subject. The same set of poses are used in testing both our model and GA-DDPG [60]. The success rates are shown Tab. 4. Results indicate that our method outperforms GA-DDPG [60] for both subjects on the overall success rate (i.e., 41/60 versus 21/60 for Subject 1). Qualitatively, we observe that GA-DDPG [60] tends to fail more from unstable grasping as well as hand collision. Fig. 4 (c) shows two examples of the real world handover trials.
User Evaluation We further recruited \(6\) users to compare the two methods and collected feedback from a questionnaire with Likert-scale and open-ended questions. In contrast to the pilot study, we asked the users to handover the 10 objects in ways that are most comfortable to them. We repeated the same experimental process for both methods, and counterbalanced the order to avoid bias. From participants' feedback, the majority agreed that the timing of our method is more appropriate and our method can adjust between different object poses better. The interpretability of the robot's motion was also acknowledged by their comments. Please see the supp. material for more details.
## 6 Conclusion
In this work, we have presented a learning-based framework for human-to-robot handovers from vision input with a simulated human-in-the-loop. We have introduced a two-stage teacher-student training procedure. In our experiments we have shown that our method outperforms baselines by a significant margin on the HandoverSim benchmark [7]. Furthermore, we have demonstrated that our approach is more robust when transferring to a different physics simulator and a real robotic system.
Acknowledgements We thank Tao Chen and Adithyavairavan Murali for laying the groundwork, Lirui Wang for the help with GA-DDPG, and Mert Albaba, Christoph Gebhardt, Thomas Langerak and Juan Zarate for their feedback on the manuscript.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline & \multicolumn{2}{c|}{Subject 1} & \multicolumn{2}{c}{Subject 2} \\ \cline{2-5} & GA-DDPG [60] & Ours & GA-DDPG [60] & Ours \\ \hline
011\_banana & 3 / 6 & **6 / 6** & **6 / 6** & 5 / 6 \\
037\_scissors & 2 / 6 & **5 / 6** & 3 / 6 & **5 / 6** \\
006\_mustard\_bottle & 1 / 6 & **3 / 6** & 2 / 6 & **4 / 6** \\
024\_bowl & 3 / 6 & **4 / 6** & **3 / 6** & **3 / 6** \\
040\_large\_marker & 0 / 6 & **4 / 6** & 4 / 6 & **5 / 6** \\
003\_cracker\_box & **3 / 6** & 2 / 6 & 0 / 6 & **2 / 6** \\
052\_extra\_large\_clamp & 1 / 6 & **4 / 6** & **5 / 6** & **5 / 6** \\
008\_pudding\_box & 3 / 6 & **6 / 6** & **4 / 6** & **4 / 6** \\
010\_potted\_meat\_can & 2 / 6 & **2 / 6** & 3 / 6 & **4 / 6** \\
021\_bleach\_cleanser & 3 / 6 & **5 / 6** & 3 / 6 & **4 / 6** \\ \hline
**total** & 21 / 60 & **41 / 60** & 33 / 60 & **41 / 60** \\ \hline \hline
\end{tabular}
\end{table}
Table 4: **Sim-to-Real Experiment.** Success rates of the pilot study. Our method outperforms GA-DDPG [60] for both subjects.
\begin{table}
\begin{tabular}{l|l|c|c c c} \hline \hline & & \multicolumn{4}{c}{Sim-to-Sim} \\ \cline{3-6} & & success (\%) & \multicolumn{3}{c}{failure (\%)} \\ & & & contact & drop & timeout \\ \hline
\multirow{4}{*}{Sequential} & GA-DDPG [60] & 19.44 & **4.86** & 47.22 & 28.47 \\
 & GA-DDPG [60] finetuned & 11.81 & 6.25 & 68.75 & 13.19 \\
 & Ours & 44.21 & 9.49 & 40.51 & 5.79 \\
 & Ours w/o grasp pred. & **54.40** & 7.87 & **33.34** & **4.40** \\ \hline
\multirow{4}{*}{Simultaneous} & GA-DDPG [60] & 11.11 & 15.97 & 48.61 & 24.31 \\
 & GA-DDPG [60] finetuned & 16.67 & 9.72 & 63.89 & 9.72 \\
 & Ours & 39.58 & **9.03** & 43.75 & 7.64 \\
 & Ours w/o grasp pred. & **47.92** & 10.65 & **35.88** & **5.56** \\ \hline \hline
\end{tabular}
\end{table}
Table 3: **Sim-to-Sim Experiment.** We evaluate sim-to-sim transfer of the learning-based methods to Isaac Gym [35]. Our method shows better transfer capabilities than GA-DDPG [60].
私たちが提案する最初のフレームワークは、視覚に基づいた人間-ロボットハンドオーバーの制御ポリシーを学習するためのものです。これは人間-ロボットインタラクションの críticasタスクです。
Embodied AIの研究は、ロボット代理をシミュレーション環境でトレーニングするのに成功しており、人間とのインタラクションは、人間のシミュレーションが困難であるため、その課題に直面しています。
しかし、最近の研究では、人間-ロボットハンドオーバーのための現実的なシミュレーション環境が開発されています。
この結果を利用して、人間をLoopに組み込んだ2段階の教師-学生フレームワークを用いた、運動とつかみ計画、強化学習、そして自己教師として訓練された方法を導入します。
実験結果では、基線と比較してシミュレーションベンチマークで有意なパフォーマンスの向上が見られました。
``` |
2305.05218 | Graph Neural Network-based surrogate model for granular flows | Accurate simulation of granular flow dynamics is crucial for assessing
various geotechnical risks, including landslides and debris flows. Granular
flows involve a dynamic rearrangement of particles exhibiting complex
transitions from solid-like to fluid-like responses. Traditional continuum and
discrete numerical methods are limited by their computational cost in
simulating large-scale systems. Statistical or machine learning-based models
offer an alternative. Still, they are largely empirical, based on a limited set
of parameters. Due to their permutation-dependent learning, traditional machine
learning-based models require huge training data to generalize. To resolve
these problems, we use a graph neural network, a state-of-the-art machine
learning architecture that learns local interactions. Graphs represent the
state of dynamically changing granular flows and the interaction laws, such as
energy and momentum exchange between grains. We develop a graph neural
network-based simulator (GNS) that takes the current state of granular flow and
predicts the next state using Euler explicit integration by learning the local
interaction laws. We train GNS on different granular trajectories. We then
assess the performance of GNS by predicting granular column collapse. GNS
accurately predicts flow dynamics for column collapses with different aspect
ratios unseen during training. GNS is hundreds of times faster than
high-fidelity numerical simulators. The model also generalizes to domains much
larger than the training data, handling more than twice the number of particles
than it was trained on. | Yongjin Choi, Krishna Kumar | 2023-05-09T07:28:12 | http://arxiv.org/abs/2305.05218v2 | # Graph Neural Network-based surrogate model for granular flows
###### Abstract
Accurate simulation of granular flow dynamics is crucial for assessing various geotechnical risks, including landslides and debris flows. Granular flows involve a dynamic rearrangement of particles exhibiting complex transitions from solid-like to fluid-like responses. Traditional continuum and discrete numerical methods are limited by their computational cost in simulating large-scale systems. Statistical or machine learning-based models offer an alternative. Still, they are largely empirical, based on a limited set of parameters. Due to their permutation-dependent learning, traditional machine learning-based models require huge training data to generalize. To resolve these problems, we use graph neural network, a state-of-the-art machine learning architecture that learns local interactions. Graphs represent the state of dynamically changing granular flows and the interaction laws, such as energy and momentum exchange between grains. We develop a graph neural network-based simulator (GNS) that takes the current state of granular flow and predicts the next state using Euler explicit integration by learning the local interaction laws. We train GNS on different granular trajectories. We then assess the performance of GNS by predicting granular column collapse. GNS accurately predicts flow dynamics for column collapses with different aspect ratios unseen during training. GNS is hundreds of times faster than high-fidelity numerical simulators. The model also generalizes to domains much larger than the training data, handling more than twice the number of particles than it was trained on.
keywords: graph neural network, learned physics simulator, granular column collapse, surrogate model +
Footnote †: journal: Computers and Geotechnics
## 1 Introduction
Landslides cause extensive material displacement and significant infrastructure damage. Accurate modeling of granular flow runout is crucial to understanding the impact of landslides. Numerical methods, such as particle-based and continuum approaches, are often employed to assess landslide runouts. Particle-based approaches, like the Discrete Element Method (DEM) (Staron and Hinch, 2005; Kermani et al., 2015; Kumar et al., 2017), can model grain-grain interactions but are limited to representative elemental volumes. Traditional continuum approaches, such as the Finite Element Method, can predict the initiation of such failures but suffer from mesh distortions when capturing runout dynamics. Hybrid Eulerian-Lagrangian approaches like the Material Point Method (MPM) (Mast et al., 2014; Kumar et al., 2017) can simulate large-deformation flows without undergoing mesh distortions. However, the hybrid nature of MPM requires tracking both the grid and the material
points, which is computationally expensive. Multiple full-scale simulations are necessary for a comprehensive evaluation of runout hazard scenarios. Similarly, a back analysis to estimate material parameters requires a broad parametric sweep involving hundreds to thousands of simulations. However, current state-of-the-art numerical methods are restricted to, at most, a few full-scale simulations, limiting our ability in scenario testing or back analysis.
An alternative to numerical simulations is the development of statistical or machine learning models to evaluate landslide risks. These surrogate models build correlations between landslide risks and their influencing factors through simple empirical correlation without considering the complex granular flow dynamics. Several studies adopt probabilistic approaches, such as Monte Carlo simulation and Bayesian analysis, to evaluate the landslide runout distance based on factors including topology and geology (Gao et al., 2021; Zeng et al., 2021; Sun et al., 2021; Zhao et al., 2022). Machine learning models can predict the travel distance and potential path of granular flows based on the geometry and ground properties (Durante and Rathje, 2021; Ju et al., 2022; Yang and Hambleton, 2021). Although researchers have been able to correlate the runout of granular flow based on statistical or data-driven techniques, these techniques do not explicitly consider granular flow dynamics--the actual physics governing the flow behavior. Thus, due to a lack of physics, these statistical models do not generalize outside their training range in modeling other boundary conditions or geometry.
Building surrogate models that replicate the entire granular flow dynamics is challenging. The surrogate model must capture complex behaviors involving highly non-linear, static, collisional, and frictional dissipation regimes (Soga et al., 2016). Learning fundamental interaction laws is crucial for generalizing beyond the training datasets. Techniques like max-pooling in convolutional neural networks learn spatially invariant behavior, i.e., they learn features irrespective of their spatial location. However, CNNs are primarily limited to mesh-based systems with fixed neighbors.
Granular flow is a dynamic system where neighbor interactions evolve throughout the runout (Lajeunesse et al., 2005; Zhang et al., 2016; Soga et al., 2016). A traditional Multi-Layer Perceptron (MLP) could model such a dynamic system. However, generalizing MLPs requires an exhaustive dataset to overcome combinatorial dependence, i.e., the outputs of the models depend on the order of the inputs (Battaglia et al., 2018; Haeri and Skoniczny, 2022). Unreasonably large training datasets are needed to map the entire parameter space of particle arrangements and dynamics.
To address these limitations, we utilize graph neural networks (GNNs), a state-of-the-art machine learning architecture that enables permutation invariant learning (Battaglia et al., 2016, 2018; Sanchez-Gonzalez et al., 2020), to develop a data-driven surrogate model for granular flow dynamics. At any given time, the physical state of the granular system is represented as a graph. We develop a GNN-based Simulator (GNS) that operates on the graph to learn the fundamental interaction law. We demonstrate the capability of GNS in replicating the granular flow dynamics by studying the collapse of a granular column. Granular column collapse is a simple physical experiment that captures the overall dynamics of large-scale runouts. GNS, trained on granular flow trajectories, successfully predicts the runout dynamics of column collapse outside its training range and generalizes to upscaled domain sizes.
## 2 Method
This section describes the individual components of GNS: graphs, graph neural networks (GNNs), and message passing.
### Graph Neural Networks and Message Passing
#### 2.1.1 Graphs
Graphs can represent interactions in physical systems (Battaglia et al., 2016; Sanchez-Gonzalez et al., 2020). We represent the granular media as a graph \(G=(\mathbf{V},\mathbf{E})\) consisting of a set of vertices (\(\mathbf{v}_{i}\in\mathbf{V}\)) representing the soil grains or aggregation of grains and edges (\(\mathbf{e}_{i,j}\in\mathbf{E}\)) connecting a pair of vertices (\(\mathbf{v}_{i}\) and \(\mathbf{v}_{j}\)) representing the interaction between the grains. Consider an example involving interaction between grains in a box (see fig. 1). We encode the state of the physical system, such as the kinematics of grains and their interaction (fig. 1a and fig. 1d), as a graph (fig. 1b and fig. 1c). The vertices describe the position and velocity of the grains, and the edges describe the directional interaction between them, shown as arrows in fig. 1b and fig. 1c. The state of the grain \(i\) is represented as a vertex feature vector \(\mathbf{v}_{i}\). The vertex feature vector includes velocities, mass, and distance to the boundary. The edge feature vector \(\mathbf{e}_{i,j}\) includes information about the interaction between grains \(i\) and \(j\) such as the relative distance between the grains. Thus, we can store and process the state of granular bodies and their interactions as graphs.
Graphs offer a permutation-invariant form of encoding data, where the interaction between vertices is independent of the order of vertices or their position in Euclidean space. As graphs represent the interactions between grains as edge connections, graphs are permutation invariant. For example, by storing the relative positional information in \(\mathbf{e}_{i,j}\), rather than the absolute position, machine learning models operating on these networks learn the interaction behavior as a function of the relative distance between grains. Therefore, graphs can efficiently represent the physical state of granular flow involving multi-grain interactions.
#### 2.1.2 Graph neural networks (GNNs)
GNN takes a graph \(G=(\mathbf{V},\mathbf{E})\) as an input, computes properties and updates the graph, and outputs an updated graph \(G^{\prime}=(\mathbf{V}^{\prime},\mathbf{E}^{\prime})\) with an identical structure, where \(\mathbf{V}^{\prime}\) and \(\mathbf{E}^{\prime}\) are the set of updated vertex and edge features (\(\mathbf{v}^{\prime}_{i}\) and \(\mathbf{e}^{\prime}_{i,j}\)). Message passing is the process of updating the graph by propagating information through it.
In the grains-in-a-box example, the GNN first takes the original graph \(G=(\mathbf{V},\mathbf{E})\) (fig. 1b) that describes the current state of the physical system (\(\mathbf{X}_{t}\)). The GNN then updates the state of the physical system through message passing, which models the exchange of energy and momentum between the grains, and returns an updated graph \(G^{\prime}=(\mathbf{V}^{\prime},\mathbf{E}^{\prime})\) (fig. 1c). We decode \(G^{\prime}\), the output of GNN, to extract information related to the future state of the physical system (\(\mathbf{X}_{t+1}\)), such as the next position or acceleration of the grains (fig. 1d).
#### 2.1.3 Message passing
Message passing consists of three operations: message construction (eq. (1)), message aggregation (eq. (2)), and the vertex update function (eq. (3)).
\[\mathbf{e}^{\prime}_{i,j}=\phi_{\mathbf{\Theta}_{\phi}}\left(\mathbf{v}_{i},\mathbf{v}_{j}, \mathbf{e}_{i,\ j}\right) \tag{1}\]
\[\bar{\mathbf{v}}_{i}=\Sigma_{j\in N(i)}\ \mathbf{e}^{\prime}_{i,j} \tag{2}\]
\[\mathbf{v}^{\prime}_{i}=\gamma_{\mathbf{\Theta}_{\gamma}}\left(\mathbf{v}_{i},\bar{\mathbf{v}}_{ i}\right) \tag{3}\]
The subscripts \(\mathbf{\Theta}_{\phi}\) and \(\mathbf{\Theta}_{\gamma}\) represent sets of learnable parameters in each computation. The message construction function \(\phi_{\mathbf{\Theta}_{\phi}}\) (eq. (1)) takes the feature vectors of the receiver and sender vertices (\(\mathbf{v}_{i}\) and \(\mathbf{v}_{j}\)) and the feature vector of the edge connecting them (\(\mathbf{e}_{i,\ j}\)) and returns an updated edge feature vector \(\mathbf{e}^{\prime}_{i,j}\) as the output. \(\phi_{\mathbf{\Theta}_{\phi}}\) is a matrix operation including the learnable parameter \(\mathbf{\Theta}_{\phi}\). The updated edge feature vector \(\mathbf{e}^{\prime}_{i,j}\) is the message sent from vertex \(j\) to \(i\). Figure 1(a) shows an example of constructing messages on edges directed to vertex \(0\) originating from vertices \(1\), \(2\), and \(3\) (\(\mathbf{e}^{\prime}_{0,\ 1}\), \(\mathbf{e}^{\prime}_{0,\ 2}\), \(\mathbf{e}^{\prime}_{0,\ 3}\)). Here, we define the message construction function \(\phi_{\mathbf{\Theta}_{\phi}}\) as \(((\mathbf{v}_{i}+\mathbf{v}_{j})\times\mathbf{e}_{i,\ j})\times\mathbf{\Theta}_{\phi}\). The updated feature vector \(\mathbf{e}^{\prime}_{0,\ 1}\) is computed as \(((\mathbf{v}_{0}+\mathbf{v}_{1})\times\mathbf{e}_{0,\ 1})\times\mathbf{\Theta}_{\phi}\), where \(\mathbf{v}_{0}\) and \(\mathbf{v}_{1}\) are the receiver and sender vertex feature vectors, and \(\mathbf{e}_{0,\ 1}\) is their edge feature vector. If we assume all values of \(\mathbf{\Theta}_{\phi}\) are \(1.0\) for simplicity, we obtain \(\mathbf{e}^{\prime}_{0,\ 1}=\left(([1,\ 0,\ 2]+[1,\ 3,\ 2])\times[2,\ 1,\ 0]^{T}\right)\times 1=[4,\ 3,\ 0]\). Similarly, we can compute the messages \(\mathbf{e}^{\prime}_{0,\ 2}=[0,\ 3,\ 9]\) and \(\mathbf{e}^{\prime}_{0,\ 3}=[3,\ 4,\ 9]\).
The next step in message passing is the message aggregation \(\Sigma_{j\in N(i)}\) (eq. (2)), where \(N(i)\) is the set of sender vertices \(j\) related to vertex \(i\). It collects all the messages directed to vertex \(i\) and aggregates them into a single vector with the same dimension as the messages, the aggregated message (\(\bar{\mathbf{v}}_{i}\)). The aggregation rule can be element-wise vector summation or averaging; hence it is a permutation invariant computation. In fig. 1(a), the aggregated message \(\bar{\mathbf{v}}_{0}=[7,\ 10,\ 18]\) is the element-wise summation of the messages directed to vertex \(0\), i.e., \(\bar{\mathbf{v}}_{0}=\mathbf{e}^{\prime}_{0,\ 1}+\ \mathbf{e}^{\prime}_{0,\ 2}+\ \mathbf{e}^{\prime}_{0,\ 3}\).
The final step of the message passing is updating vertex features using eq. (3). It takes the aggregated message (\(\bar{\mathbf{v}}_{i}\)) and the current vertex feature vector \(\mathbf{v}_{i}\), and returns an updated vertex feature vector \(\mathbf{v}^{\prime}_{i}\), using predefined vector operations including the learnable parameter \(\mathbf{\Theta}_{\gamma}\). Figure 1(b) shows an example of the update at vertex \(0\). Here, we define the update function \(\gamma_{\mathbf{\Theta}_{\gamma}}\) as \(\mathbf{\Theta}_{\gamma}\left(\mathbf{v}_{i}+\bar{\mathbf{v}}_{i}\right)\). The updated feature vector \(\mathbf{v}^{\prime}_{0}\) is computed as \(\mathbf{\Theta}_{\gamma}\left(\mathbf{v}_{0}+\bar{\mathbf{v}}_{0}\right)\).
Figure 1: An example of a graph and graph neural network (GNN) that process the graph (modified from Battaglia et al. (2018)): (a) A state of the current physical system (\(\mathbf{X}_{t}\)) where the grains are bouncing in a box boundary; (b) Graph representation of the physical system (\(G\)). There are three vertices representing grains and six edges representing their directional interaction shown as arrows; (c) The updated graph (\(G^{\prime}\)) that GNN outputs through message passing; (d) The predicted future state of the physical system (\(\mathbf{X}_{t+1}\)) (i.e., the positions of the grains at the next timestep) decoded from the updated graph.
Assuming all parameters in \(\mathbf{\Theta}_{\gamma}\) are \(1.0\) for simplicity, we obtain \(\mathbf{v}_{0}^{\prime}=\mathbf{v}_{0}+\bar{\mathbf{v}}_{0}=[1,\ 0,\ 2]+[7,\ 10,\ 18]=[8,\ 10,\ 20]\). Similarly, we update the other vertex features \((\mathbf{v}_{1}^{\prime},\ \mathbf{v}_{2}^{\prime},\ \mathbf{v}_{3}^{\prime})\).
After message passing, the graph vertex and edge features \((\mathbf{v}_{i}\) and \(\mathbf{e}_{i,\ j})\) are updated to \(\mathbf{v}_{i}^{\prime}\) and \(\mathbf{e}_{i,\ j}^{\prime}\). The GNN may include multiple message passing steps to propagate the information further through the network.
Unlike the example shown above, where we assume a constant value of \(1.0\) for the learnable parameters, in a supervised learning environment, the optimization algorithm will find a set of the best learnable parameters \((\mathbf{\Theta}_{\phi},\mathbf{\Theta}_{\gamma})\) in the message passing operation.
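For illustration, the worked example above can be reproduced with a few lines of NumPy. The function and variable names below are ours, and the messages from vertices 2 and 3 are copied from the text rather than recomputed, since their input features are not listed:

```python
import numpy as np

# All learnable parameters are set to 1.0, so phi reduces to (v_i + v_j) * e_ij
# (element-wise product) and gamma reduces to v_i + v_bar_i.
v0 = np.array([1.0, 0.0, 2.0])    # receiver vertex feature
v1 = np.array([1.0, 3.0, 2.0])    # sender vertex feature
e01 = np.array([2.0, 1.0, 0.0])   # edge feature between vertices 0 and 1

def construct_message(v_recv, v_send, e):
    """Message construction phi of eq. (1) with unit parameters."""
    return (v_recv + v_send) * e

m01 = construct_message(v0, v1, e01)   # [4., 3., 0.]
m02 = np.array([0.0, 3.0, 9.0])        # message from vertex 2 (given in the text)
m03 = np.array([3.0, 4.0, 9.0])        # message from vertex 3 (given in the text)

v0_bar = m01 + m02 + m03               # aggregation, eq. (2): [7., 10., 18.]
v0_new = v0 + v0_bar                   # vertex update gamma, eq. (3): [8., 10., 20.]
```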
### Graph Neural Network-based Simulator (GNS)
In this study, we use a GNN as a surrogate simulator to model granular flow behavior. Figure 3 shows an overview of the general concepts and structure of the GNN-based simulator (GNS) proposed by Sanchez-Gonzalez et al. (2020). Consider a granular flow domain represented as material points (fig. 3a), which represent the collection of grains. In GNS, we represent the physical state of the granular domain at time \(t\) with a set of \(\mathbf{x}_{i}^{t}\) describing the state and properties of each material point. The GNS takes the current state of the granular flow \(\mathbf{x}_{i}^{t}\in\mathbf{X}_{t}\) and predicts its next state \(\mathbf{x}_{i}^{t+1}\in\mathbf{X}_{t+1}\) (fig. 3a). The GNS consists of two components: a parameterized function approximator \(d_{\Theta}\) and an updater function (fig. 3b). The function approximator \(d_{\Theta}\) takes \(\mathbf{X}_{t}\) as an input and outputs dynamics information \(\mathbf{y}_{i}^{t}\in\mathbf{Y}_{t}\). The updater then computes \(\mathbf{X}_{t+1}\) using \(\mathbf{Y}_{t}\) and \(\mathbf{X}_{t}\). Figure 3c shows the details of \(d_{\Theta}\), which consists of an encoder, a processor, and a decoder. The encoder (fig. 3-c1) takes the state of the system \(\mathbf{X}^{t}\) and embeds it into a latent graph \(G_{0}=(\mathbf{V}_{0},\ \mathbf{E}_{0})\) to represent the relationships between material points. The vertices \(\mathbf{v}_{i}^{t}\in\mathbf{V}_{0}\) contain latent information of the current state of the material point, and the edges \(\mathbf{e}_{i,j}^{t}\in\mathbf{E}_{0}\) contain latent information of the pair-wise relationship between material points. Next, the processor (fig. 3-c2) converts the input graph \(G_{0}\) to the output graph \(G_{M}\) through \(M\) stacks of message-passing GNN (\(G_{0}\to G_{1}\to\cdots\to G_{M}\)). The message passing computes the interaction between vertices. Finally, the decoder (fig. 3-c3) extracts the dynamics of the points \((\mathbf{Y}^{t})\) from \(G_{M}\), such as the acceleration of the physical system. The entire simulation (fig. 3a) involves running the GNS surrogate model through \(K\) timesteps predicting from the initial state
to \(\mathbf{X}_{K}\left(\mathbf{X}_{0},\ \mathbf{X}_{1},\ \dots,\ \mathbf{X}_{K}\right)\), updating at each step (\(\mathbf{X}_{t}\rightarrow\mathbf{X}_{t+1}\)). We call this successive prediction from GNS the "rollout".
In the following sections, we explain the details of our input \(\mathbf{X}^{t}\) (fig. 3a), the encoder, processor, and decoder in \(d_{\Theta}\) (fig. 3c), and how we compute \(\mathbf{X}^{t+1}\) from \(\mathbf{X}^{t}\) using the GNS updater function (fig. 3b).
#### 2.2.1 Input
The input to the GNS, \(\mathbf{x}_{i}^{t}\in\mathbf{X}^{t}\) (eq. (4)), is a vector consisting of the current material point position \(\mathbf{p}_{i}^{t}\), the material point velocity context \(\mathbf{\dot{p}}_{i}^{\leq t}\), information on boundaries \(\mathbf{b}_{i}^{t}\), and the material point type embedding \(\mathbf{f}\). The current state \(\mathbf{x}_{i}^{t}\) will be used to construct the vertex feature (\(\mathbf{v}_{i}^{t}\)) (eq. (6)).
\[\mathbf{x}_{i}^{t}=\left[\mathbf{p}_{i}^{t},\ \mathbf{\dot{p}}_{i}^{\leq t},\ \mathbf{b}_{i}^{t},\ \mathbf{f}\right] \tag{4}\]
The velocity context \(\mathbf{\dot{p}}_{i}^{\leq t}\) includes the current and previous material point velocities for \(n\) timesteps \(\left[\mathbf{\dot{p}}_{i}^{t-n},\cdots,\ \mathbf{\dot{p}}_{i}^{t}\right]\), i.e., \(n+1\) velocities. We use \(n=4\) to include sufficient velocity context in the vertex feature \(\mathbf{v}_{i}^{t}\). Sanchez-Gonzalez et al. (2020) show that having \(n>1\) significantly improves the model performance. We compute the velocities using the finite difference of the position sequence (i.e., \(\mathbf{\dot{p}}_{i}^{t}=\left(\mathbf{p}_{i}^{t}-\mathbf{p}_{i}^{t-1}\right)/\Delta t\)). \(\mathbf{b}_{i}^{t}\) is boundary information. For a 2D problem, \(\mathbf{b}_{i}^{t}\) has four components, each indicating the distance between the material point and one of the four walls. We normalize \(\mathbf{b}_{i}^{t}\) by the connectivity radius \(R\), which defines the interaction zone explained in the next section, and restrict it to between -1.0 and 1.0. \(\mathbf{b}_{i}^{t}\) is used to evaluate the boundary interaction for a material point. \(\mathbf{f}\) is a vector embedding describing a material point type.
We define the interaction between material points \(i\) and \(j\) as \(\mathbf{r}_{i,\ j}^{t}\) using the distance and displacement of the material points in the current timestep (see eq. (5)). The former reflects
Figure 3: The structure of the graph neural network (GNN)-based physics simulator (GNS) for granular flow (modified from Sanchez-Gonzalez et al. (2020)): (a) The entire simulation procedure using the GNS, (b) The computation procedure of GNS and its composition, (c) The computation procedure of the parameterized function approximator \(d_{\Theta}\) and its composition.
the level of interaction, and the latter reflects its spatial direction. \(\mathbf{r}_{i,\ j}^{t}\) will be used to construct edge features (\(\mathbf{e}_{i,j}^{t}\)).
\[\mathbf{r}_{i,j}^{t}=\left[\left(\mathbf{p}_{i}^{t}-\mathbf{p}_{j}^{t}\right),\ \|\mathbf{p}_{i}^{t}-\mathbf{p}_{j}^{t}\|\right] \tag{5}\]
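As a concrete illustration of eqs. (4) and (5), the sketch below assembles the vertex input from a short position history and the edge input from a pair of positions, assuming a 2-D rectangular domain. The type embedding \(\mathbf{f}\) is omitted and, anticipating the encoder discussion below, the absolute position is excluded from the vertex input; all names are ours and not from any released code:

```python
import numpy as np

def vertex_input(pos_history, dt, bounds=(0.0, 1.0, 0.0, 1.0), R=0.030):
    """Vertex input for N material points in 2-D: velocity context plus normalized
    wall distances. pos_history has shape (n + 2, N, 2), so finite differences
    give the n + 1 velocities of the context (n = 4 in this study)."""
    vel = (pos_history[1:] - pos_history[:-1]) / dt      # finite-difference velocities
    p_t = pos_history[-1]                                # current positions
    x_lo, x_hi, y_lo, y_hi = bounds
    walls = np.stack([p_t[:, 0] - x_lo, x_hi - p_t[:, 0],
                      p_t[:, 1] - y_lo, y_hi - p_t[:, 1]], axis=1)
    walls = np.clip(walls / R, -1.0, 1.0)                # normalized, clipped b_i^t
    vel_context = np.transpose(vel, (1, 0, 2)).reshape(p_t.shape[0], -1)
    return np.concatenate([vel_context, walls], axis=1)

def edge_input(p_i, p_j):
    """Edge input r_ij of eq. (5): relative displacement and its norm."""
    disp = p_i - p_j
    return np.concatenate([disp, [np.linalg.norm(disp)]])
```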
#### 2.2.2 Encoder
The vertex and edge encoders (\(\varepsilon_{\mathbf{\Theta}}^{v}\) and \(\varepsilon_{\mathbf{\Theta}}^{e}\)) convert \(\mathbf{x}_{i}^{t}\) and \(\mathbf{r}_{i,\ j}^{t}\) into the vertex and edge feature vectors (\(\mathbf{v}_{i}^{t}\) and \(\mathbf{e}_{i,j}^{t}\)) (eq. (6)) and embed them into a latent graph \(G_{0}=(\mathbf{V}_{0},\ \mathbf{E}_{0})\), \(\mathbf{v}_{i}^{t}\in\ \mathbf{V}_{0}\), \(\mathbf{e}_{i,j}^{t}\in\ \mathbf{E}_{0}\).
\[\mathbf{v}_{i}^{t}=\varepsilon_{\mathbf{\Theta}}^{v}\left(\mathbf{x}_{i}^{t}\right),\ \mathbf{e}_{i,j}^{t}=\varepsilon_{\mathbf{\Theta}}^{e}\left(\mathbf{r}_{i,j}^{t}\right) \tag{6}\]
We use a two-layered 128-dimensional multi-layer perceptron (MLP) for the \(\varepsilon_{\mathbf{\Theta}}^{v}\) and \(\varepsilon_{\mathbf{\Theta}}^{e}\). The MLP and optimization algorithm search for the best candidate for the parameter set \(\mathbf{\Theta}\) that estimates a proper way of representing the physical state of the material points and their relationship which will be embedded into \(G_{0}\).
The vertex encoder \(\varepsilon_{\mathbf{\Theta}}^{v}\) uses \(\mathbf{x}_{i}^{t}\) (eq. (4)) without the current position of the material point (\(\mathbf{p}_{i}^{t}\)), but with its velocities (\(\mathbf{\dot{p}}_{i}^{\leq t}\)), as velocity governs the momentum, and the interaction dynamics is independent of the absolute position of the material points. Rubanova et al. (2022) confirmed that including position causes poorer model performance. We only use \(\mathbf{p}_{i}^{t}\) to predict the next position \(\mathbf{p}_{i}^{t+1}\) based on the predicted velocity \(\mathbf{\dot{p}}_{i}^{t+1}\) using explicit Euler integration.
We consider the interaction between two material points by constructing edges between all pairs of vertices located within a certain distance called the connectivity radius \(R\) (see the shaded circular area in fig. 3b). The connectivity radius is a critical hyperparameter that governs how effectively the model learns the local interaction. \(R\) should be sufficiently large to include the local interaction between material points and capture the simulation domain's global dynamics.
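A radius search is a straightforward way to build this edge set. The sketch below uses SciPy's k-d tree and is only meant to illustrate the idea; it returns directed sender/receiver index pairs for all vertices closer than \(R\):

```python
import numpy as np
from scipy.spatial import cKDTree

def build_edges(positions, R=0.030):
    """Connect all pairs of material points within the connectivity radius R.
    Returns a (2, n_edges) array of (sender, receiver) indices, both directions."""
    pairs = np.array(sorted(cKDTree(positions).query_pairs(r=R)))
    if pairs.size == 0:
        return np.empty((2, 0), dtype=int)
    senders = np.concatenate([pairs[:, 0], pairs[:, 1]])
    receivers = np.concatenate([pairs[:, 1], pairs[:, 0]])
    return np.stack([senders, receivers])
```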
#### 2.2.3 Processor
The processor performs message passing (based on eq. (1) to eq. (3)) on the initial latent graph (\(G_{0}\)) from the encoder \(M\) times (\(G_{0}\to G_{1}\to\cdots\to G_{M}\)) and returns a final updated graph \(G_{M}\). We use two-layered 128-dimensional MLPs for both the message construction function \(\phi_{\mathbf{\Theta}_{\phi}}\) and the vertex update function \(\gamma_{\mathbf{\Theta}_{\gamma}}\), and element-wise summation for the message aggregation function \(\Sigma_{j\in N(i)}\) in eq. (1) to eq. (3). We set \(M=10\) to ensure sufficient message propagation through the network. These stacked message-passing steps model the propagation of information through the network of material points.
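The sketch below indicates how one such message-passing step can be written, with small MLPs standing in for \(\phi_{\mathbf{\Theta}_{\phi}}\) and \(\gamma_{\mathbf{\Theta}_{\gamma}}\). The weights here are random placeholders rather than trained parameters, and the residual connections and per-edge feature updates of a full implementation are left out for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Two-layer MLP with random (untrained) weights, a stand-in for phi and gamma."""
    Ws = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for W in Ws[:-1]:
            x = np.maximum(x @ W, 0.0)        # ReLU hidden layers
        return x @ Ws[-1]
    return forward

def message_passing_step(V, E, edges, phi, gamma):
    """One step of eqs. (1)-(3): messages on edges, summed per receiver, vertex update."""
    senders, receivers = edges
    msgs = phi(np.concatenate([V[receivers], V[senders], E], axis=1))   # eq. (1)
    agg = np.zeros((V.shape[0], msgs.shape[1]))
    np.add.at(agg, receivers, msgs)                                     # eq. (2), summation
    return gamma(np.concatenate([V, agg], axis=1)), msgs                # eq. (3)

latent = 128                                   # latent width used in this study
phi = mlp([3 * latent, latent, latent])        # two-layered 128-dimensional MLPs
gamma = mlp([2 * latent, latent, latent])
```

Ten such steps (\(M=10\)) are chained, each feeding its updated vertex and edge features to the next.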
#### 2.2.4 Decoder
The decoder \(\delta_{\mathbf{\Theta}}^{v}\) extracts the dynamics \(\mathbf{y}_{i}^{t}\in\mathbf{Y}^{t}\) of the material points from the vertices \(\mathbf{v}_{i}^{t\prime}\) (eq. (7)) using the final graph \(G_{M}\). We use a two-layered 128-dimensional MLP for \(\delta_{\mathbf{\Theta}}^{v}\), which learns to extract the relevant dynamics for material points from \(G_{M}\).
\[\mathbf{y}_{i}^{t}=\delta_{\mathbf{\Theta}}^{v}\left(\mathbf{v}_{i}^{t\prime}\right) \tag{7}\]
#### 2.2.5 Update
We use the dynamics \(\mathbf{y}_{i}^{t}\) to predict the velocity and position of the material points at the next timestep (\(\mathbf{\dot{p}}_{i}^{t+1}\) and \(\mathbf{p}_{i}^{t+1}\)) based on Euler integration (eq. (8) and eq. (9)), which makes \(\mathbf{y}_{i}^{t}\) analogous to acceleration \(\mathbf{\ddot{p}}_{i}^{t}\).
\[\mathbf{\dot{p}}_{i}^{t+1}=\mathbf{\dot{p}}_{i}^{t}+\mathbf{y}_{i}^{t}\Delta\mathrm{t} \tag{8}\]
\[\mathbf{p}_{i}^{t+1}=\mathbf{p}_{i}^{t}+\mathbf{\dot{p}}_{i}^{t+1}\Delta\mathrm{t} \tag{9}\]
Based on the new position and velocity of the material points, we update \(\mathbf{x}_{i}^{t}\in\mathbf{X}^{t}\) (eq. (4)) to \(\mathbf{x}_{i}^{t+1}\in\mathbf{X}^{t+1}\). The updated physical state \(\mathbf{X}^{t+1}\) is then used to predict the position and velocity for the next timestep.
The updater imposes inductive biases, such as an inertial frame, on GNS to force it only to learn the interaction dynamics, improving learning efficiency. A traditional neural network learns both the update scheme and the interaction dynamics:
\[p^{t+1}=NN(p^{t},v^{t})\,. \tag{10}\]
Whereas, using an inertial prior, we force the GNS only to learn the interaction dynamics, by hardcoding the update function:
\[p^{t+1}=p^{t}+v^{t}\cdot\Delta t+NN(p^{t},v^{t})\,. \tag{11}\]
Nevertheless, GNS does not directly predict the next position from the current position and velocity (i.e., \(\mathbf{p}_{i}^{t+1}=GNS\left(\mathbf{p}_{i}^{t},\ \mathbf{\dot{p}}_{i}^{t}\right)\)) which has to learn the static motion and inertial motion. Instead, it uses (1) the inertial prior (eq. (8)) where the prediction of the next velocity \(\mathbf{\dot{p}}_{i}^{t+1}\) should be based on the current velocity \(\mathbf{\dot{p}}_{i}^{t}\) and (2) the static prior (eq. (9)) where the prediction of the next position \(\mathbf{p}_{i}^{t+1}\) should be based on the current position \(\mathbf{p}_{i}^{t}\). These make GNS focus on learning unknown dynamics by hardcoding known physics. Since GNS learns the dynamics of material points through interactions independent of absolute position, GNS is generalizable to other geometric conditions.
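In code, this hardcoded updater amounts to two lines; the sketch below (our naming) treats the decoded output as an acceleration and applies eqs. (8) and (9):

```python
def euler_update(p_t, v_t, y_t, dt):
    """Eqs. (8)-(9): the network output y_t acts as an acceleration, while the
    inertial and static priors are hardcoded rather than learned."""
    v_next = v_t + y_t * dt      # eq. (8)
    p_next = p_t + v_next * dt   # eq. (9)
    return p_next, v_next
```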
## 3 Training and Evaluation
We now train the GNS to predict granular column collapse. This section explains how we generate training data, details of the training process, and how we evaluate the performance of the GNS.
### Material Point Method
We utilize the Material Point Method (MPM) to generate the GNS training dataset of granular flow simulations. The MPM is a hybrid Eulerian-Lagrangian approach designed for modeling large-deformation flows (Soga et al., 2016). In the MPM, a continuum body is represented by individual material points that traverse a static background grid. The governing equation is solved at the nodes, and the updated velocity field is subsequently mapped back to the material points. We employ the position information stored in these
material points to construct the current state \(\mathbf{X}^{t}\) in the GNS. For more information on MPM refer to Soga et al. (2016).
### Datasets
The training datasets include 26 granular flow trajectories of square-shaped granular mass in a two-dimensional box boundary simulated by the MPM explicit time integration method using the CB-Geo MPM code (Kumar et al., 2019). Each simulation has a different initial configuration regarding the size of the square granular mass, position, and velocity. Table 1 presents the details of the training dataset generated using the MPM simulation. The datasets are published on DesignSafe (Kumar and Choi, 2023). Appendix A shows all the training trajectories with different initial configurations and initial velocities.
We also create the validation datasets to check if the model experiences overfitting. The datasets include seven trajectories of randomly picked rectangular-shaped granular mass with different initial configurations not included in the training datasets.
### Training
Our GNS has a learnable parameter set \(\Theta\). We train \(\Theta\) to minimize the loss, calculated as the mean squared error (MSE) between \(\mathbf{y}^{t}_{i}\) (the predicted proxy-acceleration) and the ground truth acceleration \(\mathbf{\ddot{p}}^{t}_{i}\) for all material points \(i=1,\ 2,\ \dots,\ N\), as shown in eq. (12), using the gradient (\(\nabla loss_{\Theta}\))-based optimizer Adam (Kingma and Ba, 2014).
\[loss_{\Theta}=\frac{1}{N}\sum_{i=1}^{N}\left(\mathbf{y}^{t}_{i}-\mathbf{\ddot{p}}^{t}_{i}\right)^{2} \tag{12}\]
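A minimal sketch of this per-step objective (our naming; in practice the gradient \(\nabla loss_{\Theta}\) is obtained by automatic differentiation rather than from a standalone function):

```python
import numpy as np

def gns_loss(predicted_accel, target_accel):
    """Mean squared error of eq. (12) over all material points and components."""
    return np.mean((predicted_accel - target_accel) ** 2)
```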
For training the GNS, we have to set hyperparameters to learn the flow behavior from the training trajectories properly. The first key hyperparameter is the connectivity radius \(R\)
\begin{table}
\begin{tabular}{l l l} \hline \hline \multicolumn{2}{c}{Property} & \multicolumn{1}{c}{Description} \\ \hline Simulation boundary & \multicolumn{1}{c}{1.0\(\times\)1.0 m} \\ Mesh size & \multicolumn{1}{c}{0.025\(\times\)0.025 m} \\ Material points per cell & \multicolumn{1}{c}{16} \\ Granular mass geometry & \multicolumn{1}{c}{0.2\(\times\)0.2 m and 0.3\(\times\)0.3 m} \\ Simulation duration (timesteps) & \multicolumn{1}{c}{(each includes 1,024 and 2,304 particles)} \\ \hline \multirow{6}{*}{Material property} & Model & Mohr-Coulomb \\ & Density & 1,800 \(kg/m^{3}\) \\ & Youngs modulus & 2 GPa \\ & Poisson ratio & 0.3 \\ & Friction angle & 30\({}^{\circ}\) \\ & Cohesion & 0.1 kPa \\ & Tension cutoff & 0.05 kPa \\ \hline \hline \end{tabular}
\end{table}
Table 1: Details of the Material Point Method (MPM) simulation geomaterials and properties used for generating the training datasets.
which governs the model's capacity to learn the interactions of material points. We set \(R=0.030\) m which includes about 9 to 10 material points along the diameter. The circular area defined by \(R\) can incorporate approximately 70 material points inside. Another important hyperparameter is the Gaussian noise value for perturbing the ground truth position in the training trajectories. Since every predicted position for each timestep is based on the previous prediction, which includes a prediction error, the simulation over the large timesteps is subjected to an exponential error accumulation. To avoid this issue, we train the model on input positions with Gaussian noise that emulates the prediction error made by a one-step prediction (\(\mathbf{X}_{t}\rightarrow\mathbf{X}_{t+1}\)). The inclusion of noise in training leads to more rigorous long-rollout predictions.
We use the learning rate (\(lr\)) decay with the initial value of \(10^{-4}\) and decay rate of 0.1 (\(lr=10^{-4}\times 0.1^{step/5\times 10^{6}}\)) for more stable convergence. We use the batch size of two, i.e., \(\mathbf{X}_{t}\) from two different trajectories are used simultaneously in updating the learnable parameters. For information on the scalability of the GNS algorithm, refer to Kumar and Vantassel (2022).
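The learning-rate schedule and the noise injection can be summarized as follows; the noise standard deviation is a tunable value that is not specified above, so it is left as an argument in this sketch:

```python
import numpy as np

def learning_rate(step, lr0=1e-4, decay=0.1, decay_steps=5e6):
    """Exponential decay lr = 1e-4 * 0.1^(step / 5e6) used during training."""
    return lr0 * decay ** (step / decay_steps)

def perturb(positions, noise_std, rng):
    """Add Gaussian noise to ground-truth positions so the model learns to correct
    the kind of error a one-step prediction would introduce during rollout."""
    return positions + rng.normal(0.0, noise_std, size=positions.shape)
```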
We investigate whether the model experiences overfitting by plotting the loss history (fig. 4) for the training and validation datasets, evaluated every 10K training steps. The training and validation losses show a drastic decrease until 2M steps. After that, the validation loss tends to remain slightly larger than the training loss. Figure 4 shows no overfitting during the training.
### GNS prediction of granular flows
We trained the GNS to predict the collapse of a granular column (as studied by Lajeunesse et al. (2004); Lube et al. (2005)). Figure 5 shows the granular column collapse experiment, which we use to evaluate the ability of GNS to replicate granular flow dynamics. Granular column collapse is a simple physical experiment that captures the transient response of granular flow dynamics. The experiment involves the collapse of a granular column of initial height \(H_{0}\) and length \(L_{0}\) on a flat surface due to gravity. As the gate holding the column is removed, the granular
Figure 4: Evolution of GNS loss in training and validation with epochs.
material destabilizes, resulting in a runout. We measure the final runout deposit with the final height \(H_{f}\) and runout \(L_{f}\).
The runout of the granular column is governed by the initial aspect ratio (\(a=H_{0}/L_{0}\)) (Staron and Hinch, 2005; Kumar, 2015). For short columns (\(a\lesssim 2\)) (fig. 5a), the soil mass fails along the flanks of the column above a well-defined failure surface (dashed line). The soil mass beneath the failure surface remains in static equilibrium throughout the collapse, forming a truncated conical shape. With the increase in aspect ratio, the portion of the sliding mass above the failure surface increases, and the static part becomes smaller, forming a conical shape. For tall columns (\(a\gtrsim 2\)) (fig. 5b), the majority of the soil mass is involved in the collapse, and it initially experiences a free fall due to gravitational acceleration. As the falling mass reaches the failure surface, the vertical kinetic energy is converted to horizontal acceleration, resulting in a longer runout distance than the short column (fig. 5a). In addition, researchers (Kumar, 2015; Staron and Hinch, 2005; Kermani et al., 2015; Utili et al., 2015) observed a transition zone where the flow dynamics change from short to tall columns. The normalized runout (\(\left(L_{f}-L_{0}\right)/L_{0}\)) of a granular column is only a function of its aspect ratio (\(a\)). The normalized runout represents how far the granular mass runs out before reaching the final deposit state compared to the initial length of the column. Short columns show a linear relationship with the initial aspect ratio. In contrast, tall columns have a power-law relationship with the initial aspect ratio.
The GNS was trained only on the aspect ratio of 1.0. However, we evaluate its performance in predicting the runout dynamics of other aspect ratios by comparing the GNS predictions with the MPM simulations. Table 2 presents the test cases for evaluating GNS performance.
## 4 Results and Discussions
We evaluate the GNS predictions of granular column collapse against the MPM simulations in terms of the (1) geometry of sliding mass, (2) evolution of runout and height with time, and (3) energy evolution during the collapse. Figure 6 shows the normalized runout (\(\left(L_{f}-L_{0}\right)/L_{0}\)) predictions of GNS for different aspect ratios in comparison with MPM. \(L_{f}\) is the distance from the left wall to the material point that runs out the farthest, as shown in fig. 5. Previous research observed a transition zone for the relationship between the normalized runout and aspect ratio that distinguishes short-column from tall-column dynamics.
Figure 5: Schematics for the granular column collapse configuration and its behavior on the aspect ratio.
For both GNS and MPM, we observe the transition around an initial aspect ratio \(a=1.2\) (fig. 6). Table 3 summarizes the errors between GNS predictions and MPM simulations for different aspect ratios. In general, the GNS runout prediction is within 5% of the MPM runout estimate. Figure 6 suggests that the GNS successfully captures the dependence of the final runout with the initial aspect ratio, including the transition from the short to the tall column.
### GNS Predictions of Granular Flow Dynamics
#### 4.1.1 Short Column
We now evaluate the GNS rollout (prediction) of the granular flow dynamics with time for a short column (\(a=0.8\)). Figure 7 shows the time evolution of granular flow for the
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Test case} & \multirow{2}{*}{\(H_{0}\times L_{0}\)} & Duration & Simulation & Number of \\ & & & (timesteps) & boundary & material points \\ \hline Short & & & & X: 0 to 1.0 & \\ columns & \(a=0.5\) & 0.2 \(\times\) 0.4 & 400 & \begin{tabular}{c} X: 0 to 1.0 \\ Y: 0 to 0.5 \\ Y: 0 to 1.0 \\ Y: 0 to 0.5 \\ Y: 0 to 0.5 \\ Y: 0 to 0.5 \\ \end{tabular} & 1956 \\ & \(a=0.8\) & \(0.24\times 0.30\) & 400 & \begin{tabular}{c} X: 0 to 1.0 \\ Y: 0 to 1.0 \\ Y: 0 to 0.5 \\ Y: 0 to 0.5 \\ \end{tabular} & 1824 \\ & \(a=1.0\) & 0.30 \(\times\) 0.30 & 400 & \begin{tabular}{c} X: 0 to 1.0 \\ Y: 0 to 0.5 \\ Y: 0 to 0.5 \\ \end{tabular} & 2304 \\ \hline Tall & & & & X: 0 to 1.0 & \\ columns & \(a=2.0\) & 0.30 \(\times\) 0.15 & 400 & \begin{tabular}{c} X: 0 to 1.0 \\ Y: 0 to 0.5 \\ Y: 0 to 1.0 \\ Y: 0 to 0.5 \\ \end{tabular} & 1152 \\ & \(a=3.0\) & 0.36 \(\times\) 0.12 & 400 & \begin{tabular}{c} X: 0 to 1.0 \\ Y: 0 to 0.5 \\ \end{tabular} & 1106 \\ & \(a=4.0\) & 0.35 \(\times\) 0.075 & 400 & \begin{tabular}{c} X: 0 to 1.0 \\ Y: 0 to 0.5 \\ \end{tabular} & 576 \\ \hline Up-scaled & \(a=0.8\) & 0.36 \(\times\) 0.45 & 500 &
\begin{tabular}{c} X: 0 to 1.5 \\ Y: 0 to 1.0 \\ \end{tabular} & 5120 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Granular column collapse simulation cases for testing GNS.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Aspect ratio, \(a\)} & \multicolumn{2}{c}{Normalized runout} & \multirow{2}{*}{Runout error (\%)} \\ \cline{2-3} & MPM & GNS & \\ \hline
0.5 & 0.831 & 0.811 & 2.48 \\
0.8 & 1.444 & 1.445 & 0.06 \\
1.0 & 2.071 & 2.152 & 3.78 \\
2.0 & 3.892 & 3.682 & 5.70 \\
3.0 & 5.620 & 5.341 & 5.23 \\
4.0 & 5.753 & 6.070 & 5.21 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Normalized runout from MPM and GNS depending on aspect ratios and corresponding prediction error.
short column collapse. We use a normalized time (\(t/\tau_{c}\)) to compare the flow evolution, where \(t\) is physical time, and \(\tau_{c}\) is the critical time defined as the time required for the flow to fully mobilize. \(\tau_{c}\) is defined as \(\sqrt{H_{0}/g}\), where \(g\) is the gravitational acceleration. In fig. 7, the collapse shows three stages. First, the flow is mobilized by the failure of the flank and reaches full mobilization around \(t/\tau_{c}=1.0\). The majority of the runout occurs until \(t/\tau_{c}=2.5\). Beyond \(t/\tau_{c}>2.5\), the spreading decelerates due to the basal friction and finally stops at around \(t/\tau_{c}=4.0\) for both MPM and GNS rollout (prediction). As seen in fig. 7, although the GNS has only seen an aspect ratio \(a=1.0\) during training, GNS successfully captures the overall time-evolution of granular flows for a short column (\(a=0.8\)).
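For reference, the time normalization used in these comparisons is simply:

```python
import numpy as np

def critical_time(H0, g=9.81):
    """Critical time tau_c = sqrt(H0 / g) used to normalize the simulation time."""
    return np.sqrt(H0 / g)

tau_c = critical_time(0.24)   # short column a = 0.8 (H0 = 0.24 m): about 0.156 s
```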
In addition to the visual comparison of profiles, we quantitatively investigate the flow dynamics of the GNS rollout of the short column by comparing the normalized runout and height evolution with the MPM. Figure 8a shows the evolution of normalized runout and height with time. The normalized runout of the MPM (see the gray line corresponding to the left axis in fig. 8a) shows the three stages of collapse. The collapse of the granular column starts with the failure of the flank and evolves slowly until the runout is fully mobilized by \(t/\tau_{c}=1.0\). As the collapse proceeds, the runout acceleration increases (\(t/\tau_{c}=1.0\) to \(2.5\)). After this time, the runout decelerates due to basal friction, and finally stops at \(t/\tau_{c}\approx 4.0\). Both GNS and MPM show a constant normalized height (see the gray line corresponding to the right axis in fig. 8a) as only the flank of the column collapses, leaving a static truncated-conical core. GNS predicts an almost identical evolution of runout as the MPM simulation, which is noteworthy as only a small portion of the training trajectories (\(5\) out of \(26\)) includes the deceleration behavior leading to the flow coming to rest due to the basal friction before hitting the walls. Overall, the quantitative comparison shown in fig. 8a confirms that the GNS can accurately model the granular flow dynamics for the short column.
Figure 8b shows the energy evolutions from GNS rollout and MPM simulation. Based on the principle of energy conservation, the granular flow must satisfy \(E_{0}=E_{p}+E_{k}+E_{d}\)
Figure 6: Normalized runout distance (\(\left(L_{f}-L_{0}\right)/L_{0}\)) with different aspect ratios (\(a\)).
Figure 7: Evolution of flow with normalized time for GNS and MPM for the short column with \(a=0.8\). Units are in \(m\). The color represents the magnitude of the displacement. Subfigure (e) shows the final deposit at the last timestep.
where \(E_{0}\) is the potential energy of the column before material points start to mobilize, \(E_{p}\) is the potential energy, \(E_{k}\) is the kinetic energy, and \(E_{d}\) is the dissipation energy due to friction along the boundary and material. In fig. 8(b), both the GNS rollout and the MPM simulation show identical energy evolutions. A significant fraction of the stored potential energy is converted to kinetic energy in the initial stages of the failure, reaching a peak value of kinetic energy at \(t/\tau_{c}=1\). The kinetic energy dissipates due to the basal friction and the flow ceases at \(t/\tau_{c}=4.0\) when \(E_{k}\) is fully dissipated.
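The energy bookkeeping used in these plots can be computed directly from the material-point arrays. The formulas follow the caption of Figure 8; the array names are ours:

```python
import numpy as np

def flow_energies(mass, height, velocity, E0, g=9.81):
    """Potential, kinetic, and dissipated energy of the flow.
    mass, height: (N,) arrays; velocity: (N, 2) array; E0: initial potential energy."""
    Ep = np.sum(mass * g * height)
    Ek = 0.5 * np.sum(mass * np.sum(velocity ** 2, axis=1))
    Ed = E0 - Ep - Ek
    return Ep, Ek, Ed
```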
#### 4.1.2 Tall column
Tall columns exhibit different runout dynamics than the short column. GNS was only trained on granular mass with an aspect ratio of 1.0 and has not seen the dynamics of a tall column during training. As an example, we demonstrate the GNS prediction for a tall column with \(a=2.0\). Figure 9 shows the GNS rollout and MPM simulation of the runout evolution for this case. GNS rollout predicts an identical runout profile with normalized time as the MPM simulation. Similar to the short column, the tall column also shows the three stages: the initial mobilization of the flow (\(t/\tau_{c}=0\) to 1.0), runout (\(t/\tau_{c}=1.0\) to 2.5) along the failure surface, and deceleration (\(t/\tau_{c}=2.5\) to 4.0). In the tall column, however, a larger volume of sliding mass above the failure plane is mobilized. During the initial stages of the collapse, the granular mass experiences free fall due to gravity dominated by collisional dissipation. As the granular mass reaches the failure surface, the vertical kinetic energy is converted to horizontal acceleration, resulting in longer runout distances. GNS rollout shows similar behavior to the MPM runout simulation.
Figure 10(a) shows the normalized runout and height evolution for the tall column. Although the runout evolution remains identical in the initial phase of the collapse, MPM
Figure 8: (a) Normalized runout and height evolution with normalized time and (b) normalized energy evolution with normalized time for the short column \(a=0.8\). \(H_{t}\) is the height from the bottom corner of the boundary to the highest part of the column at \(t\). \(E_{p}=\sum_{i=1}^{n}m_{i}gh_{i}\) is the potential energy of the column, and \(E_{k}=\frac{1}{2}\sum_{i}^{n}m_{i}v_{i}^{2}\) is the kinetic energy of the column, where \(m_{i}\), \(h_{i}\), and \(v_{i}\) is the mass, height, and velocity of the material point \(i\), and \(n\) is the total number of material points. \(E_{d}=E_{0}-E_{p}-E_{k}\) is the dissipation energy where \(E_{0}\) is the potential energy before material points start to move.
Figure 9: Evolution of flow with normalized time for GNS and MPM for the tall column with \(a=2.0\). Units are in \(m\). The color represents the magnitude of the displacement. Subfigure (e) shows the final deposit at the last timestep.
shows a slightly larger normalized runout compared to the GNS. The final height in both GNS and MPM remains the same.
Figure 10(b) presents the normalized energy evolution of the GNS rollout and the MPM simulation. During the initial stages of the collapse (\(t/\tau_{c}=0\) to \(1.0\)), a large amount of initial potential energy is converted to kinetic energy due to the free fall of mass under gravity. Both GNS and MPM show almost identical energy profiles. GNS shows a larger potential energy loss as the flow accelerates, with an almost similar gain in kinetic energy. It indicates that GNS predicts larger frictional dissipation in tall columns, which could be a consequence of the training data containing only short columns, which have higher frictional dissipation than tall columns. At the final stage, MPM shows less dissipation due to the basal boundary friction, resulting in a slightly longer runout than the GNS rollout. Generally, the energy dissipation behavior in GNS is consistent with MPM, showing a more significant potential drop and increased accumulation of dissipation energy.
Overall, the GNS rollout is consistent with the MPM simulation with a runout error of \(5.7\) % for the tall column with \(a=2.0\), implying that the GNS can capture the dynamics of granular flows in collision-dominated tall columns despite only being trained on \(a=1.0\).
#### 4.1.3 Upscaled domain
GNS is generalizable to different initial configurations of the flow simulation owing to the strong inductive bias of the GNN (Battaglia et al., 2018). A key strength of GNS surrogate models is that they can be trained on small-scale experiments and then used to predict large-scale dynamic scenarios with complex boundary conditions. We now evaluate the scalability of GNS to a larger domain, including more material points than the training dataset. Figure 11 shows the GNS rollout of a short column (\(a=0.8\)) with \(5120\) material points (up to \(5\times\) more material points than the training dataset) for a larger simulation domain and longer rollout duration than the training dataset.
GNS successfully predicts the flow dynamics for an upscaled domain size, showing a runout profile similar to that of the MPM simulation. The GNS rollout predicts a normalized runout of
Figure 10: (a) Normalized runout and height evolution with normalized time and (b) normalized energy evolution with normalized time for the tall column with \(a=2.0\).
1.74, while the MPM simulation shows 1.76, an error of 1.30%. Figure 12 shows that the GNS rollout successfully replicates the energy evolution observed in the MPM simulation for the upscaled domain. Hence, GNS can reproduce the flow dynamics even for upscaled geometries beyond the training dataset.
The primary source of GNS rollout error is not from the simulation scale but from the portion of material points that shows a large amount of displacement during column collapse. Figure 13 shows the evolution of mean squared error (MSE) of displacement over all material points (\(N\)) with time \(t\) computed as \(\frac{1}{N}\sum_{i=1}^{N}\left(\boldsymbol{p}_{i}^{t}-\boldsymbol{p}_{MPMi}^{t}\right)^{2}\), where \(\boldsymbol{p}_{MPMi}^{t}\) is the material point position from MPM. When we compare the MSE for \(a=0.8\) with 1,824 material points and its upscaled domain (2.22\(\times\) material points), upscaling does not alter the MSE significantly. Figure 14 shows the evolution of the squared error of displacement of individual material points for the upscaled domain (\(a=0.8\)). The squared error shows larger values for those material points which run out further, i.e., the larger final displacements, but the proportion of the error with respect to the final runout is small so that GNS could simulate the upscaled case without significant error.
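The error metric of Figure 13 can be evaluated per timestep as below; summing the squared components of the displacement difference before averaging over material points is one reasonable reading of the expression above:

```python
import numpy as np

def rollout_mse(p_gns, p_mpm):
    """Mean squared displacement error between GNS and MPM positions at one
    timestep; p_gns and p_mpm are (N, 2) arrays of material-point positions."""
    return np.mean(np.sum((p_gns - p_mpm) ** 2, axis=1))
```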
Figure 11: Evolution of flow with normalized time for GNS and MPM for the upscaled case of short column with \(a=0.8\). Units are in \(m\). The color represents the magnitude of the displacement. Subfigure (e) shows the final deposit at the last timestep.
Figure 12: (a) Normalized runout and height evolution with normalized time and (b) normalized energy evolution with normalized time for the upscaled case of the short column with \(a=0.8\). Note that the data after \(t/\tau_{c}>5.0\) is abbreviated since the flow reaches a static state.
Figure 13: Evolution of mean squared displacement error over all material points with time.
## 6 Limitations
The GPU memory limits the current implementation of the GNS surrogate model. A GPU with 40 GB memory can simulate up to around 50K material points (approximately 3M edge connections). However, this limitation can be mitigated by optimizing the size of the connectivity radius \(R\). We use an \(R\) of 0.030 m, which includes more neighbor interactions than may be strictly necessary. Multi-GPU GNS rollouts will enable the scalability of GNS to larger and more complex domains.
## 7 Conclusion
Traditional numerical methods are computationally intensive when simulating large-scale granular flows. Statistical or conventional machine learning-based surrogate models are not generalizable since they do not explicitly consider the underlying physics. In this study, we develop a graph neural network (GNN)-based simulator (GNS) as a generalizable granular flow surrogate simulator. The use of graphs efficiently represents the physical state of interacting material points. At the same time, the message passing operation of GNN encourages the neural network to learn the interaction between material points. The expressive power of graphs and message passing that models the interaction between material points allows GNS to accurately predict granular flow dynamics for various conditions, including those not seen during training. We demonstrate the performance of GNS on granular column collapse. GNS precisely simulates the different flow dynamics of columns with different initial aspect ratios and can also be applied to upscaled domains with 2 to \(5\times\) more material points and a longer simulation duration than the data provided for training. GNS also shows a remarkable \(150\times\) speed-up in computation compared to the parallelized CPU version of MPM. The computational efficiency and generalizability of the GNS surrogate can expedite the evaluation of runout hazards that requires numerous scenarios.
Figure 14: Evolution of squared displacement error for each material point with normalized time in the upscaled case of \(a=0.8\). The line color represents the final displacement of each material point. | Accurately simulating granular flow dynamics is crucial for geotechnical risk assessment, in particular for evaluating risks such as landslides and debris flows. Granular flows involve a dynamic rearrangement of particles and transitions from solid-like to fluid-like responses. Traditional continuum and discrete numerical methods are often limited by their computational cost when simulating large-scale systems. Statistical or machine learning-based models offer an alternative; however, they are largely empirical and based on a limited set of parameters. Because of the dependence in their learning, traditional machine learning-based models require huge amounts of training data. To resolve these problems, we use a graph neural network, a state-of-the-art machine learning architecture, to represent the state of dynamically changing granular flows.
2307.13203 | Sensor selection for fine-grained behavior verification that respects
privacy (extended version) | A useful capability is that of classifying some agent's behavior using data
from a sequence, or trace, of sensor measurements. The sensor selection problem
involves choosing a subset of available sensors to ensure that, when generated,
observation traces will contain enough information to determine whether the
agent's activities match some pattern. In generalizing prior work, this paper
studies a formulation in which multiple behavioral itineraries may be supplied,
with sensors selected to distinguish between behaviors. This allows one to pose
fine-grained questions, e.g., to position the agent's activity on a spectrum.
In addition, with multiple itineraries, one can also ask about choices of
sensors where some behavior is always plausibly concealed by (or mistaken for)
another. Using sensor ambiguity to limit the acquisition of knowledge is a
strong privacy guarantee, a form of guarantee which some earlier work examined
under formulations distinct from our inter-itinerary conflation approach. By
concretely formulating privacy requirements for sensor selection, this paper
connects both lines of work in a novel fashion: privacy-where there is a bound
from above, and behavior verification-where sensor choices are bounded from
below. We examine the worst-case computational complexity that results from
both types of bounds, proving that upper bounds are more challenging under
standard computational complexity assumptions. The problem is intractable in
general, but we introduce an approach to solving this problem that can exploit
interrelationships between constraints, and identify opportunities for
optimizations. Case studies are presented to demonstrate the usefulness and
scalability of our proposed solution, and to assess the impact of the
optimizations. | Rishi Phatak, Dylan A. Shell | 2023-07-25T02:00:07 | http://arxiv.org/abs/2307.13203v2 | # Sensor selection for fine-grained behavior verification that respects privacy (extended version)
###### Abstract
A useful capability is that of classifying some agent's behavior using data from a sequence, or trace, of sensor measurements. The sensor selection problem involves choosing a subset of available sensors to ensure that, when generated, observation traces will contain enough information to determine whether the agent's activities match some pattern. In generalizing prior work, this paper studies a formulation in which multiple behavioral itineraries may be supplied, with sensors selected to distinguish between behaviors. This allows one to pose fine-grained questions, e.g., to position the agent's activity on a spectrum. In addition, with multiple itineraries, one can also ask about choices of sensors where some behavior is always plausibly concealed by (or mistaken for) another. Using sensor ambiguity to limit the acquisition of knowledge is a strong privacy guarantee, a form of guarantee which some earlier work examined under formulations distinct from our inter-itinerary conflation approach. By concretely formulating privacy requirements for sensor selection, this paper connects both lines of work in a novel fashion: privacy--where there is a bound from above, and behavior verification--where sensors choices are bounded from below. We examine the worst-case computational complexity that results from both types of bounds, proving that upper bounds are more challenging under standard computational complexity assumptions. The problem is intractable in general, but we introduce an approach to solving this problem that can exploit interrelationships between constraints, and identify opportunities for optimizations. Case studies are presented to demonstrate the usefulness and scalability of our proposed solution, and to assess the impact of the optimizations.
## I Introduction
The problems of activity recognition [24], surveillance [14, 19, 23], suspicious and/or anomalous behavior detection [15], fault diagnosis [2, 17], and task monitoring [20] -- despite applying to distinct scenarios -- all involve the challenge of analyzing behavior on the basis of streams of observations from sensors. Sensor selection and activation problems (as studied by [2, 13, 21, 22]) are concerned with selecting a set of sensors to provide _sufficient_ information to reach conclusions that are both unequivocal and correct. Yet, _too much_ information may be detrimental -- for instance, in elder care and independent living applications (cf. [19]), capturing or divulging sensitive/inappropriate information could be calamitous enough to be considered a showstopper.
As a concrete motivating example, consider the house shown in Figure 1. Suppose that it is to be turned, via automation, into a 'smart home' to serve as an assisted living space for an elderly person named Myra. Assume that occupancy sensors triggered by physical presence can be placed in each labelled, contiguous area. We might program a system that uses such sensors to track important properties related to Myra's wellness and health goals so that a carer can be notified if something is amiss. For instance, suppose that to help fend off dementia, Myra does a post-lunch crossword in her study. To determine that Myra has moved through the house and ended up in the study doing her crossword, a single occupancy sensor (study) suffices. Unfortunately, when the pool has just been cleaned, the chlorine negatively affects Myra's sinuses. To ensure that she ends up in the study _and_ never visits the swimming pool, we need 2 sensors (study, pool). The increase makes intuitive sense: we are, after all, now asking for more information about the activity than before. Notice the 3 kinds of behavior that we can now discriminate between: ones that are both safe and desirable (never visiting the pool and ending in the study), ones that are safe but undesirable (never visiting the pool, but also not ending in the study), and ones that are not safe (visiting the chlorinated pool).
Dinner time is next. We wish to have enough sensing power to tell that Myra has ended up in the lounge/dining area, having spent some time in the kitchen. A pair of sensors (kitchen, lounge/dining) will do; and to include the study and pool, these are in addition to the previous 2, giving 4 in total. But alas, now Myra is annoyed: very occasionally, she enjoys a perfectly innocent midnight snack and she feels that any sensor that discloses when she has raided the fridge (and even the frequency of such forays!) is too invasive.1 She requires that we guarantee that those evenings in which her bedroom is occupied continuously shall appear identical to those in which one (or more) incursions have been made into the kitchen.
Footnote 1: Her concern is not misplaced, given the increasing number of attacks on cloud services in recent years [3] from which stored data may be leaked.
Fig. 1: Myra's assistive living space, wherein occupancy detectors can be employed within contiguous areas, corresponding here to eight labelled regions, including the pool, study, kitchen, lounge/dining, bedroom, backyard, and front yard.
Her request, along with the previous requirements, can be met with 5 sensors (lounge/dining, study, backyard, front yard, pool). Though simplistic, this example illustrates an important idea -- merely reducing the number of sensors does not increase privacy; sometimes it is necessary to activate a different, and higher cardinality, combination of sensors to protect sensitive information.
The present paper re-visits the sensor selection model introduced in the IROS'21 paper of Rahmani et al. [13], advancing and elaborating upon it in order to treat the sort of problem just described. In that paper, the authors consider the setting where a claimant asserts that (future) movements within an environment will adhere to a given itinerary. Then the objective is to select, from some set of sensors at specific locations, a small subset that will detect any deviations from this claim. One of the present paper's key advances is the ability to constrain the information obtained from sensors, in order to meet privacy and non-disclosure requirements. Further, the present paper generalizes the problem so that multiple itineraries are considered and, consequently, the objective becomes rather more subtle. In the prior work, the problem is to select sensors that single out the claimed itinerary from all other activity; now, when closely-related itineraries are provided, the sensors selected must have adequate resolving power to distinguish fine-grained differences (recall the 3 kinds of behavior above).
This paper establishes the computational hardness of sensor selection and optimization under this richer setting (see Section V) giving a nuanced description of its relation to the constraints introduced to modulate the collected information. Then, although the problem is worst-case intractable in general, we introduce an exact method in Section VI which treats the sensor selection problem using automata theoretic tools (an approach quite distinct from the ILP of [13]). Multiple itineraries are provided as input and their interrelationships express constraints -- we examine opportunities to exploit aspects of this structure, which leads us to propose some optimizations. The empirical results we present in Section VII show that the improvements obtained from the optimizations are significant, and demonstrate how they help improve the scalability of our proposed solution.
## II Related works
So far, no single model for robotic privacy has emerged. A useful taxonomy dealing with privacy for robots (and associated intelligent systems) appears in [16]. Perhaps the most visible candidate is that of differential privacy, used by such works as [4, 12]. There, the underlying formulation builds upon a notion of nearness (originally with a static database of multiple records), and is a less natural fit for the problem of altering the processes by which data are acquired. The present work tackles how the privacy of even a single entity may be preserved without any need for added noise, provided that entity can exert some degree of control over the tools used to collect the data.
The idea of obscuring or concealing information is another candidate and is prevalent in the control community's notion of opacity: an excellent overview for Discrete Event Systems (DES) is by Jacob, Lesage, and Faure [6]. A DES is said to be opaque if a secret has some level of indistinguishability, a concept very close to the conflation constraints we define in Section III. For further reading in the role of opacity in DES, the reader is referred to [25, 8] and [7].
Previous work by Masopust and Yin affirms that the properties of detectability and opacity are worst-case intractable in general [9]. In particular, Cassez et al. [1] showed that determining the opacity of static and dynamic masks is \(\mathsf{PSPACE}\)-Complete via formulations of so-called 'state-based' and 'trace-based' opacities. In our work, importantly, simply obfuscating states is not enough, as how that particular state was reached also plays a role. A second factor which differentiates our work is that we allow specifications of constraints between two specified behaviors, instead of making them binary, one-versus-all decisions. An important subtlety, moreover, is that the conflation constraints are directed (cf. also [11]), implying that a more fine-grained designation of obfuscation is allowed without necessarily running in both directions. Thus, we find it more suitable to reduce directly from the inclusion problem rather than from universality.
## III Problem statement and definitions
The environment in which some agent of interest moves is modelled as a discrete structure called the _world graph_:
**Definition 1** (World Graph [13]).: A world graph is an edge-labelled, directed multigraph \(\mathcal{G}=(V,E,\mathrm{src},\mathrm{tgt},v_{0},S,\mathbb{Y},\lambda)\):
* \(V\) is a non-empty vertex set,
* \(E\) is a set of edges,
* \(\mathrm{src}:E\to V\) and \(\mathrm{tgt}:E\to V\) are source and target functions, respectively, identifying a source vertex and target vertex for each edge,
* \(v_{0}\in V\) is an initial vertex,
* \(S=\{s_{1},s_{2},\ldots,s_{k}\}\) is a nonempty finite set of sensors,
* \(\mathbb{Y}=\{Y_{s_{1}},Y_{s_{2}},\ldots,Y_{s_{k}}\}\) is a collection of mutually disjoint event sets associated to each sensor, and
* \(\lambda:E\rightarrow\wp(Y_{s_{1}}\cup Y_{s_{2}}\cup\cdots\cup Y_{s_{k}})\) is a labelling function, which assigns to each edge a world-observation: a set of events.
(Here \(\wp(X)\), the powerset, denotes all the subsets of \(X\).)
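For concreteness, a minimal Python encoding of this structure might look as follows; the class and field names are our own and only illustrative.

```python
# A minimal, illustrative encoding of Definition 1 (names are our own choices).
from dataclasses import dataclass
from typing import Dict, FrozenSet, Hashable, Set

@dataclass
class WorldGraph:
    vertices: Set[Hashable]                          # V
    edges: Set[Hashable]                             # E
    src: Dict[Hashable, Hashable]                    # src: E -> V
    tgt: Dict[Hashable, Hashable]                    # tgt: E -> V
    v0: Hashable                                     # initial vertex
    sensor_events: Dict[str, FrozenSet[Hashable]]    # s -> Y_s (pairwise disjoint)
    label: Dict[Hashable, FrozenSet[Hashable]]       # lambda: E -> subset of the union of the Y_s
```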
The usefulness of the world graph is that it governs two major aspects of the agent's locomotion: how it may move, and what would happen if it moved in a certain way. The agent is known to start its movements at \(v_{0}\) and take connected edges.
However, the agent cannot make any transitions that are not permitted by the world graph. Myra, for example, cannot jump from the bedroom to the lounge/dining without first going through the kitchen. Thus, the collection of all paths that can physically be taken by the agent is defined as follows:
**Definition 2** (Walks [13]).: A string \(e_{1}e_{2}\ldots e_{n}\in E^{*}\) is a walk on the world graph if and only if \(\mathrm{src}(e_{1})=v_{0}\) and for all \(i\in\{1,\ldots,n-1\}\) we have that \(\mathrm{tgt}(e_{i})=\mathrm{src}(e_{i+1})\). The set of all walks over \(\mathcal{G}\) is denoted \(\mathrm{Walks}(\mathcal{G})\)
Next, we seek to understand what role the sensors play when an agent interacts with the world. Whenever an edge is crossed, it causes a 'sensor response' described by the label on that edge: those sensors which are associated with the sensor values in the label (and are turned on/selected) will emit those values. Returning to the home in Figure 1, assume there are sensors in the bedroom and study which measure occupancy. Then, when Myra starts in the bedroom and moves to the study, we would obtain the event \(\{\texttt{bedroom}^{-},\texttt{study}^{+}\}\) for the transition, with the plus superscript representing an event triggered by detection, and minus the inverse. The model also allows sensors other than those which detect occupancy (e.g., non-directed traversal detection via break beams); see [13].
To understand the sensor values generated when crossing a single edge where sensors may be turned off, we use a sensor labelling function:
**Definition 3** (Sensor labelling function).: Let \(\mathcal{G}=(V,E,\mathrm{src},\mathrm{tgt},v_{0},S,\mathbb{Y},\lambda)\) be a world graph, and \(M\subseteq S\) a sensor selection from it. For selection \(M\), the set of all events that could be produced by those sensors will be denoted \(\mathbf{Y}(M)=\bigcup_{s\in M}Y_{s}\). Then the _sensor labelling function_ is for all \(e\in E\):
\[\lambda_{M}(e)=\begin{cases}\lambda(e)\cap\mathbf{Y}(M)&\text{if }\lambda(e) \cap\mathbf{Y}(M)\neq\varnothing,\\ \epsilon&\text{otherwise.}\end{cases}\]
(Note that \(\epsilon\) here is the standard empty symbol.) Later in the paper, Figure 2 forms an example of an environment with a world graph whose edges bear appropriate sensor labels.
Now, we may formally define the signature function for a walk and a given sensor set as follows:
**Definition 4** (Signature of a walk [13]).: For a world graph \(\mathcal{G}=(V,E,\mathrm{src},\mathrm{tgt},v_{0},S,\mathbb{Y},\lambda)\) we define function \(\beta_{\mathcal{G}}:\mathrm{Walks}(\mathcal{G})\times\wp(S)\to(\wp(\mathbf{Y}(S))\setminus\{\varnothing\})^{*}\) such that for each \(r=e_{1}e_{2}\ldots e_{n}\in\mathrm{Walks}(\mathcal{G})\) and \(M\subseteq S\), \(\beta_{\mathcal{G}}(r,M)=z_{1}z_{2}\ldots z_{n}\) in which for each \(i\in\{1,\ldots,n\}\), we have that \(z_{i}=\lambda_{M}(e_{i})\).
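The two preceding definitions translate almost literally into code. The sketch below (our own, assuming the `WorldGraph` encoding above) treats an empty masked label as \(\epsilon\), which therefore contributes nothing to the signature.

```python
# Illustrative transcription of Definitions 3 and 4 (assumes the WorldGraph sketch above).
def masked_label(G, M, e):
    """lambda_M(e): the events on edge e that sensors in the selection M can produce."""
    visible = set()
    for s in M:
        visible |= set(G.sensor_events[s])
    return frozenset(G.label[e]) & frozenset(visible)   # the empty set plays the role of epsilon

def signature(G, walk, M):
    """beta_G(walk, M): the sequence of non-empty masked labels along the walk."""
    sig = []
    for e in walk:
        z = masked_label(G, M, e)
        if z:                      # epsilon (empty) readings contribute nothing
            sig.append(z)
    return tuple(sig)
```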
The behavior of the agent will be specified with respect to a given world graph and these specifications will describe sequences of edges the agent may decide to take in the world graph. Following the convention of [13], each is called an itinerary. Subsequent definitions will involve the use of multiple itineraries in order to constrain what information about the agent's behavior the sensors are allowed to obtain.
**Definition 5** (Itinerary DFA [13]).: An itinerary DFA over a world graph \(\mathcal{G}=(V,E,\mathrm{src},\mathrm{tgt},v_{0},S,\mathbb{Y},\lambda)\) is a DFA \(\mathcal{I}=(Q,E,\delta,q_{0},F)\) in which \(Q\) is a finite set of states; \(E\) is the alphabet; \(\delta:Q\times E\to Q\) is the transition function; \(q_{0}\) is the initial state; and \(F\) is the set of accepting (final) states.
With the basic elements given, the next four definitions formalize the different classes of constraints we desire a set of sensors to satisfy. Conflation constraints allow one type of behavior to 'appear' similar to another, while discrimination constraints specify that two behaviors must be distinguishable.
**Definition 6** (Conflation constraint).: A conflation constraint on a world graph \(\mathcal{G}\) is an ordered pair of itineraries \((\mathcal{I}_{a},\mathcal{I}_{b})^{\boxplus}\).
**Definition 7** (Discrimination constraint).: A discrimination constraint on a world graph \(\mathcal{G}\) is an unordered pair of itineraries \([\mathcal{I}_{1},\mathcal{I}_{2}]^{\boxtimes}\).
Both types will be combined within a graph:
**Definition 8** (Discernment designation).: A _discernment designation_ is a mixed graph \(\mathcal{D}=(I,I_{D},I_{C})\), with vertices \(I\) being a collection of itineraries, along with undirected edges \(I_{D}\) which are a set of discrimination constraints, and arcs (directed edges) \(I_{C}\) which are a set of conflation constraints.
And, finally, we can state what a satisfying selection entails:
**Definition 9** (Satisfying sensor selection).: Given some discernment designation \(\mathcal{D}\), a sensor set \(M\subseteq S\) is a _satisfying sensor selection for \(\mathcal{D}=(I,I_{D},I_{C})\)_ if and only if both of the following conditions hold:
* For each \([\mathcal{I}_{1},\mathcal{I}_{2}]^{\boxtimes}\in I_{D}\) we have that there exist no \(w_{1}\in\mathrm{Walks}(\mathcal{G})\cap\mathcal{L}(\mathcal{I}_{1})\) and \(w_{2}\in\mathrm{Walks}(\mathcal{G})\cap\mathcal{L}(\mathcal{I}_{2})\) where \(\beta_{\mathcal{G}}(w_{1},M)=\beta_{\mathcal{G}}(w_{2},M)\).
* For each \((\mathcal{I}_{a},\mathcal{I}_{b})^{\boxplus}\in I_{C}\) we have that for every \(w\in\mathrm{Walks}(\mathcal{G})\cap\mathcal{L}(\mathcal{I}_{a})\), there exists \(c_{w}\in\mathrm{Walks}(\mathcal{G})\cap\mathcal{L}(\mathcal{I}_{b})\) where \(\beta_{\mathcal{G}}(w,M)=\beta_{\mathcal{G}}(c_{w},M)\).
In the above definition, the '\(\boxtimes\)' constraints correspond to _discrimination_ requirements, while the '\(\boxplus\)' constraints require _conflation_. The importance of the set intersections is that the only things that can really happen are walks on the world graph. When there is a discrimination constraint, there are no walks from the one itinerary that can be confused with one from the other itinerary. When there is a conflation constraint, any walk from the first itinerary has at least one from the second that appears identical. Conflation models privacy in the following sense: any putative claim that the agent followed one itinerary can be countered by arguing, just as plausibly on the basis of the sensor readings, that it followed the other itinerary. While the discrimination constraint is symmetric, the conflation constraint need not be. (Imagine: \(\{\beta_{\mathcal{G}}(w,M)|w\in\mathrm{Walks}(\mathcal{G})\cap\mathcal{L}(\mathcal{I}_{1})\}=\{a,b,c,d\}\) while \(\{\beta_{\mathcal{G}}(w^{\prime},M)|w^{\prime}\in\mathrm{Walks}(\mathcal{G})\cap\mathcal{L}(\mathcal{I}_{2})\}=\{a,b,c,d,e\}\). Then \((\mathcal{I}_{1},\mathcal{I}_{2})^{\boxplus}\) is possible, while \((\mathcal{I}_{2},\mathcal{I}_{1})^{\boxplus}\) is not.)
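To make the two conditions concrete, the following sketch (ours) checks them by brute force over walks of bounded length, reusing the `signature` function above; the itinerary-membership predicates are supplied by the caller. On world graphs with cycles this can refute a constraint but never certify it -- the actual decision procedure in this paper is the automata-theoretic one of Section VI.

```python
# Brute-force, bounded-horizon illustration of Definition 9 (not the method used in the paper).
def bounded_walks(G, max_len):
    """All walks of length <= max_len starting at G.v0."""
    layer = [((), G.v0)]
    for _ in range(max_len + 1):
        nxt = []
        for walk, v in layer:
            yield walk
            for e in G.edges:
                if G.src[e] == v:
                    nxt.append((walk + (e,), G.tgt[e]))
        layer = nxt

def discrimination_holds(G, M, in_I1, in_I2, max_len=6):
    """No I1-walk may share a signature with an I2-walk (checked up to max_len only)."""
    s1 = {signature(G, w, M) for w in bounded_walks(G, max_len) if in_I1(w)}
    s2 = {signature(G, w, M) for w in bounded_walks(G, max_len) if in_I2(w)}
    return not (s1 & s2)

def conflation_holds(G, M, in_Ia, in_Ib, max_len=6):
    """Every Ia-walk's signature must also arise from some Ib-walk (same caveat)."""
    sa = {signature(G, w, M) for w in bounded_walks(G, max_len) if in_Ia(w)}
    sb = {signature(G, w, M) for w in bounded_walks(G, max_len) if in_Ib(w)}
    return sa <= sb
```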
Now, we are ready to give the central problem of the paper:
**Decision Problem:**: **Minimal sensor selection to accommodate a discernment designation in itineraries (MSSADDI)**
* _Input:_ A world graph \(\mathcal{G}\), a discernment designation \(\mathcal{D}\), and a natural number \(k\in\mathbb{N}\).
* _Output:_ A satisfying sensor selection \(M\subseteq S\) for \(\mathcal{D}\) on \(\mathcal{G}\) with \(|M|\leq k\), or 'Infeasible' if none exist.
## IV Signature Automata
To understand how we may begin solving MSSADDI and what its theoretical complexity is, we introduce the concept of a signature automaton. Signature automata are produced from the product automata of an itinerary with the world graph:
**Definition 10** (Product automaton [13]).: Let \(\mathcal{G}=(V,E,\mathrm{src},\mathrm{tgt},v_{0},S,\mathbb{Y},\lambda)\) be a world graph and \(\mathcal{I}=(Q,E,\delta,q_{0},F)\) be an itinerary DFA. The product \(\mathcal{P}_{\mathcal{G},\mathcal{I}}\) is a partial DFA \(\mathcal{P}_{\mathcal{G},\mathcal{I}}=(Q_{\mathcal{P}},E,\delta_{\mathcal{P}}, q_{0}^{\mathcal{P}},F_{\mathcal{P}})\) with
* \(Q_{\mathcal{P}}=Q\times V\),
* \(\delta_{\mathcal{P}}:Q_{\mathcal{P}}\times E\to Q_{\mathcal{P}}\cup\{\bot\}\) is a function such that for each \((q,v)\in Q_{\mathcal{P}}\) and \(e\in E\), \(\delta_{\mathcal{P}}((q,v),e)\) is defined to be \(\bot\) if \(\mathrm{src}(e)\neq v\), otherwise, \(\delta_{\mathcal{P}}((q,v),e)=(\delta(q,e),\mathrm{tgt}(e))\),
* \(q_{0}^{\mathcal{P}}=(q_{0},v_{0})\), and
* \(F_{\mathcal{P}}=F\times V\).
The language of this product automaton, as a DFA, is the collection of (finite-length) sequences from \(E\) that can be traced starting at \(q_{0}^{\mathcal{P}}\), never producing a \(\bot\), and which arrive at some element in \(F_{\mathcal{P}}\). The language recognized is the set of walks that are within the itinerary \(\mathcal{I}\), i.e., \(\mathcal{L}(\mathcal{P}_{\mathcal{G},\mathcal{I}})=\mathrm{Walks}(\mathcal{G })\cap\mathcal{L}(\mathcal{I})\).
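A direct (illustrative) rendering of this construction in code, assuming an itinerary object that exposes its states `Q`, transition function `delta(q, e)`, initial state `q0`, and accepting states `F` (this interface is our own):

```python
# Illustrative construction of the partial-DFA product of Definition 10.
def product_automaton(G, it):
    """`it` is assumed to expose states it.Q, transition it.delta(q, e), start it.q0, finals it.F."""
    Q_P = {(q, v) for q in it.Q for v in G.vertices}

    def delta_P(state, e):
        q, v = state
        if G.src[e] != v:
            return None                            # stands for the bottom element
        return (it.delta(q, e), G.tgt[e])

    q0_P = (it.q0, G.v0)
    F_P = {(q, v) for q in it.F for v in G.vertices}
    return Q_P, delta_P, q0_P, F_P
```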
**Definition 11** (Signature automaton).: Let \(\mathcal{G}=(V,E,\mathrm{src},\mathrm{tgt},v_{0},S,\mathbb{Y},\lambda)\) be a world graph, let \(M\subseteq S\) be a sensor selection on it, \(\mathcal{I}=(Q,E,\delta,q_{0},F)\) be an itinerary DFA, and \(\mathcal{P}_{\mathcal{G},\mathcal{I}}\) be their product. A signature automaton \(\mathcal{S}_{\mathcal{G},\mathcal{I},M}=(Q_{\mathcal{P}},\Sigma,\delta_{ \mathcal{S}},q_{0}^{\mathcal{P}},F_{\mathcal{P}})\) is a nondeterministic finite automaton with \(\epsilon\)-moves (NFA-\(\epsilon\)) with
* \(\Sigma=\{\lambda_{M}(e)\mid e\in E,\lambda_{M}(e)\neq\epsilon\}\)
* \(\delta_{\mathcal{S}}:Q_{\mathcal{P}}\times(\Sigma\cup\{\epsilon\})\to\wp(Q_{\mathcal{P}})\) is a function defined for each \((q,v)\in Q_{\mathcal{P}}\) and \(\sigma\in\Sigma\cup\{\epsilon\}\) such that \[\delta_{\mathcal{S}}\big((q,v),\sigma\big)=\Big\{\delta_{\mathcal{P}}\big((q,v),e\big)\;\Big|\;e\in E,\ \delta_{\mathcal{P}}\big((q,v),e\big)\neq\bot,\ \lambda_{M}(e)=\sigma\Big\}.\]
The signature automaton produces all the signatures that could result from following a path in the world graph conforming to the given itinerary. Formally, we have the following:
**Lemma 1**.: _For world graph \(\mathcal{G}=(V,E,\mathrm{src},\mathrm{tgt},v_{0},S,\mathbb{Y},\lambda)\), sensor selection \(M\subseteq S\), and itinerary \(\mathcal{I}=(Q,E,\delta,q_{0},F)\), if their signature automaton is \(\mathcal{S}_{\mathcal{G},\mathcal{I},M}\), then:_
\[\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I},M})=\{\beta_{\mathcal{G}}(w, M)\mid w\in\mathrm{Walks}(\mathcal{G})\cap\mathcal{L}(\mathcal{I})\}\,.\]
Proof.: For all \(w\in\mathrm{Walks}(\mathcal{G})\cap\mathcal{L}(\mathcal{I})\) there is a unique sequence of states \(q_{0}^{\mathcal{P}},q_{1}^{\mathcal{P}},\ldots,q_{n}^{\mathcal{P}}\) in \(\mathcal{P}_{\mathcal{G},\mathcal{I}}\) such that \(q_{n}^{\mathcal{P}}\in F_{\mathcal{P}}\). Following that sequence through the signature automaton returns the signature \(\beta_{\mathcal{G}}(w,M)\). Similarly, any string that is accepted by \(\mathcal{S}_{\mathcal{G},\mathcal{I},M}\) has a sequence of states \(q_{0}^{\mathcal{P}},q_{1}^{\mathcal{P}},\ldots,q_{n}^{\mathcal{P}}\) in \(\mathcal{S}_{\mathcal{G},\mathcal{I},M}\) such that \(q_{n}^{\mathcal{P}}\in F_{\mathcal{P}}\). Following those states through \(\mathcal{P}_{\mathcal{G},\mathcal{I}}\) returns the walk conforming to the itinerary which produced it.
Note that the manner in which the signature automaton was produced was simply to replace the alphabet \(E\) of the product automaton with the alphabet \(\Sigma\). This introduces nondeterminism in the automaton because two outgoing edges from a vertex in the world graph may produce the same (non-empty) sensor values. Moreover, certain transitions may be made on the empty symbol if no sensor values are produced upon taking an edge in the world graph too.
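In code, the relabelling step might look as follows (building on the sketches above); an empty reading becomes an \(\epsilon\)-move, and the resulting transition relation is genuinely nondeterministic.

```python
# Illustrative construction of the NFA-epsilon of Definition 11 from the product of Definition 10.
EPS = "eps"   # our marker for epsilon-moves

def signature_automaton(G, it, M):
    Q_P, delta_P, q0_P, F_P = product_automaton(G, it)
    trans = {}                                     # (state, symbol) -> set of successor states
    for state in Q_P:
        for e in G.edges:
            succ = delta_P(state, e)
            if succ is None:
                continue
            sym = masked_label(G, M, e) or EPS     # empty reading = epsilon-move
            trans.setdefault((state, sym), set()).add(succ)
    return trans, q0_P, F_P
```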
The usefulness of the preceding structures becomes clearer from the lemmas that follow.
**Lemma 2**.: _Given world graph \(\mathcal{G}=(V,E,\mathrm{src},\mathrm{tgt},v_{0},S,\mathbb{Y},\lambda)\) and itinerary DFAs: \(\mathcal{I}^{1}=(Q^{1},E,\delta^{1},q_{0}^{1},F^{1})\) and \(\mathcal{I}^{2}=(Q^{2},E,\delta^{2},q_{0}^{2},F^{2})\), a subset of sensors \(M\subseteq S\) is a satisfying sensor selection for constraint discrimination of itineraries \(\mathcal{I}^{1}\) and \(\mathcal{I}^{2}\) if and only if \(\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M})\cap\mathcal{L}( \mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M})=\varnothing\)._
Proof.: Assume that \(M\) satisfies the constraint \([\mathcal{I}_{1},\mathcal{I}_{2}]^{\boxtimes}\). This implies that there exist no \(w_{1}\) and \(w_{2}\), with \(w_{1}\in\mathrm{Walks}(\mathcal{G})\cap\mathcal{L}(\mathcal{I}_{1})\) and \(w_{2}\in\mathrm{Walks}(\mathcal{G})\cap\mathcal{L}(\mathcal{I}_{2})\), where \(\beta_{\mathcal{G}}(w_{1},M)=\beta_{\mathcal{G}}(w_{2},M)\). The previous fact along with Lemma 1 implies \(\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M})\cap\mathcal{L}( \mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M})=\varnothing\). The other way: if such \(w_{1}\) and \(w_{2}\) can be found, then letting \(c=\beta_{\mathcal{G}}(w_{1},M)=\beta_{\mathcal{G}}(w_{2},M)\), we have that \(\{c\}\subseteq\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M})\cap \mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M})\).
Notice that if \(\mathcal{L}(\mathcal{I}_{1})\cap\mathcal{L}(\mathcal{I}_{2})\neq\varnothing\) then any walks \(w_{1}=w_{2}\) taken from this intersection must have \(\beta_{\mathcal{G}}(w_{1},M)=\beta_{\mathcal{G}}(w_{2},M)\). Any two itineraries with overlapping languages, and whose overlap falls (partly) within the set of walks, will yield a sensor selection problem that must be infeasible when these itineraries are given as a discrimination constraint.
A similar lemma follows for the conflation constraints.
**Lemma 3**.: _Given world graph \(\mathcal{G}=(V,E,\mathrm{src},\mathrm{tgt},v_{0},S,\mathbb{Y},\lambda)\) and itinerary DFAs: \(\mathcal{I}^{1}=(Q^{1},E,\delta^{1},q_{0}^{1},F^{1})\) and \(\mathcal{I}^{2}=(Q^{2},E,\delta^{2},q_{0}^{2},F^{2})\), a subset of sensors \(M\subseteq S\) is a satisfying sensor selection for constraint conflation of itineraries \(\mathcal{I}^{1}\) and \(\mathcal{I}^{2}\) if and only if \(\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M})\subseteq\mathcal{L}( \mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M})\)._
Proof.: Assume that \(M\) satisfies the constraint \((\mathcal{I}_{1},\mathcal{I}_{2})^{\boxplus}\). This implies that every \(w\in\mathrm{Walks}(\mathcal{G})\cap\mathcal{L}(\mathcal{I}_{1})\) has a \(c_{w}\in\mathrm{Walks}(\mathcal{G})\cap\mathcal{L}(\mathcal{I}_{2})\) with \(\beta_{\mathcal{G}}(w,M)=\beta_{\mathcal{G}}(c_{w},M)\). The previous fact along with Lemma 1 implies \(\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M})\subseteq\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M})\). In the opposite direction, if there exists a \(w\) for which no \(c_{w}\) can be found, we know that \(\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M})\nsubseteq\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M})\), since \(\beta_{\mathcal{G}}(w,M)\in\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M})\) but \(\beta_{\mathcal{G}}(w,M)\notin\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M})\).
Since the sensor selection problem considered in prior work is NP-Complete (it essentially involves a single itinerary and its complement, one discrimination constraint, and zero conflation constraints), we naturally expect the present problem to be NP-Hard. And this is indeed true (though the direct proof is straightforward and, hence, omitted). For the full problem, the question is whether the conflation constraints contribute additional complexity. The answer is in the affirmative, under standard computational complexity assumptions:
**Lemma 7**.: _MSSADDI is in \(\mathsf{PSPACE}\)._
Proof.: To show that MSSADDI is in \(\mathsf{PSPACE}\), we shall show that it is in \(\mathsf{NPSPACE}\), and through Lemma 4, this implies that it is also in \(\mathsf{PSPACE}\).
Given this fact, assume that we have 'guessed' a sensor selection \(M\subseteq S\) which, in polynomial space, must be verified to be a satisfying sensor selection. Thus, we must verify that each \([\mathcal{I}^{1},\mathcal{I}^{2}]^{\boxtimes}\in I_{D}\) and each \((\mathcal{I}^{1},\mathcal{I}^{2})^{\boxplus}\in I_{C}\) is satisfied by \(M\).
First, to show that any \([\mathcal{I}^{1},\mathcal{I}^{2}]^{\boxtimes}\in I_{D}\) can be checked in polynomial time (and thus also polynomial space): construct \(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M}\) and \(\mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M}\). This simply replaces the alphabet in the product, which is of size \(|V||Q|\). Then, determining whether \(\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M})\cap\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M})=\varnothing\) is simple (cf. Lemma 5).
Thus, the total amount of time taken to check the discrimination constraints can be upper bounded by \(O\left(\sum_{[\mathcal{I}^{1},\mathcal{I}^{2}]^{\boxtimes}\in I_{D}}\mathsf{ poly}(|V||Q_{\mathcal{I}^{1}}|,|V||Q_{\mathcal{I}^{2}}|)\right)\) which is polynomial in the input size.
Next, conflation constraints: follow a similar process to construct their signature automata \(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M}\) and \(\mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M}\), and ascertain whether \(\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M})\subseteq\mathcal{L}( \mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M})\). By Lemma 6, we know this problem is \(\mathsf{PSPACE}\)-Complete, thus, it can be determined using only a polynomial amount of space. Hence \(\mathsf{MSSADDI}\in\mathsf{NPSPACE}\implies\mathsf{MSSADDI}\in\mathsf{PSPACE}\).
Next, to show that MSSADDI is \(\mathsf{PSPACE}\)-Hard, we reduce from the NFA inclusion problem in Lemma 6. One can think of this intuitively as showing that conflation constraints, in solving the inclusion problem on signature automata, cover worst-case instances.
**Lemma 8**.: _MSSADDI is \(\mathsf{PSPACE}\)-Hard._
Proof.: We reduce from NFA Inclusion, known to be \(\mathsf{PSPACE}\)-Complete (Lemma 6). Given an NFA Inclusion Problem instance \(x=\langle\mathcal{A}=(Q_{A},\Sigma,\delta_{A},q_{0}^{A},F_{A}),\mathcal{B}=( Q_{B},\Sigma,\delta_{B},q_{0}^{B},F_{B})\rangle\) we form an instance of MSSADDI \(f(x)=\langle\mathcal{G}=(V,E,\operatorname{src},\operatorname{tgt},v_{0},S, \mathbb{Y},\lambda),\mathcal{D}=(I,I_{D},I_{C}),k\rangle\).
Every state of \(\mathcal{A}\) and \(\mathcal{B}\) will be assumed to be reachable from their respective start states (unreachable states do not contribute to the NFA's language, and are easily trimmed). We construct \(\mathcal{G}\) as follows:--
1. Let the vertex set be \(V=\{v_{0}\}\cup Q_{A}\cup Q_{B}\) where \(v_{0}\) is a new vertex not in either \(Q_{A}\) or \(Q_{B}\).
2. Let the edge set be \(E=\{e_{A},e_{B}\}\cup\{e_{1},e_{2},\ldots,e_{n},\)\(e_{n+1},e_{n+2},\ldots,e_{n+m}\}\). Here \(e_{A}\) is an edge that connects \(v_{0}\) to \(q_{0}^{A}\) and \(e_{B}\) is an edge connecting \(v_{0}\) to \(q_{0}^{B}\). Assuming there are \(n\) transitions in \(\mathcal{A}\) of the form \(q_{j}^{A}\in\delta_{A}(q_{i}^{A},\sigma)\), we produce an edge \(e_{k}\) for some \(1\leq k\leq n\) which connects \(q_{i}^{A}\) to \(q_{j}^{A}\) for every such \(\sigma\). Similarly, if there are \(m\) transitions in \(\mathcal{B}\) of the form \(q_{j}^{B}\in\delta_{B}(q_{i}^{B},\sigma)\), we would have an edge \(e_{n+k}\) for some \(1\leq k\leq m\) connecting \(q_{i}^{B}\) to \(q_{j}^{B}\) for each \(\sigma\). The \(\operatorname{src}\) and \(\operatorname{tgt}\) functions are defined appropriately for all edges.
3. Let sensor set \(S=\{s_{1},s_{2},\ldots,s_{|\Sigma|}\}\) where each sensor produces exactly one event so that if \(\Sigma=\{\sigma_{1},\sigma_{2},\ldots,\sigma_{|\Sigma|}\}\) then \(Y_{s_{i}}=\{\sigma_{i}\}\) and \(\mathbb{Y}=\{Y_{s_{1}},Y_{s_{2}},\ldots,Y_{s_{|\Sigma|}}\}\).
4. The edge labelling function is defined as follows. First, let \(\lambda(e_{A})=\lambda(e_{B})=\varnothing\). Then, for each transition in \(\mathcal{A}\) of the form \(q_{j}^{A}\in\delta_{A}(q_{i}^{A},\sigma)\), if \(\sigma=\epsilon\), label that edge with \(\varnothing\), otherwise label it with the singleton set \(\{\sigma\}\) for all such \(\sigma\). Follow the same procedure again for \(\mathcal{B}\). Note that, by construction, a single sensor may cover an edge from both \(\mathcal{A}\) and \(\mathcal{B}\). This is natural as the given NFAs share the alphabet \(\Sigma\). Importantly: this does not violate the assumption that sensors have pairwise distinct readings. Turning some sensor on, means we receive its readings from both regions--that constructed from \(\mathcal{A}\)_and_\(\mathcal{B}\)--or, when turned off, from neither.
The following define \(\mathcal{D}\), the discernment designation:--
1. In the world graph \(\mathcal{G}\) constructed in the previous step, let there be \(p\leq n+m\) edges collected as \(\{e_{i_{1}},e_{i_{2}},\ldots,e_{i_{p}}\}\) where we have that each of them has a non-empty label, i.e., \(e_{i_{k}}\in E\), and \(\lambda(e_{i_{k}})\neq\varnothing\) for every \(1\leq k\leq p\). Then let the set of itineraries \(I\) be \(\{I_{e_{i_{1}}},I_{e_{i_{2}}},\ldots,I_{e_{i_{p}}}\}\cup\{I_{e_{i_{1}}^{+}},I_{e_{ i_{2}}^{+}},\ldots,I_{e_{i_{p}}^{+}}\}\cup\{I_{A},I_{B}\}\), where we will give the language accepted by each DFA. The first \(2p\) elements have a language with a single string: for \(1\leq k\leq p\), to determine the languages \(\mathcal{L}(I_{e_{i_{k}}})\) and \(\mathcal{L}(I_{e_{i_{k}}^{+}})\), run a breadth first search (BFS) from \(v_{0}\) on \(\mathcal{G}\). This co-routine will return the shortest path (consisting of specific edges) from \(v_{0}\) to \(\operatorname{src}(e_{i_{k}})\). This path is the only string accepted by \(I_{e_{i_{k}}}\), and the same path but with \(e_{i_{k}}\) appended is the only string accepted by \(I_{e_{i_{k}}^{+}}\). Next, itinerary DFA \(I_{A}\) is to be defined so it accepts a string \(e_{i_{1}}e_{i_{2}}\ldots e_{i_{r}}\) where \(e_{i_{k}}\in E\) for all \(1\leq k\leq r\) if and only if \(\operatorname{tgt}(e_{i_{r}})\in F_{A}\). Similarly, define DFA \(I_{B}\) so that it accepts a string \(e_{i_{1}}^{\prime}e_{i_{2}}^{\prime}\ldots e_{i_{r}^{\prime}}\) where \(e_{i_{k}}^{\prime}\in E\) for all \(1\leq k\leq q\) if and only if \(\operatorname{tgt}(e_{i_{r}}^{\prime})\in F_{B}\). Note that we are not asking for the given NFAs \(\mathcal{A}\) and \(\mathcal{B}\) to be converted to DFAs -- instead, we are simply constructing a DFA which recognizes that some _path_ of an accepting string arrives at an accepting state in the NFA. The construction of such a DFA is simple: For \(I_{A}\), define two states \(q_{0}\) and \(q_{1}\), with only \(q_{1}\) accepting. Then, define transitions from \(q_{0}\) to \(q_{1}\) and \(q_{1}\) to \(q_{1}\) for all \(e\in E\) such that \(\operatorname{tgt}(e)\) is a final state in \(\mathcal{A}\). Similarly, define transitions from \(q_{0}\) to \(q_{0}\) and \(q_{1}\) to \(q_{0}\) for all \(e\in E\) such that \(\operatorname{tgt}(e)\) is not a final state in \(\mathcal{A}\). Doing the same for \(\mathcal{B}\) gives \(I_{B}\).
2. Define \(I_{D}=\left\{[I_{e_{i_{1}}},I_{e_{i_{1}}^{+}}]^{\boxtimes},\ldots,[I_{e_{i_{p}}},I_{e_{i_{p}}^{+}}]^{\boxtimes}\right\}\) and \(I_{C}=\left\{(I_{A},I_{B})^{\boxplus}\right\}\).
Lastly, let \(k=|\Sigma|\).
This three-piece mapping is accomplished in polynomial time since the size of the world graph is \(O(1+|\mathcal{A}|+|\mathcal{B}|)\) and the size of \(\mathcal{D}\) (i.e., the number of constraints) is \(O(|\mathcal{A}|+|\mathcal{B}|)\).2 Since BFS runs in polynomial time on \(\mathcal{G}\), all the discrimination requirements need polynomial time to construct and each is of polynomial size. In other words, for each itinerary in a discrimination constraint, its singleton language is of polynomial length (since \(1\leq q<|V|\) if \(q\) is the length of the shortest path), thus the DFA used to construct it must also be of polynomial size. For the itineraries in the conflation constraints, the DFAs have 2 states and \(|E|\) transitions, which is polynomial in the size of \(\mathcal{A}\) and \(\mathcal{B}\).
Footnote 2: Here, \(|\cdot|\) gives the number of transitions or states, whichever is greater.
Finally, to prove correctness: there must be a satisfying sensor selection of size at most \(k\) if and only if \(\mathcal{L}(\mathcal{A})\subseteq\mathcal{L}(\mathcal{B})\).
(\(\implies\)) Assume that \(\mathcal{L}(\mathcal{A})\subseteq\mathcal{L}(\mathcal{B})\). Then the sensor selection \(M=S\) is a satisfying sensor selection because, firstly, \(|M|=|\Sigma|=k\). Secondly, note that all the discrimination constraints are satisfied because all the sensors are turned on. Lastly, the conflation constraint is also satisfied by reasoning as follows: any walk beginning at \(v_{0}\), first going to \(q_{0}^{A}\) and ending at some \(v\in F_{A}\), has a signature \(\{\sigma_{1}\}\{\sigma_{2}\}\ldots\{\sigma_{m}\}\) for which \(\sigma_{1}\sigma_{2}\ldots\sigma_{m}\in\mathcal{L}(\mathcal{A})\), which implies \(\sigma_{1}\sigma_{2}\ldots\sigma_{m}\in\mathcal{L}(\mathcal{B})\). But, by construction, one can take a path in the world graph, taking a first step from \(v_{0}\) to \(q_{0}^{B}\) without producing any sensor value, and then follow exactly the same path that is accepting in \(\mathcal{B}\) through the world graph, and this path will produce signature \(\{\sigma_{1}\}\{\sigma_{2}\}\ldots\{\sigma_{m}\}\).
(\(\Longleftarrow\)) Assume there exists some satisfying sensor selection of size less than or equal to \(k=|\Sigma|\). Firstly, no sensor may be turned off since doing so would violate the discrimination constraint between the singleton itineraries involving the edge(s) labelled with the disabled sensor's value. Thus, the sensor selection has size exactly \(k\). Secondly, the conflation constraint is also met implying that, for all signatures \(\{\sigma_{1}\}\{\sigma_{2}\}\ldots\{\sigma_{m}\}\) produced by taking \(v_{0}\) to \(q_{0}^{A}\) and ending at some \(v_{i}\in F_{A}\), there exists a path from \(v_{0}\) to \(q_{0}^{B}\) ending at \(v_{j}\in F_{B}\) such that its signature is also \(\{\sigma_{1}\}\{\sigma_{2}\}\ldots\{\sigma_{m}\}\). Since no sensor is turned off, the paths that obtain the signatures in the world graph can be taken in \(\mathcal{A}\) and \(\mathcal{B}\) as well, so \(\sigma_{1}\sigma_{2}\ldots\sigma_{m}\in\mathcal{L}(\mathcal{A})\) implies \(\sigma_{1}\sigma_{2}\ldots\sigma_{m}\in\mathcal{L}(\mathcal{B})\), thus \(\mathcal{L}(\mathcal{A})\subseteq\mathcal{L}(\mathcal{B})\).
**Theorem 1**.: _MSSADDI is \(\mathsf{PSPACE}\)-Complete._
Proof.: Follows from Lemmas 7 and 8.
## VI Algorithm Description
Having proved the theoretical complexity class of MSSADDI, we now turn to a description of the algorithm we used to solve it. Although the algorithm is not polynomial time (as, assuming \(\mathsf{P}\neq\mathsf{PSPACE}\), it couldn't be) we introduce several optimizations to help ameliorate its running time.
### _Baseline Algorithm_
The approach we chose for solving MSSADDI was a complete enumeration of subsets, with some shortcutting. The pseudo-code, based directly on the automata theoretic connections identified in the preceding, appears in Algorithm 1.
```
Inputs: A world graph \(\mathcal{G}=(V,E,\operatorname{src},\operatorname{tgt},v_{0},S,\mathbb{Y},\lambda)\) and a discernment designation \(\mathcal{D}=(I,I_{D},I_{C})\)
Output: The minimum satisfying sensor selection, if it exists, otherwise null
 1: \(M^{*}\leftarrow\) null  \(\triangleright\) The current best sensor set
 2: for \(k=|S|\) down to \(0\) do
 3:   for \(M\) in Combinations\((S,k)\) do
 4:     for \([\mathcal{I}^{1},\mathcal{I}^{2}]^{\boxtimes}\in I_{D}\) do
 5:       \(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M}\leftarrow\textsc{SignatureAutomaton}(\mathcal{G},\mathcal{I}^{1},M)\)
 6:       \(\mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M}\leftarrow\textsc{SignatureAutomaton}(\mathcal{G},\mathcal{I}^{2},M)\)
 7:       if \(\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M})\cap\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M})\neq\varnothing\) then
 8:         Continue to next \(M\)  \(\triangleright\) Check next combination
 9:     for \((\mathcal{I}^{1},\mathcal{I}^{2})^{\boxplus}\in I_{C}\) do
10:       \(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M}\leftarrow\textsc{SignatureAutomaton}(\mathcal{G},\mathcal{I}^{1},M)\)
11:       \(\mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M}\leftarrow\textsc{SignatureAutomaton}(\mathcal{G},\mathcal{I}^{2},M)\)
12:       if \(\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{1},M})\not\subseteq\mathcal{L}(\mathcal{S}_{\mathcal{G},\mathcal{I}^{2},M})\) then
13:         Continue to next \(M\)  \(\triangleright\) Check next combination
14:     if all \(I_{D}\) and \(I_{C}\) satisfied then
15:       \(M^{*}\leftarrow M\)
16:       Continue to next \(k\)  \(\triangleright\) Now try sets of size \(k-1\)
17:   if no \(M\) where \(|M|=k\) satisfies all \(I_{D}\) then
18:     return \(M^{*}\)  \(\triangleright\) Prior solution was smallest feasible one
19: return \(M^{*}\)  \(\triangleright\) Final exit
```
**Algorithm 1** Complete Enumeration for MSSADDI
It is a top down search over all subsets of \(S\) where we attempt to check each constraint by constructing its signature automaton and verifying the intersection and subset properties, lines 7 and 12, respectively, as in the previous sections. Discrimination constraints are checked first (lines 4-8) because we expect them to be easier to check than conflation constraints (Lemmas 5 and 6).
We take advantage of one more property of sensor sets in relation to discrimination constraints to define our baseline algorithm. Since we stipulate that different sensors produce different sensor outputs, it follows that if \(M\subseteq S\) does not satisfy a discrimination constraint, then neither can any subset of \(M\). Therefore, when no combination of sensors of size \(k\) satisfies _all_ the discrimination constraints, the search is ended, and the current best satisfying sensor set returned (line 18).
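The enumeration itself is short; the sketch below (ours) mirrors Algorithm 1, with the two callbacks standing in for the emptiness and inclusion tests of Lemmas 2 and 3, however those are implemented.

```python
from itertools import combinations

# Sketch of the top-down enumeration of Algorithm 1.
def minimal_selection(S, I_D, I_C, discrimination_ok, conflation_ok):
    best = None
    for k in range(len(S), -1, -1):
        disc_feasible_at_k = False
        for M in map(frozenset, combinations(S, k)):
            if not all(discrimination_ok(M, c) for c in I_D):
                continue
            disc_feasible_at_k = True
            if all(conflation_ok(M, c) for c in I_C):
                best = M
                break                               # a set of size k works; try k-1 next
        if not disc_feasible_at_k:
            return best                             # no subset can satisfy I_D (lines 17-18)
    return best
```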
Next, we propose two optimizations over the baseline algorithm just described. While each does involve a different trade-off, neither sacrifices the correctness guarantee.
### _The Caching Optimization_
Notice how the signature automaton is constructed each time an itinerary is encountered in a constraint (lines 5-6 and 10-11). This is wasteful if an itinerary appears in multiple constraints (as it may well do). The signature automaton can instead be cached after it is first constructed, so that should the same itinerary appear in another constraint, it can be retrieved without additional computation.
Note, however, the trade-off being made here: while the running time is reduced, the space requirements increase. Typical library implementations allow language intersection and subset properties to be checked only on DFAs which, when converted from the NFAs, can incur an exponential increase in space.
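A minimal sketch of the cache (our own naming): each constructed signature automaton is keyed by the itinerary and the sensor selection, so an itinerary shared by several constraints is converted at most once per candidate \(M\).

```python
# Illustrative memoization of signature automata (assumes signature_automaton above).
_sig_cache = {}

def cached_signature_automaton(G, it, M):
    key = (id(it), frozenset(M))
    if key not in _sig_cache:
        _sig_cache[key] = signature_automaton(G, it, M)
    return _sig_cache[key]
```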
### _The Adaptive Weights Optimization_
The second optimization we introduce is a dynamic re-ordering of constraints. Inspired by classical AI methods for constraint satisfaction problems (CSPs), which seek to make the current assignment _fail fast_, we devised an adaptive weighting mechanism over the constraints of the discernment designation.
Seeking to end the search as quickly as possible, discrimination constraints are checked first, in the hope that if no sensor set of cardinality \(k\) satisfies them, the search can be declared hopeless and ended immediately. Once a sensor set satisfying the discrimination constraints is found, though, the following strategy is used. Whenever a particular constraint fails to be satisfied, that sensor set 'votes' the erring constraint up, so that future sets know which constraint is likely to fail. Thus, after a few iterations, enough data has been collected that each new sensor set first checks the constraint on which most of the sets before it failed. The idea is that the more demanding (or stringent) constraints are learned and moved up in priority.
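A sketch of the voting scheme (our own naming; constraint objects are assumed hashable): failure counts accumulate across candidate sets, and each new candidate checks the historically most troublesome constraint first.

```python
from collections import Counter

# Illustrative adaptive re-ordering of constraints by past failures.
fail_votes = Counter()

def satisfies_all(M, constraints, check_one):
    """Check constraints in decreasing order of past failures; vote up the one that fails."""
    for c in sorted(constraints, key=lambda c: -fail_votes[c]):
        if not check_one(M, c):
            fail_votes[c] += 1
            return False
    return True
```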
## VII Experimental results
The following experiments were all performed on a computer running Windows 11 with an Intel i7 CPU having \(16\) GB RAM using Python 3.
As a basic sanity check, we ran the baseline algorithm on the problems presented in Section I. For these problems, the algorithm correctly provided the optimal solutions in less than \(1\,\mathrm{s}\). Next, to test the scalability of the proposed approach and to assess the impact of the optimizations, we ran the experiments that follow.
### _Test cases_
The test cases we propose are parameterized: we use an \(m\times n\) grid-type world graph. An example with \(m=n=3\) is shown in Figure 2, with the scaled versions adding cells rightward and downward (without any missing edges, unlike the figure). There is a sensor in each row that registers the fact that the agent is present within the associated row. Similarly, a column sensor detects when the agent is within that column. Sensor set \(S\) thus consists of \(m+n\) sensors, one for each row and each column. The figure shows the labelled world graph for this small instance with \(18\) edges, each arc bearing its \(\lambda\)-based labelling. These follow a simple pattern: for example, \(r_{2}^{+}\) means that row 2's sensor has triggered, going from the unoccupied to the occupied state, while \(c_{1}^{-}\) means that column \(1\)'s sensor has gone from occupied to unoccupied.
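A generator for this family might be written as follows (our own sketch, reusing the `WorldGraph` encoding given earlier; taking the top-left cell as the initial vertex is our assumption).

```python
# Illustrative generator of the m x n grid test case with row and column occupancy sensors.
def grid_world(m, n):
    vertices = {(i, j) for i in range(m) for j in range(n)}
    sensors = [f"r{i}" for i in range(m)] + [f"c{j}" for j in range(n)]
    sensor_events = {s: frozenset({f"{s}+", f"{s}-"}) for s in sensors}
    edges, src, tgt, label = set(), {}, {}, {}
    for (i, j) in vertices:
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < m and 0 <= nj < n):
                continue
            e = ((i, j), (ni, nj))                 # name each edge by its endpoints
            edges.add(e)
            src[e], tgt[e] = (i, j), (ni, nj)
            events = set()
            if ni != i:                            # the two affected row sensors change state
                events |= {f"r{i}-", f"r{ni}+"}
            if nj != j:                            # the two affected column sensors change state
                events |= {f"c{j}-", f"c{nj}+"}
            label[e] = frozenset(events)
    return WorldGraph(vertices, edges, src, tgt, (0, 0), sensor_events, label)
```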
Finally, we construct an itinerary for every state in the world graph where the language accepted by the DFA for the itinerary describes following any edge in the world graph any number of times followed by an edge incoming to this state. Essentially, the itinerary DFA for that state accepts a string of edges if and only if the last edge that was taken in that walk was an incoming edge to that state.
The number of constraints is proportional to the number of states in the world graph. We add \(mn\) discrimination constraints, each formed by randomly selecting 2 itineraries which describe ending in two states that are in different columns _and_ in different rows. Similarly, we also add \(m\) conflation constraints per column, each between 2 random itineraries that describe ending in different rows of that column. Thus, in expectation, each itinerary is in 2 discrimination constraints and 2 conflation constraints.
### _Solutions_
From the description of the problem above, it should be clear that activating either only the row sensors or only the column sensors should be a satisfying sensor selection for the discrimination constraints alone. After all, ending in a different row and column can be distinguished on the basis of either the information provided by a row sensor or that provided by a column sensor. However, when considering both the discrimination and conflation constraints, only one of these options becomes feasible -- namely, that involving only activating the column occupancy sensors. Activating a row sensor could potentially violate some conflation constraints which describe ending in that row. Note that we see another detail of MSSADDI reiterated here -- that when \(n>m\), it may be necessary to activate more sensors (i.e., column sensors as opposed to only the row sensors) to satisfy both the upper and lower information bounds as opposed to the lower bounds alone.
### _Analysis_
The basic scaling plot for various grid sizes is shown in Figure 3. As can be seen in that plot, using the caching optimization alone led on average to a \(53.5\,\%\) reduction in the running time. For our purposes, all the signature automata
were able to be cached, and memory did not seem to be an issue (i.e., we never received an out-of-memory exception). Thus, time, not space, seemed to be a dominating factor in solving this problem with current resources.
The results are even more impressive for the adaptive weights optimization. As compared to the baseline algorithm, it led on average to an \(87.6\,\%\) improvement in running time. When both optimizations are applied together, however, caching the signature automata has little additional effect once adaptive weights are in use. This makes sense because the adaptive weights allow a sensor set to be rejected quickly, lowering the probability that the same itinerary will be checked more than once.
Seeking to understand how the mix of constraints checked changes when adaptive weights are used, we decided to analyze the time spent by the algorithm in different parts of the code for the \(6\times 5\) world graph grid. We measured the wall clock every time the algorithm started checking subsets of size \(k\) (see line 2 in Algorithm 1). Furthermore, we also kept count of the number of discrimination and conflation constraints checked for each sensor set aggregated over size \(k\) before it failed. The results, including a visualization of the constraint type, appear in the stepping chart in Figure 4.
Notice, first, how the optimization leads to a greater proportion of conflation constraints being checked. For our case, conflation constraints tend to fail more often when the sensor set is of high cardinality since they are likely to include row sensors. Thus, a greater proportion (or sometimes even absolutely more) of them are checked, as compared to the baseline. We see that the decision, on the basis of Lemmas 5 and 6, to place lines 4-8 before lines 9-13 may be mistaken, on average.
Secondly, observe how the algorithm is able to terminate after concluding that no set of size \(k=2\) will satisfy all the discrimination constraints. The minimum satisfying sensor set in this case turned out to be \(3\) column sensors.
## VIII Conclusion and Future Works
This paper tackled the sensor selection problem for multiple itineraries while also allowing for considerations of privacy. We also provided strong reasoning for why merely minimizing the number of selected sensors does not by itself satisfy specific privacy requirements. We formulated this problem and proved that it was worst-case intractable. Further, we provided an algorithm (based on automata-theoretic operations) to solve the problem and considered a few optimizations over the naive implementation. In the process, we found that the gains from those optimizations were significant, owing largely to their tendency to make incorrect solutions fail fast.
In the future, research might seek a direct reduction from the problem we proposed to canonical PSPACE-Complete problems such as QSAT. Other approaches common in solving computationally hard problems, such as randomized algorithms and improved heuristics, may also be fruitful.
### _Acknowledgements_
This material is based upon work supported in part by the National Science Foundation under grant IIS-2034097 and DoD Army Research Office under award W911NF2120064.
| ある有用な能力は、センサー測定のシーケンスまたはトレースを用いて、ある代理の行動を分類することです。センサー選択問題は、生成された際に観測トレースが、代理の活動が特定のパターンに一致するかを決定するための十分な情報を含んでいるように、利用可能なセンサーのサブセットを選択することによって解決されます。本論文では、複数の行動行程が提供されるという一般的な枠組みにおける、センサー選択に関する研究を実施します。これにより、ある種の行動を区別することができます。これは、細かな質問を立てることを可能にし、例えば、代理の活動をスペクトラム上に位置付けることができます。さらに、複数の行程を持つことで、ある種の行動が常に他の行動と混同される可能性があることを確認することができます。センサーの曖昧さを利用して、知識の取得を制限することは、強固なプライバシーの保証であり、一部の以前の研究は、私たちのインター-行程混同アプローチとは異なる枠組みで検討 |
2308.03695 | Quantifiers closed under partial polymorphisms | We study Lindstrom quantifiers that satisfy certain closure properties which
are motivated by the study of polymorphisms in the context of constraint
satisfaction problems (CSP). When the algebra of polymorphisms of a finite
structure B satisfies certain equations, this gives rise to a natural closure
condition on the class of structures that map homomorphically to B. The
collection of quantifiers that satisfy closure conditions arising from a fixed
set of equations are rather more general than those arising as CSP. For any
such conditions P, we define a pebble game that delimits the distinguishing
power of the infinitary logic with all quantifiers that are P-closed. We use
the pebble game to show that the problem of deciding whether a system of linear
equations is solvable in Z2 is not expressible in the infinitary logic with all
quantifiers closed under a near-unanimity condition. | Anuj Dawar, Lauri Hella | 2023-08-07T16:12:31 | http://arxiv.org/abs/2308.03695v1 | # Quantifiers closed under partial polymorphisms
###### Abstract
We study Lindstrom quantifiers that satisfy certain closure properties which are motivated by the study of polymorphisms in the context of constraint satisfaction problems (CSP). When the algebra of polymorphisms of a finite structure \(\mathfrak{B}\) satisfies certain equations, this gives rise to a natural closure condition on the class of structures that map homomorphically to \(\mathfrak{B}\). The collection of quantifiers that satisfy closure conditions arising from a fixed set of equations are rather more general than those arising as CSP. For any such conditions \(\mathcal{P}\), we define a pebble game that delimits the distinguishing power of the infinitary logic with all quantifiers that are \(\mathcal{P}\)-closed. We use the pebble game to show that the problem of deciding whether a system of linear equations is solvable in \(\mathbb{Z}/2\mathbb{Z}\) is not expressible in the infinitary logic with all quantifiers closed under a near-unanimity condition.
## 1 Introduction
Generalized quantifiers, also known as Lindstrom quantifiers, have played a significant role in the development of finite model theory. The subject of finite model theory is the expressive power of logics in the finite, and Lindstrom quantifiers provide a very general and abstract method of constructing logics. We can associate with any isomorphism-closed class of structures \(\mathcal{K}\), a quantifier \(Q_{\mathcal{K}}\) so that the extension \(L(Q_{\mathcal{K}})\) of a logic \(L\) with the quantifier \(Q_{\mathcal{K}}\) is the _minimal_ extension of \(L\) that can express the class \(\mathcal{K}\), subject to certain natural closure conditions. For this reason, comparing the expressive power of logics with Lindstrom quantifiers is closely related to comparing the descriptive complexity of the underlying classes of structures.
Another reason for the significance of Lindstrom quantifiers is that we have powerful methods for proving inexpressibility in logics with such quantifiers. In particular, games, based on Hella's bijection games [14], are the basis of the most common inexpressivity results that have been obtained in finite model theory. The \(k,n\)-bijection game was introduced by Hella to characterize equivalence in the logic \(L^{k}_{\infty\omega}(\mathbf{Q}_{n})\), which is the extension of the infinitary logic with \(k\) variables by means of all \(n\)-ary Lindstrom quantifiers. A quantifier \(Q_{\mathcal{K}}\) is \(n\)-ary if the class \(\mathcal{K}\) is defined over a vocabulary \(\sigma\) in which all relation symbols have arity \(n\) or less. In particular, the \(k,1\)-bijection game, often called the \(k\)-pebble bijection game, characterizes equivalence in \(L^{k}_{\infty\omega}(\mathbf{Q}_{1})\) which has the same expressive power as \(C^{k}_{\infty\omega}\), the \(k\)-variable infinitary logic with counting. Hella uses the \(k,n\)-bijection game to show that, for each \(n\), there is an \((n+1)\)-ary quantifier that is not definable in \(L^{k}_{\infty\omega}(\mathbf{Q}_{n})\) for any \(k\).
The \(k,1\)-bijection game has been widely used to establish inexpressibility results for \(C^{k}_{\infty\omega}\). The \(k,n\)-bijection game for \(n>1\) has received relatively less attention. One reason is that, while equivalence in \(C^{k}_{\infty\omega}\) is a polynomial-time decidable relation, which is in fact a relation much studied on graphs in the form of the Weisfeiler-Leman algorithm, in contrast the relation induced by the \(k,n\)-bijection game for \(n>1\) reduces to isomorphism on graphs and is intractable in general. Nonetheless, there is some interest in studying, for example, the non-trivial equivalence induced by \(L^{k}_{\infty\omega}({\bf Q}_{2})\) on structures with a ternary relation. Grochow and Levet [13] investigate this relation on finite groups.
A second reason why the logics \(L^{\omega}_{\infty\omega}({\bf Q}_{n})\) have attracted less interest is that in finite model theory we are often interested in logics that are closed under vectorized first-order interpretations. This is especially so in descriptive complexity as the complexity classes we are trying to characterize usually have these closure properties. While \(L^{\omega}_{\infty\omega}({\bf Q}_{1})\) is closed under first-order interpretations, this is not the case for \(L^{\omega}_{\infty\omega}({\bf Q}_{n})\) for \(n>1\). Indeed, the closure of \(L^{\omega}_{\infty\omega}({\bf Q}_{2})\) under interpretations already includes \({\bf Q}_{n}\) for all \(n\) and so can express all properties of finite structures. So, it seems that beyond \(L^{\omega}_{\infty\omega}({\bf Q}_{1})\), interesting logics from the point of view of complexity necessarily include quantifiers of all arities.
One way of getting meaningful logics that include quantifiers of unbounded arity is to consider quantifiers restricted to stronger closure conditions than just closure under isomorphisms. In recent work, novel game-based methods have established new inexpressibility results for such logics, i.e. logics with a wide class of quantifiers of unbounded arity, but satisfying further restrictions. An important example is the class of linear-algebraic quantifiers, introduced in [6], which is the closure under interpretations of binary quantifiers invariant under invertible linear maps over finite fields. Equivalence in the resulting logic is characterized by the invertible map games introduced in [8]. These games are used in a highly sophisticated way by Lichter [17] to demonstrate a polynomial-time property that is not definable in fixed-point logic with rank, introduced in [7, 12]. The result is extended to the infinitary logic with all linear-algebraic quantifiers in [5].
Another example is the recent result of Hella [15] showing a hierarchy theorem for quantifiers based on _constraint satisfaction problems_ (CSP), using a novel game. Recall that for a fixed relational structure \(\mathfrak{B}\), \(\mathsf{CSP}(\mathfrak{B})\) denotes the class of structures that map homomorphically to \(\mathfrak{B}\). Hella establishes that, for each \(n>1\), there is a structure \(\mathfrak{B}\) with \(n+1\) elements such that \(\mathsf{CSP}(\mathfrak{B})\) is not definable in \(L^{\omega}_{\infty\omega}({\bf Q}_{1},\mathsf{CSP}_{n})\), where \(\mathsf{CSP}_{n}\) denotes the collection of all quantifiers of the form \(Q_{\mathsf{CSP}(\mathfrak{B}^{\prime})}\) where \(\mathfrak{B}^{\prime}\) has at most \(n\) elements. Note that \(\mathsf{CSP}_{n}\) includes quantifiers of all arities.
The interest in CSP quantifiers is inspired by the great progress that has been made in classifying constraint satisfaction problems in recent years, resulting in the dichotomy theorem of Bulatov and Zhuk [3, 18]. The so-called algebraic approach to the classification of CSP has shown that the complexity of \(\mathsf{CSP}(\mathfrak{B})\) is completely determined by the algebra of polymorphisms of the structure \(\mathfrak{B}\). In particular, the complexity is completely determined by the equational theory of this algebra. As we make explicit in Section 3 below, equations satisfied by the polymorphisms of \(\mathfrak{B}\) naturally give rise to certain closure properties for the class of structures \(\mathsf{CSP}(\mathfrak{B})\), which we describe by _partial polymorphisms_.
A central aim of the present paper is to initiate the study of quantifiers closed under partial polymorphisms. We present a Spoiler-Duplicator pebble game, based on bijection games, which exactly characterises the expressive power of such quantifiers. More precisely, there is such a game for any suitable family \(\mathcal{P}\) of partial polymorphisms. The exact definition of the game and the proof of the characterization are given in Section 4.
As a case study, we consider the partial polymorphisms described by a _near-unanimity_ condition. It is known since the seminal work of Feder and Vardi [11] that if a structure \(\mathfrak{B}\) admits a near-unanimity polymorphism, then \(\mathsf{CSP}(\mathfrak{B})\) has _bounded width_, i.e. it (or more precisely, its complement) is definable in Datalog. On the other hand, the problem of determining the solvability of a system of equations over the two-element field \(\mathbb{Z}\,/\,2\,\mathbb{Z}\) is the classic example of a tractable CSP that is not of bounded width. Indeed, it is not even definable in \(C^{\omega}_{\infty\omega}\)[1]. We show that the collection of quantifiers that are closed under near-unanimity partial polymorphisms is much richer than the classes \(\mathsf{CSP}(\mathfrak{B})\) where \(\mathfrak{B}\) has a near-unanimity polymorphism. The collection not only includes quantifiers which are
not CSP, but it also includes CSP quantifiers which are not of bounded width, including intractable ones such as hypergraph colourability. Still, we are able to show that the problem of solving systems of equations over \(\mathbb{Z}\,/\,2\,\mathbb{Z}\) is not definable in the extension of \(C^{\omega}_{\infty\omega}\) with _all_ quantifiers closed under near-unanimity partial polymorphisms. This sheds new light on the inter-definability of constraint satisfaction problems. For instance, while it follows from the arity hierarchy of [14] that the extension of \(C^{\omega}_{\infty\omega}\) with a quantifier for graph \(3\)-colourability still cannot define solvability of systems of equations over \(\mathbb{Z}\,/\,2\,\mathbb{Z}\), our result shows this also for the extension of \(C^{\omega}_{\infty\omega}\) with all hypergraph colourability quantifiers.
## 2 Preliminaries
We assume basic familiarity with logic, and in particular the logics commonly used in finite model theory (see [9], for example). We write \(L^{k}_{\infty\omega}\) to denote the infinitary logic (that is, the closure of first-order logic with infinitary conjunctions and disjunctions) with \(k\) variables and \(L^{\omega}_{\infty\omega}\) for \(\bigcup_{k\in\omega}L^{k}_{\infty\omega}\). We are mainly interested in the extensions of these logics with generalized quantifiers, which we introduce in more detail in Section 2.1 below.
We use Fraktur letters \(\mathfrak{A},\mathfrak{B},\ldots\) to denote structures and the corresponding Roman letters \(A,B,\ldots\) to denote their universes. Unless otherwise mentioned, all structures are assumed to be finite. We use function notation, e.g. \(f:A\to B\) to denote possibly _partial_ functions. If \(f:A\to B\) is a function and \(\vec{a}\in A^{m}\) a tuple, we write \(f(\vec{a})\) for the tuple in \(B^{m}\) obtained by applying \(f\) to \(\vec{a}\) componentwise. If \(\vec{a}_{1},\ldots,\vec{a}_{n}\) is a sequence of \(m\)-tuples, we write \((\vec{a}_{1},\ldots,\vec{a}_{n})^{T}\) for the sequence \(\vec{b}_{1},\ldots,\vec{b}_{m}\) of \(n\)-tuples, where \(\vec{b}_{i}\) is the tuple of \(i\)th components of \(\vec{a}_{1},\ldots,\vec{a}_{n}\). Given a function \(f:A^{n}\to B\), we write \(\hat{f}(\vec{a}_{1},\ldots,\vec{a}_{n})\) to denote \(f((\vec{a}_{1},\ldots,\vec{a}_{n})^{T})=(f(\vec{b}_{1}),\ldots,f(\vec{b}_{m}))\).
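To keep the transpose and componentwise-application notation straight, the following minimal Python sketch (the helper names are ours, purely illustrative) computes \((\vec{a}_{1},\ldots,\vec{a}_{n})^{T}\) and \(\hat{f}\) for tuples represented as Python tuples.

```python
# Minimal sketch of the tuple notation above (illustrative helper names only).
def transpose(tuples):
    """(a_1, ..., a_n)^T: turn a sequence of n m-tuples into m n-tuples."""
    return tuple(zip(*tuples))

def f_hat(f, *tuples):
    """f_hat(a_1, ..., a_n) = (f(b_1), ..., f(b_m)), b_j being the j-th components."""
    return tuple(f(b) for b in transpose(tuples))

# Example: f is the ternary majority function on {0, 1}, applied componentwise.
def maj3(b):
    return max(set(b), key=b.count)

a1, a2, a3 = (0, 0, 1), (0, 1, 1), (1, 0, 1)
print(transpose((a1, a2, a3)))   # ((0, 0, 1), (0, 1, 0), (1, 1, 1))
print(f_hat(maj3, a1, a2, a3))   # (0, 0, 1)
```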
For a pair of structures \(\mathfrak{A}\) and \(\mathfrak{B}\), a _partial isomorphism_ from \(\mathfrak{A}\) to \(\mathfrak{B}\) is a partial function \(f:A\to B\) which is an isomorphism between the substructure of \(\mathfrak{A}\) induced by the domain of \(f\) and the substructure of \(\mathfrak{B}\) induced by the image of \(f\). We write \(\operatorname{PI}(\mathfrak{A},\mathfrak{B})\) to denote the collection of all partial isomorphisms from \(\mathfrak{A}\) to \(\mathfrak{B}\).
We write \(\mathbb{N}\) or \(\omega\) to denote the natural numbers, and \(\mathbb{Z}\) to denote the ring of integers. For any \(n\in\mathbb{N}\), we write \([n]\) to denote the set \(\{1,\ldots,n\}\). When mentioned without further qualification, a graph \(G=(V,E)\) is simple and undirected. That is, it is a structure with universe \(V\) and one binary relation \(E\) that is irreflexive and symmetric. The _girth_ of a graph \(G\) is the length of the shortest cycle in \(G\).
### Generalized quantifiers
Let \(\sigma,\tau\) be relational vocabularies with \(\tau=\{R_{1},\ldots,R_{m}\}\), and \(\operatorname{ar}(R_{i})=r_{i}\) for each \(i\in[m]\). An interpretation \(\mathcal{I}\) of \(\tau\) in \(\sigma\) with parameters \(\vec{z}\) is a tuple of \(\sigma\)-formulas \((\psi_{1},\ldots,\psi_{m})\) along with tuples \(\vec{y}_{1},\ldots,\vec{y}_{m}\) of variables with \(|\vec{y}_{i}|=r_{i}\) for \(i\in[m]\), such that the free variables of \(\psi_{i}\) are among \(\vec{y}_{i}\vec{z}\). Such an interpretation defines a mapping that takes a \(\sigma\)-structure \(\mathfrak{A}\), along with an interpretation \(\alpha\) of the parameters \(\vec{z}\) in \(\mathfrak{A}\) to a \(\tau\)-structure \(\mathfrak{B}\) as follows. The universe of \(\mathfrak{B}\) is \(A\), and the relations \(R_{i}\in\tau\) are interpreted in \(\mathfrak{B}\) by \(R_{i}^{\mathfrak{B}}=\{\vec{b}\in A^{r_{i}}\mid(\mathfrak{A},\alpha[\vec{b}/ \vec{y}_{i}])\models\psi_{i}\}\).
Let \(L\) be a logic and \(\mathcal{K}\) a class of \(\tau\)-structures. The extension \(L(Q_{\mathcal{K}})\) of \(L\) by the _generalized quantifier_ for the class \(\mathcal{K}\) is obtained by extending the syntax of \(L\) by the following formula formation rule:
For \(\mathcal{I}=(\psi_{1},\ldots,\psi_{m})\) an interpretation of \(\tau\) in \(\sigma\) with parameters \(\vec{z}\), \(\psi(\vec{z})=Q_{\mathcal{K}}\vec{y}_{1},\ldots,\vec{y}_{m}\mathcal{I}\) is a formula over the signature \(\sigma\), with free variables \(\vec{z}\). The semantics of the formula is given by \((\mathfrak{A},\alpha)\models\psi(\vec{z})\), if, and only if, \(\mathfrak{B}:=\mathcal{I}(\mathfrak{A},\alpha)\) is in the class \(\mathcal{K}\).
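As an illustration of this semantics, here is a minimal Python sketch (all names are hypothetical, not taken from any library) in which a class \(\mathcal{K}\) is represented by a membership test on finite structures, the formulas of an interpretation are Python predicates, and \(Q_{\mathcal{K}}\) is evaluated by building \(\mathcal{I}(\mathfrak{A},\alpha)\) and testing membership.

```python
from itertools import product

# A structure is (universe, {relation name: set of tuples}); a formula is a
# predicate psi(structure, assignment) -> bool, where the assignment maps
# variable names (parameters and bound variables alike) to elements.

def interpret(struct, alpha, formulas, y_vars):
    """I(A, alpha): one new relation per formula, each of arity len(y_vars)."""
    universe, _ = struct
    rels = []
    for psi in formulas:
        rel = set()
        for b in product(universe, repeat=len(y_vars)):
            if psi(struct, {**alpha, **dict(zip(y_vars, b))}):
                rel.add(b)
        rels.append(rel)
    return universe, rels

def holds_Q(K, struct, alpha, formulas, y_vars):
    """(A, alpha) |= Q_K y (psi_1, ..., psi_m)  iff  I(A, alpha) lies in K."""
    return K(interpret(struct, alpha, formulas, y_vars))

# Example: K_0 = "the single unary relation is empty" (cf. Proposition 5 below),
# so Q_{K_0} x E(x, z) holds exactly when z has no incoming edge.
G = ([0, 1], {"E": {(0, 1)}})
psi = lambda s, a: (a["x"], a["z"]) in s[1]["E"]
K0 = lambda interpreted: interpreted[1][0] == set()
print(holds_Q(K0, G, {"z": 0}, [psi], ("x",)))   # True: nothing points to 0
print(holds_Q(K0, G, {"z": 1}, [psi], ("x",)))   # False: 0 points to 1
```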
The extension \(L(\mathbf{Q})\) of \(L\) by a collection \(\mathbf{Q}\) of generalized quantifiers is defined by adding the rules above to \(L\) for each \(Q_{\mathcal{K}}\in\mathbf{Q}\) separately.
The _type_ of the quantifier \(Q_{\mathcal{K}}\) is \((r_{1},\ldots,r_{m})\), and the _arity_ of \(Q_{\mathcal{K}}\) is \(\max\{r_{1},\ldots,r_{m}\}\). For the sake of simplicity, we assume in the sequel that the type of \(Q_{\mathcal{K}}\) is _uniform_, i.e., \(r_{i}=r_{j}\) for all \(i,j\in[m]\). This is no loss of generality, since any quantifier \(Q_{\mathcal{K}}\) is definably equivalent with another quantifier \(Q_{\mathcal{K}^{\prime}}\) of uniform type with the same arity. Furthermore, we restrict the syntactic rule of \(Q_{\mathcal{K}}\) by requiring that \(\vec{y}_{i}=\vec{y}_{j}\) for all \(i,j\in[m]\). Then we can denote the formula obtained by applying the rule simply by \(\varphi=Q_{\mathcal{K}}\vec{y}\,(\psi_{1},\ldots,\psi_{m})\). Note, however, that this convention disallows formulas of the type \(\theta=Qx,y\,(R(x,y),R(y,x))\) in which both \(x\) and \(y\) remain free even though \(x\) is bound in \(R(x,y)\) and \(y\) is bound in \(R(y,x)\), and hence weakens the expressive power of \(\mathrm{FO}^{k}(Q_{\mathcal{K}})\) and \(L^{k}_{\infty\omega}(Q_{\mathcal{K}})\). Fortunately the loss can be compensated by using more variables (e.g., \(\theta\) is equivalent with \(Qz\,(R(z,y),R(z,x))\)), whence the restriction does not affect the expressive power of \(\mathrm{FO}(Q_{\mathcal{K}})\) and \(L^{\omega}_{\infty\omega}(Q_{\mathcal{K}})\).
Let \(Q=Q_{\mathcal{K}}\) and \(Q^{\prime}=Q_{\mathcal{K}^{\prime}}\) be generalized quantifiers. We say that \(Q\) is _definable_ in \(L(Q^{\prime})\) if the defining class \(\mathcal{K}\) is definable in \(L(Q^{\prime})\), i.e., there is a sentence \(\varphi\) of \(L(Q^{\prime})\) such that \(\mathcal{K}=\{\mathfrak{A}\mid\mathfrak{A}\models\varphi\}\).
We write \(\mathbf{Q}_{n}\) to denote the collection of all quantifiers of arity at most \(n\). Hella [14] shows that for any \(n\), there is a quantifier of arity \(n+1\) that is not definable in \(L^{\omega}_{\infty\omega}(\mathbf{Q}_{n})\). The logic \(L^{\omega}_{\infty\omega}(\mathbf{Q}_{1})\) is equivalent to \(C^{\omega}_{\infty\omega}\), the infinitary logic with counting. The notion of interpretation we have defined is fairly restricted in that it does not allow for _relativizations_ or _vectorizations_ (see, e.g., [9, Def. 12.3.6]). The relativizations and vectorizations of a quantifier \(Q\) can always be seen as a _collection_ of simple quantifiers of unbounded arity.
### CSP and polymorphisms
Given relational structures \(\mathfrak{A}\) and \(\mathfrak{B}\) over the same vocabulary \(\tau\), a _homomorphism_ \(h:\mathfrak{A}\to\mathfrak{B}\) is a function that takes elements of \(A\) to elements of \(B\) and such that for every \(R\in\tau\) of arity \(r\) and any \(\vec{a}\in A^{r}\), \(\vec{a}\in R^{\mathfrak{A}}\) implies \(h(\vec{a})\in R^{\mathfrak{B}}\). For a fixed structure \(\mathfrak{B}\), we write \(\mathsf{CSP}(\mathfrak{B})\) to denote the collection of structures \(\mathfrak{A}\) for which there is some homomorphism \(h:\mathfrak{A}\to\mathfrak{B}\). By the celebrated theorem of Bulatov and Zhuk, every class \(\mathsf{CSP}(\mathfrak{B})\) is either decidable in polynomial time or NP-complete.
Given a \(\tau\)-structure \(\mathfrak{B}\) and \(m\in\mathbb{N}\), we define a \(\tau\)-structure \(\mathfrak{B}^{m}\). Its universe is \(B^{m}\) and if \(R\) in \(\tau\) is a relation of arity \(r\), and \(\vec{a}_{i}=(a_{i}^{1},\ldots,a_{i}^{m})\) is an \(m\)-tuple of elements of \(B\), for each \(i\in[r]\), then \((\vec{a}_{1},\ldots,\vec{a}_{r})\in R^{\mathfrak{B}^{m}}\) if, and only if, for each \(j\in[m]\), \((a_{1}^{j},\ldots,a_{r}^{j})\in R^{\mathfrak{B}}\). Then, a _polymorphism_ of \(\mathfrak{B}\) is a homomorphism \(p:\mathfrak{B}^{m}\to\mathfrak{B}\) for some \(m\). The collection of polymorphisms of \(\mathfrak{B}\) forms an algebraic _clone_ with universe \(B\). It is known that the equational theory of this algebra completely determines the computational complexity of \(\mathsf{CSP}(\mathfrak{B})\) (see [2] for an expository account).
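For small structures the power structure \(\mathfrak{B}^{m}\) and the polymorphism condition can be checked by brute force; the sketch below (hypothetical helper names, relations stored as sets of tuples) does exactly that.

```python
from itertools import product

def power_relation(R, m):
    """The relation R interpreted in B^m: r-tuples whose entries are m-tuples."""
    return {tuple(zip(*rows)) for rows in product(R, repeat=m)}

def is_polymorphism(p, m, relations):
    """Check that p : B^m -> B is a homomorphism from the power structure to B."""
    return all(tuple(p(a) for a in tup) in R
               for R in relations
               for tup in power_relation(R, m))

# Example: min is a binary polymorphism of ({0, 1}, <=), and so is max.
LEQ = {(x, y) for x in (0, 1) for y in (0, 1) if x <= y}
print(is_polymorphism(min, 2, [LEQ]))   # True
print(is_polymorphism(max, 2, [LEQ]))   # True: max is also monotone
```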
A function \(m:B^{3}\to B\) is a _majority_ function if it satisfies the equations \(m(a,a,b)=m(a,b,a)=m(b,a,a)=a\) for all \(a,b\in B\). More generally, for \(\ell\geq 3\), a function \(n:B^{\ell}\to B\) is a _near-unanimity_ function of arity \(\ell\) if for any \(\ell\)-tuple \(\vec{a}\), we have \(n(\vec{a})=a\) whenever at least \(\ell-1\) components of \(\vec{a}\) are \(a\). In particular, a near-unanimity function of arity \(3\) is a majority function. A function \(M:B^{3}\to B\) is a _Maltsev_ function if it satisfies the identities \(M(a,b,b)=M(b,b,a)=a\) for all \(a,b\in B\).
For any structure \(\mathfrak{B}\) which has a near-unanimity polymorphism, the class \(\mathsf{CSP}(\mathfrak{B})\) is decidable in polynomial time, and definable in \(L^{\omega}_{\infty\omega}\). If \(\mathfrak{B}\) admits a Maltsev polymorphism, then \(\mathsf{CSP}(\mathfrak{B})\) is also decidable in polynomial time, but may not be definable in \(L^{\omega}_{\infty\omega}\) or \(L^{\omega}_{\infty\omega}(\mathbf{Q}_{1})\), its extension with all unary quantifiers. The classic example of a CSP with a Maltsev polymorphism that is not definable in \(L^{\omega}_{\infty\omega}(\mathbf{Q}_{1})\) is solvability of systems of equations over \(\mathbb{Z}\,/\,2\,\mathbb{Z}\) with \(\ell\) variables per equation. We can treat this as the class of structures \(\mathsf{CSP}(\mathfrak{C}_{\ell})\) where \(\mathfrak{C}_{\ell}\) is the structure with universe \(\{0,1\}\) and two \(\ell\)-ary relations \(R_{0}=\{(b_{1},\ldots,b_{\ell})\mid\sum_{i}b_{i}\equiv 0\pmod{2}\}\) and \(R_{1}=\{(b_{1},\ldots,b_{\ell})\mid\sum_{i}b_{i}\equiv 1\pmod{2}\}\).
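Concretely, the template \(\mathfrak{C}_{\ell}\) and the homomorphism test behind \(\mathsf{CSP}(\mathfrak{C}_{\ell})\) can be spelled out as follows; this is only a brute-force sketch for very small instances, and all names are hypothetical.

```python
from itertools import product

def parity_template(l):
    """The structure C_l: universe {0, 1} with the two parity relations R0, R1."""
    R0 = {t for t in product((0, 1), repeat=l) if sum(t) % 2 == 0}
    R1 = {t for t in product((0, 1), repeat=l) if sum(t) % 2 == 1}
    return ((0, 1), [R0, R1])

def has_homomorphism(A, B):
    """Brute-force test for a homomorphism from structure A to structure B."""
    (uniA, relsA), (uniB, relsB) = A, B
    elems = list(uniA)
    for values in product(uniB, repeat=len(elems)):
        h = dict(zip(elems, values))
        if all(tuple(h[x] for x in t) in RB
               for RA, RB in zip(relsA, relsB) for t in RA):
            return True
    return False

# The system  x+y+z = 0,  x+y+w = 1  over Z/2Z, encoded as an instance of CSP(C_3).
instance = (("x", "y", "z", "w"),
            [{("x", "y", "z")},      # tuples constrained to have even sum (R0)
             {("x", "y", "w")}])     # tuples constrained to have odd sum (R1)
print(has_homomorphism(instance, parity_template(3)))   # True: e.g. x=y=z=0, w=1
```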
If \(\mathcal{K}=\mathsf{CSP}(\mathfrak{B})\) for some fixed structure \(\mathfrak{B}\), we call \(Q_{\mathcal{K}}\) a _CSP quantifier_. Write \(\mathsf{CSP}_{n}\) for the collection of all CSP quantifiers \(Q_{\mathcal{K}}\) where \(\mathcal{K}=\mathsf{CSP}(\mathfrak{B})\) for a structure with at most \(n\) elements. Note that \(\mathsf{CSP}_{n}\) contains quantifiers of all arities. Hella [15] defines a pebble game that characterizes equivalence of structures in the logic \(L^{\omega}_{\infty\omega}(\mathbf{Q}_{1},\mathsf{CSP}_{n})\) and shows that there is a structure \(\mathfrak{B}\) on \(n+1\) elements such that \(\mathsf{CSP}(\mathfrak{B})\) is not definable in this logic.
## 3 Partial polymorphisms
Let \(\tau\) be a relational vocabulary, and let \(\mathfrak{C}\) be a \(\tau\)-structure with a polymorphism \(p\colon\mathfrak{C}^{n}\to\mathfrak{C}\). This gives rise to a closure condition on the class \(\mathsf{CSP}(\mathfrak{C})\). In particular, suppose \(\mathfrak{B}\in\mathsf{CSP}(\mathfrak{C})\) by a homomorphism \(h:\mathfrak{B}\to\mathfrak{C}\). We can, in a sense, "close" \(\mathfrak{B}\) under the polymorphism \(p\) by including in each relation \(R^{\mathfrak{B}}\) (\(R\in\tau\)) any tuple \(\vec{a}\) for which \(h(\vec{a})=\hat{p}(h(\vec{a}_{1}),\ldots,h(\vec{a}_{n}))\) for some \(\vec{a}_{1},\ldots,\vec{a}_{n}\in R^{\mathfrak{B}}\). The resulting structure \(\mathfrak{B}^{\prime}\) is still in \(\mathsf{CSP}(\mathfrak{C})\), as is any structure \(\mathfrak{A}\) with the same universe as \(\mathfrak{B}\) and for which \(R^{\mathfrak{A}}\subseteq R^{\mathfrak{B}^{\prime}}\) for all \(R\in\tau\).
Our aim is to generalize this type of closure properties from CSP quantifiers to a larger class of generalized quantifiers. To formally define this, it is useful to introduce some notation. For reasons that will become clear, we use _partial_ functions \(p\).
**Definition 1**: _Let \(A\neq\emptyset\) be a set, and let \(p\) be a partial function \(A^{n}\to A\)._
_(a) If \(R\subseteq A^{r}\), then \(p(R):=\{\hat{p}(\vec{a}_{1},\ldots,\vec{a}_{n})\mid\vec{a}_{1},\ldots,\vec{a} _{n}\in R\}\)._
_(b) If \(\mathfrak{A}=(A,R^{\mathfrak{A}}_{1},\ldots,R^{\mathfrak{A}}_{m})\), then we denote the structure \((A,p(R^{\mathfrak{A}}_{1}),\ldots,p(R^{\mathfrak{A}}_{m}))\) by \(p(\mathfrak{A})\)._
We say that \(p\) is a _partial polymorphism_ of a \(\tau\)-structure \(\mathfrak{A}\) with domain \(A\) if for every \(R\in\tau\), the relation \(R^{\mathfrak{A}}\) is closed with respect to \(p\), i.e., \(p(R^{\mathfrak{A}})\subseteq R^{\mathfrak{A}}\).
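The operations of Definition 1 and the partial-polymorphism test translate directly into code; the sketch below (hypothetical names) uses `None` to encode "undefined".

```python
from itertools import product

def apply_partial(p, rows):
    """p_hat(a_1, ..., a_n), or None if p is undefined on some component."""
    result = []
    for column in zip(*rows):          # j-th components of a_1, ..., a_n
        value = p(column)
        if value is None:              # convention: None encodes "undefined"
            return None
        result.append(value)
    return tuple(result)

def image_relation(p, n, R):
    """p(R) = { p_hat(a_1, ..., a_n) : a_1, ..., a_n in R }  (Definition 1)."""
    return {t for rows in product(R, repeat=n)
            if (t := apply_partial(p, rows)) is not None}

def is_partial_polymorphism(p, n, relations):
    """p is a partial polymorphism iff p(R) is a subset of R for every relation."""
    return all(image_relation(p, n, R) <= R for R in relations)

# Trivial example: the first projection is a partial (indeed total) polymorphism.
first = lambda column: column[0]
print(is_partial_polymorphism(first, 2, [{(0, 1), (1, 0)}]))   # True
```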
The reason for considering partial functions is that we are usually interested in polymorphisms that satisfy certain equations. The equations specify the polymorphism partially, but not totally. We can uniformly specify closure properties on our class of structures for all polymorphisms satisfying the equations by only requiring closure for the common partial function. This is illustrated in the examples below.
By a _family of partial functions_ we mean a class \(\mathcal{P}\) that contains a partial function \(p_{A}\colon A^{n}\to A\) for every finite set \(A\), where \(n\) is a fixed positive integer. We give next some important examples of families of partial functions that arise naturally from well-known classes of polymorphisms.
**Example 2**: _(a) The Maltsev family \(\mathcal{M}\) consists of the partial functions \(M_{A}\colon A^{3}\to A\) such that \(M_{A}(a,b,b)=M_{A}(b,b,a)=a\) for all \(a,b\in A\), and \(M_{A}(a,b,c)\) is undefined unless \(a=b\) or \(b=c\). If \(\mathfrak{A}\) has a Maltsev polymorphism \(p\colon A^{3}\to A\), then clearly \(M_{A}\) is a restriction of \(p\), whence it is a partial polymorphism of \(\mathfrak{A}\)._
_(b) The family \(\mathcal{M}\mathcal{G}\) of ternary partial majority functions consists of the partial functions \(m_{A}\colon A^{3}\to A\) such that \(m_{A}(a,a,b)=m_{A}(a,b,a)=m_{A}(b,a,a)=a\) for all \(a,b\in A\), and \(m_{A}(a,b,c)\) is undefined if \(a,b\) and \(c\) are all distinct. If \(\mathfrak{A}\) has a majority polymorphism, then \(m_{A}\) is a restriction of it, whence it is a partial polymorphism of \(\mathfrak{A}\)._
_(c) More generally, for each \(\ell\geq 3\) we define the family \(\mathcal{N}_{\ell}\) of \(\ell\)-ary partial near-unanimity functions \(n^{\ell}_{A}\colon A^{\ell}\to A\) as follows:_
* \(n^{\ell}_{A}(a_{1},\ldots,a_{\ell})=a\) _if and only if_ \(|\{i\in[\ell]\mid a_{i}=a\}|\geq\ell-1\)_._
_In particular, \(\mathcal{M}\mathcal{G}=\mathcal{N}_{3}\)._
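The three families of Example 2 can be written down explicitly; the following sketch (hypothetical names, same `None`-for-undefined convention as above) implements \(M_{A}\), \(m_{A}\) and \(n^{\ell}_{A}\).

```python
def maltsev(column):
    """Partial Maltsev M_A: (a, b, b) -> a and (b, b, a) -> a, otherwise undefined."""
    a, b, c = column
    if b == c:
        return a
    if a == b:
        return c
    return None

def near_unanimity(l):
    """Partial near-unanimity n^l_A: defined iff at least l-1 arguments agree."""
    def n(column):
        for value in set(column):
            if sum(1 for x in column if x == value) >= l - 1:
                return value
        return None
    return n

majority = near_unanimity(3)          # the partial majority function (MG = N_3)

print(maltsev((1, 2, 2)), maltsev((2, 2, 1)), maltsev((1, 2, 3)))      # 1 1 None
print(majority((5, 5, 7)), majority((5, 7, 5)), majority((5, 6, 7)))   # 5 5 None
```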
We next give a formal definition for the closure property of generalized quantifiers that arises from a family of partial functions. In the definition we use the notation \(\mathfrak{A}\leq\mathfrak{B}\) if \(\mathfrak{A}\) and \(\mathfrak{B}\) are \(\tau\)-structures such that \(A=B\) and \(R^{\mathfrak{A}}\subseteq R^{\mathfrak{B}}\) for each \(R\in\tau\). Furthermore, we define the union \(\mathfrak{A}\cup\mathfrak{B}\) of \(\mathfrak{A}\) and \(\mathfrak{B}\) to be the \(\tau\)-structure \(\mathfrak{C}\) such that \(C=A\cup B\) and \(R^{\mathfrak{C}}=R^{\mathfrak{A}}\cup R^{\mathfrak{B}}\) for each \(R\in\tau\).
**Definition 3**: _Let \(\mathcal{P}\) be a family of \(n\)-ary partial functions, and let \(Q_{\mathcal{K}}\) be a generalized quantifier of vocabulary \(\tau\). We say that \(Q_{\mathcal{K}}\) is \(\mathcal{P}\)-closed if the following holds for all \(\tau\)-structures \(\mathfrak{A}\) and \(\mathfrak{B}\) with \(A=B\):_
* _if_ \(\mathfrak{B}\in\mathcal{K}\) _and_ \(\mathfrak{A}\leq p_{A}(\mathfrak{B})\cup\mathfrak{B}\)_, then_ \(\mathfrak{A}\in\mathcal{K}\)_._
_We denote the class of all \(\mathcal{P}\)-closed quantifiers by \(\mathbf{Q}_{\mathcal{P}}\)._
Note that the condition \(\mathfrak{A}\leq p_{A}(\mathfrak{B})\cup\mathfrak{B}\) holds if and only if for every \(R\in\tau\) and every \(\vec{a}\in R^{\mathfrak{A}}\setminus R^{\mathfrak{B}}\) there are tuples \(\vec{a}_{1},\ldots,\vec{a}_{n}\in R^{\mathfrak{B}}\) such that \(\vec{a}=\widehat{p_{A}}(\vec{a}_{1},\ldots,\vec{a}_{n})\).
The quantifier \(Q_{\mathcal{K}}\) is _downwards monotone_, if \(\mathfrak{A}\leq\mathfrak{B}\) and \(\mathfrak{B}\in\mathcal{K}\) implies \(\mathfrak{A}\in\mathcal{K}\). It follows directly from Definition 3 that all \(\mathcal{P}\)-closed quantifiers are downwards monotone.
**Proposition 4**: _If \(Q_{\mathcal{K}}\in\mathbf{Q}_{\mathcal{P}}\) for some family \(\mathcal{P}\), then \(Q_{\mathcal{K}}\) is downwards monotone._
It is easy to see that, for any family \(\mathcal{P}\), the first-order quantifiers can be defined from a \(\mathcal{P}\)-closed quantifier using only negation.
**Proposition 5**: _Let \(\mathcal{K}_{0}\) be the class of all \(\{P\}\)-structures \(\mathfrak{A}\) such that \(P^{\mathfrak{A}}=\emptyset\). Then \(Q_{\mathcal{K}_{0}}\in\mathbf{Q}_{\mathcal{P}}\) for any family \(\mathcal{P}\) of partial functions._
_Proof._ If \(\mathfrak{B}\in\mathcal{K}_{0}\), then \(P^{\mathfrak{B}}=\emptyset\), whence \(p_{B}(P^{\mathfrak{B}})=\emptyset\). Thus, if \(\mathfrak{A}\leq p_{B}(\mathfrak{B})\cup\mathfrak{B}\), then \(P^{\mathfrak{A}}=\emptyset\), and hence \(\mathfrak{A}\in\mathcal{K}_{0}\). \(\Box\)
Note that in the case \(\operatorname{ar}(P)=1\), the quantifier \(Q_{\mathcal{K}_{0}}\) of the proposition above is the negation of the existential quantifier: \(\mathfrak{A}\models Q_{\mathcal{K}_{0}}x\,\varphi\iff\mathfrak{A}\models\neg \exists x\,\varphi\).
Up to now we have not imposed any restrictions on the family \(\mathcal{P}\). It is natural to require that the partial functions in \(\mathcal{P}\) are uniformly defined, or at least that \((A,p_{A})\) and \((B,p_{B})\) are isomorphic if \(|A|=|B|\). Such requirements are captured by the notions defined below.
**Definition 6**: _Let \(\mathcal{P}\) be a family of \(n\)-ary partial functions._
_(a) \(\mathcal{P}\) is invariant if it respects bijections: if \(f\colon A\to B\) is a bijection and \(a_{1},\ldots,a_{n}\in A\), then \(p_{B}(f(a_{1}),\ldots,f(a_{n}))\simeq f(p_{A}(a_{1},\ldots,a_{n}))\). Here the symbol \(\simeq\) says that either both sides are defined and have the same value, or both sides are undefined._
_(b) \(\mathcal{P}\) is strongly invariant if it respects injections: if \(f\colon A\to B\) is an injection and \(a_{1},\ldots,a_{n}\in A\), then \(p_{B}(f(a_{1}),\ldots,f(a_{n}))\simeq f(p_{A}(a_{1},\ldots,a_{n}))\)._
_(c) \(\mathcal{P}\) is projective, if it is strongly invariant and it is preserved by all functions: if \(f\colon A\to B\) is a function and \(a_{1},\ldots,a_{n}\in A\) are such that \(p_{A}(a_{1},\ldots,a_{n})\) is defined, then \(p_{B}(f(a_{1}),\ldots,f(a_{n}))=f(p_{A}(a_{1},\ldots,a_{n}))\)._
It is easy to verify that \(\mathcal{P}\) is invariant if, and only if, it is determined by equality types on each cardinality: there are quantifier free formulas in the language of equality \(\theta_{\mathcal{P}}^{m}(\vec{x},y)\) such that if \(|A|=m\), then \(p_{A}(\vec{a})=b\iff A\models\theta_{\mathcal{P}}^{m}[\vec{a}/\vec{x},b/y]\) holds for all \(\vec{a}\in A^{n}\) and \(b\in A\). Similarly, \(\mathcal{P}\) is strongly invariant if, and only if, the same holds with a single formula \(\theta_{\mathcal{P}}=\theta_{\mathcal{P}}^{m}\) for all \(m\in\omega\).
Note that if the family \(\mathcal{P}\) is strongly invariant, then for every finite set \(A\), \(p_{A}\) is a _partial choice function_, i.e., \(p_{A}(a_{1},\ldots,a_{n})\in\{a_{1},\ldots,a_{n}\}\). Indeed, if \(b:=p_{A}(a_{1},\ldots,a_{n})\not\in\{a_{1},\ldots,a_{n}\}\) and \(B=A\cup\{c\}\), where \(c\notin A\), then using the identity function \(f=\operatorname{id}_{A}\) of \(A\) in the condition \(p_{B}(f(a_{1}),\ldots,f(a_{n}))=f(p_{A}(a_{1},\ldots,a_{n}))\), we get \(p_{B}(a_{1},\ldots,a_{n})=b\). On the other hand, using the injection \(f^{\prime}\colon A\to B\) that agrees with \(\operatorname{id}_{A}\) on \(A\setminus\{b\}\) but maps \(b\) to \(c\), we get the contradiction \(p_{B}(a_{1},\ldots,a_{n})=c\neq b\).
**Remark 7**: _An invariant family may contain functions \(p_{A}\) that are not partial choice functions: for example the family consisting of all functions \(p_{A}\colon A^{n}\to A\) such that \(p_{A}(a_{1},\ldots,a_{n})=a_{n+1}\iff A\setminus\{a_{1},\ldots,a_{n}\}=\{a_{n +1}\}\) is invariant. However, if \(|A|>n+1\), then \(p_{A}\) is necessarily a partial choice function._
**Lemma 8**: _Let \(\mathcal{P}\) be a family of \(n\)-ary partial choice functions. Then \(Q_{\mathcal{K}}\in\mathbf{Q}_{\mathcal{P}}\) for any unary downwards monotone quantifier \(Q_{\mathcal{K}}\). In particular this holds if \(\mathcal{P}\) is strongly invariant._
_Proof._ Let \(\tau\) be the vocabulary of \(\mathcal{K}\), and assume that \(\mathfrak{B}\in\mathcal{K}\) and \(\mathfrak{A}\leq p_{A}(\mathfrak{B})\cup\mathfrak{B}\). Then for all \(R\in\tau\) and \(a\in R^{\mathfrak{A}}\setminus R^{\mathfrak{B}}\) there are \(a_{1},\ldots,a_{n}\in A\) such that \(p_{A}(a_{1},\ldots,a_{n})=a\) and \(a_{i}\in R^{\mathfrak{B}}\) for each \(i\in[n]\). Since \(p_{A}\) is a choice function, we have \(a\in\{a_{1},\ldots,a_{n}\}\), and hence \(a\in R^{\mathfrak{B}}\). Thus we see that
\(\mathfrak{A}\leq\mathfrak{B}\), and consequently \(\mathfrak{A}\in\mathcal{K}\), since \(Q_{\mathcal{K}}\) is downwards monotone. \(\Box\)
It is easy to see that the families \(\mathcal{M}\) and \(\mathcal{N}_{\ell}\), \(\ell\geq 3\), introduced in Example 2, are strongly invariant. Indeed, the defining formulas \(\theta_{\mathcal{M}}\) and \(\theta_{\mathcal{N}_{\ell}}\) are easily obtained from the identities that define these conditions. Thus, all unary downwards monotone quantifiers are \(\mathcal{M}\)-closed and \(\mathcal{N}_{\ell}\)-closed. For the families \(\mathcal{N}_{\ell}\) we can prove a much stronger result:
**Lemma 9**: _Let \(\ell\geq 3\), and let \(Q_{\mathcal{K}}\) be a downwards monotone quantifier of arity \(r<\ell\). Then \(Q_{\mathcal{K}}\in\mathbf{Q}_{\mathcal{N}_{\ell}}\)._
_Proof._ Let \(\tau\) be the vocabulary of \(\mathcal{K}\), and assume that \(\mathfrak{B}\in\mathcal{K}\) and \(\mathfrak{A}\leq n^{\ell}_{A}(\mathfrak{B})\cup\mathfrak{B}\). Then for all \(R\in\tau\) and \(\vec{a}=(a_{1},\ldots,a_{r})\in R^{\mathfrak{A}}\setminus R^{\mathfrak{B}}\) there are \(\vec{a}_{i}=(a_{i}^{1},\ldots,a_{i}^{r})\in R^{\mathfrak{B}}\), \(i\in[\ell]\), such that \(\widehat{n^{\ell}_{A}}(\vec{a}_{1},\ldots,\vec{a}_{\ell})=\vec{a}\). Thus, for each \(j\in[r]\) there is at most one \(i\in[\ell]\) such that \(a_{i}^{j}\neq a_{j}\), and hence, since \(r<\ell\), there is at least one \(i\in[\ell]\) such that \(\vec{a}=\vec{a}_{i}\). This shows that \(\mathfrak{A}\leq\mathfrak{B}\), and since \(Q_{\mathcal{K}}\) is downwards monotone, we conclude that \(\mathfrak{A}\in\mathcal{K}\). \(\Box\)
Using a technique originally due to Imhof for (upwards) monotone quantifiers (see [16]), we can show that any quantifier \(Q_{\mathcal{K}}\) is definable by a downwards monotone quantifier of the same arity. Indeed, if the vocabulary of \(\mathcal{K}\) is \(\tau=\{R_{1},\ldots,R_{m}\}\), where \(\mathrm{ar}(R_{i})=r\) for all \(i\in[m]\), we let \(\tau^{\prime}:=\{S_{1},\ldots,S_{m}\}\) be a disjoint copy of \(\tau\), and \(\tau^{*}:=\tau\cup\tau^{\prime}\). Furthermore, we let \(\mathcal{K}^{*}\) be the class of all \(\tau^{*}\)-structures \(\mathfrak{A}\) such that \(R^{\mathfrak{A}}_{i}\cap S^{\mathfrak{A}}_{i}=\emptyset\) for all \(i\in[m]\), and \((A,R^{\mathfrak{A}}_{1},\ldots,R^{\mathfrak{A}}_{m})\in\mathcal{K}\) or \(R^{\mathfrak{A}}_{i}\cup S^{\mathfrak{A}}_{i}\neq A^{r}\) for some \(i\in[m]\). Then \(Q_{\mathcal{K}^{*}}\) is downwards monotone, and clearly \(Q_{\mathcal{K}}\vec{x}\,(\psi_{1},\ldots,\psi_{m})\) is equivalent with \(Q_{\mathcal{K}^{*}}\vec{x}\,(\psi_{1},\ldots,\psi_{m},\neg\psi_{1},\ldots,\neg\psi_{m})\).
Using this observation, we get the following corollary to Lemmas 8 and 9.
**Corollary 10**: _(a) Let \(\mathcal{P}\) be as in Lemma 8. Then \(L^{k}_{\infty\omega}(\mathbf{Q}_{\mathcal{P}}\cup\mathbf{Q}_{1})\leq L^{k}_{ \infty\omega}(\mathbf{Q}_{\mathcal{P}})\)._
_(b) \(L^{k}_{\infty\omega}(\mathbf{Q}_{\mathcal{N}_{\ell}}\cup\mathbf{Q}_{\ell-1}) \leq L^{k}_{\infty\omega}(\mathbf{Q}_{\mathcal{N}_{\ell}})\)._
As explained in the beginning of this section, the definition of \(\mathcal{P}\)-closed quantifiers was inspired by the closure property of a CSP quantifier \(Q_{\mathsf{CSP}(\mathfrak{C})}\) that arises from a polymorphism of \(\mathfrak{C}\). Thus, it is natural to look for sufficient conditions on the family \(\mathcal{P}\) and the target structure \(\mathfrak{C}\) for \(Q_{\mathsf{CSP}(\mathfrak{C})}\) to be \(\mathcal{P}\)-closed. It turns out that the notions of projectivity and partial polymorphism lead to such a condition.
**Proposition 11**: _Let \(\mathcal{P}\) be a projective family of \(n\)-ary partial functions, and let \(\mathfrak{C}\) be a \(\tau\)-structure. If \(p_{C}\) is a partial polymorphism of \(\mathfrak{C}\), then \(Q_{\mathsf{CSP}(\mathfrak{C})}\in\mathbf{Q}_{\mathcal{P}}\)._
_Proof._ Assume that \(\mathfrak{B}\in\mathcal{K}\) and \(\mathfrak{A}\leq p_{A}(\mathfrak{B})\cup\mathfrak{B}\). Then \(A=B\) and there is a homomorphism \(h\colon\mathfrak{B}\to\mathfrak{C}\). We show that \(h\) is a homomorphism \(\mathfrak{A}\to\mathfrak{C}\), and hence \(\mathfrak{A}\in\mathcal{K}\). Thus let \(R\in\tau\), and let \(\vec{a}\in R^{\mathfrak{A}}\). If \(\vec{a}\in R^{\mathfrak{B}}\), then \(h(\vec{a})\in R^{\mathfrak{C}}\) by assumption. On the other hand, if \(\vec{a}\in R^{\mathfrak{A}}\setminus R^{\mathfrak{B}}\), then there exist tuples \(\vec{a}_{1},\ldots,\vec{a}_{n}\in R^{\mathfrak{B}}\) such that \(\vec{a}=\widehat{p_{A}}(\vec{a}_{1},\ldots,\vec{a}_{n})\). Since \(h\) is a homomorphism \(\mathfrak{B}\to\mathfrak{C}\), we have \(h(\vec{a}_{i})\in R^{\mathfrak{C}}\) for each \(i\in[n]\). Since \(p_{C}\) is a partial polymorphism of \(\mathfrak{C}\), we have \(\widehat{p_{C}}(h(\vec{a}_{1}),\ldots,h(\vec{a}_{n}))\in R^{\mathfrak{C}}\). Finally, since \(\mathcal{P}\) is projective, we have \(h(\vec{a})=h(\widehat{p_{A}}(\vec{a}_{1},\ldots,\vec{a}_{n}))=\widehat{p_{C}}( h(\vec{a}_{1}),\ldots,h(\vec{a}_{n}))\), and hence \(h(\vec{a})\in R^{\mathfrak{C}}\). \(\Box\)
We can now apply Proposition 11 to the families introduced in Example 2.
**Example 12**: _(a) Consider a constraint satisfaction problem \(\mathsf{CSP}(\mathfrak{C})\) such that \(\mathfrak{C}\) has a Maltsev polymorphism \(p\colon\mathfrak{C}^{3}\to\mathfrak{C}\). We show that \(Q_{\mathsf{CSP}(\mathfrak{C})}\in\mathbf{Q}_{\mathcal{M}}\). As pointed out in Example 2, \(M_{C}\) is a partial polymorphism of \(\mathfrak{C}\). Thus, by Proposition 11 it suffices to show that the Maltsev family \(\mathcal{M}\) is projective._
_Thus, assume that \(f\colon A\to B\) is a function, and \(M_{A}(a,b,c)\) is defined. Then \(a=b\) and \(M_{A}(a,b,c)=c\), or \(b=c\) and \(M_{A}(a,b,c)=a\). In the former case we have \(f(a)=f(b)\), whence \(M_{B}(f(a),f(b),f(c))=f(c)=f(M_{A}(a,b,c))\). In the latter case we have \(f(b)=f(c)\), whence \(M_{B}(f(a),f(b),f(c))=f(a)=f(M_{A}(a,b,c))\)._
_(b) The \(n\)-regular hypergraph \(m\)-colouring problem is \(\mathsf{CSP}(\mathfrak{H}_{n,m})\), where \(\mathfrak{H}_{n,m}=([m],R_{n,m})\) is the complete \(n\)-regular hypergraph with \(m\) vertices, i.e.,_
* \(R_{n,m}:=\{(v_{1},\ldots,v_{n})\in[m]^{n}\mid v_{i}\neq v_{j}\text{ for all }1\leq i<j\leq n\}\)_._
_We show that \(Q_{\mathsf{CSP}(\mathfrak{H}_{n,m})}\in\mathbf{Q}_{\mathcal{M}\mathcal{G}}\) for all \(n\geq 2\) and \(m\geq n\). By Proposition 11 it suffices to show that \(m_{[m]}\) is a partial polymorphism of \(\mathfrak{H}_{n,m}\), and the family \(\mathcal{M}\mathcal{G}\) is projective._
_To see that \(m_{[m]}\) is a partial polymorphism of \(\mathfrak{H}_{n,m}\), assume that \(\vec{a}_{i}=(a_{i}^{1},\ldots,a_{i}^{n})\in R_{n,m}\) for \(i\in[3]\), and \(\vec{a}=(a_{1},\ldots,a_{n})=\widehat{m_{[m]}}(\vec{a}_{1},\vec{a}_{2},\vec{a}_{3})\). By the definition of \(m_{[m]}\), for each \(j\in[n]\) we have \(|\{i\in[3]\mid a_{i}^{j}=a_{j}\}|\geq 2\). Thus for any two distinct \(j,k\in[n]\), there is \(i\in[3]\) such that \(a_{j}=a_{i}^{j}\) and \(a_{i}^{k}=a_{k}\), whence \(a_{j}\neq a_{k}\). Thus we have \(\vec{a}\in R_{n,m}\)._
_To show that \(\mathcal{M}\mathcal{G}\) is projective, assume that \(f\colon A\to B\) is a function, and \(m_{A}(a,b,c)\) is defined. Then \(a=b=m_{A}(a,b,c)\), \(a=c=m_{A}(a,b,c)\) or \(b=c=m_{A}(a,b,c)\). In the first case we have \(f(m_{A}(a,b,c))=f(a)=f(b)=m_{B}(f(a),f(b),f(c))\), as desired. The two other cases are similar._
_(c) In the same way we can show that the family \(\mathcal{N}_{\ell}\) of partial near-unanimity functions is projective for any \(\ell\geq 3\). We now relax the notion of hypergraph colouring as follows: Let \(\mathfrak{H}=(H,R)\), where \(R\subseteq H^{n}\), be a hypergraph and let \(k<n\). A \(k\)-weak \(m\)-colouring of \(\mathfrak{H}\) is a function \(f\colon H\to[m]\) such that for all \(e\in R\) and all \(i\in[m]\), \(|e\cap f^{-1}[\{i\}]|\leq k\). Observe now that there exists a \(k\)-weak \(m\)-colouring of \(\mathfrak{H}\) if and only if \(\mathfrak{H}\in\mathsf{CSP}(\mathfrak{H}_{n,m}^{k})\), where \(\mathfrak{H}_{n,m}^{k}=([m],R_{n,m}^{k})\) is the structure such that_
* \(R_{n,m}^{k}:=\{(v_{1},\ldots,v_{n})\in[m]^{n}\mid|\{v_{i}\mid i\in I\}|\geq 2 \text{ for all }I\subseteq[n]\text{ with }|I|=k+1\}\)_._
_Note that \(\mathfrak{H}_{n,m}^{1}=\mathfrak{H}_{n,m}\), whence \(m_{[m]}=n_{[m]}^{3}\) is a partial polymorphism of \(\mathfrak{H}_{n,m}^{1}\). It is straightforward to generalize this to \(\ell>3\): \(n_{[m]}^{\ell}\) is a partial polymorphism of \(\mathfrak{H}_{n,m}^{\ell-2}\). Thus by Proposition 11, the \(\mathrm{CSP}\) quantifier \(Q_{\mathsf{CSP}(\mathfrak{H}_{n,m}^{\ell-2})}\) is \(\mathcal{N}_{\ell}\)-closed._
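Example 12(b) can be verified by brute force for small parameters; the self-contained sketch below (hypothetical names) checks that the partial majority function maps \(R_{n,m}\) into \(R_{n,m}\), i.e. that \(m_{[m]}\) is a partial polymorphism of \(\mathfrak{H}_{n,m}\).

```python
from itertools import product, permutations

def majority(column):
    """Partial majority: defined iff at least two of the three arguments agree."""
    a, b, c = column
    if a == b or a == c:
        return a
    if b == c:
        return b
    return None

def m_hat(rows):
    """Componentwise application of the partial majority, None if undefined."""
    values = [majority(column) for column in zip(*rows)]
    return None if None in values else tuple(values)

def complete_hypergraph_relation(n, m):
    """R_{n,m}: all n-tuples of pairwise distinct vertices from [m]."""
    return set(permutations(range(1, m + 1), n))

n, m = 3, 4
R = complete_hypergraph_relation(n, m)
ok = all(t in R
         for rows in product(R, repeat=3)
         if (t := m_hat(rows)) is not None)
print(ok)   # True: m_[m] is a partial polymorphism of H_{3,4}
```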
**Remark 13**: _As shown in Example 12(b), the partial majority function \(m_{[m]}\) is a partial polymorphism of the structure \(\mathfrak{H}_{n,m}\). However, there does not exist any polymorphism \(p\colon[m]^{3}\to[m]\) that extends \(m_{[m]}\). This can be verified directly, but it also follows from the fact that \(\mathsf{CSP}(\mathfrak{C})\) is of bounded width for any \(\mathfrak{C}\) that has a majority polymorphism ([11]), but \(\mathsf{CSP}(\mathfrak{H}_{n,m})\) is not of bounded width. The same holds for the partial functions \(n_{[m]}^{\ell}\) and the structures \(\mathfrak{H}_{n,m}^{k}\) in Example 12(c)._
## 4 Pebble game for \(\mathcal{P}\)-closed quantifiers
In this section we introduce a pebble game that characterizes equivalence of structures with respect to \(L^{k}_{\infty\omega}(\mathbf{Q}_{\mathcal{P}})\), the extension of the infinitary \(k\)-variable logic \(L^{k}_{\infty\omega}\) by the class of all \(\mathcal{P}\)-closed quantifiers.
We fix a family \(\mathcal{P}\) of \(n\)-ary partial functions for the rest of the section. Given two structures \(\mathfrak{A}\) and \(\mathfrak{B}\) of the same vocabulary, and assignments \(\alpha\) and \(\beta\) on \(\mathfrak{A}\) and \(\mathfrak{B}\), respectively, such that \(\mathrm{dom}(\alpha)=\mathrm{dom}(\beta)\), we write \((\mathfrak{A},\alpha)\equiv_{\infty\omega,\mathcal{P}}^{k}(\mathfrak{B},\beta)\) if the equivalence
\[(\mathfrak{A},\alpha)\models\varphi\iff(\mathfrak{B},\beta)\models\varphi\]
holds for all formulas \(\varphi\in L^{k}_{\infty\omega}(\mathbf{Q}_{\mathcal{P}})\) with free variables in \(\mathrm{dom}(\alpha)\). If \(\alpha=\beta=\emptyset\), we write simply \(\mathfrak{A}\equiv_{\infty\omega,\mathcal{P}}^{k}\mathfrak{B}\) instead of \((\mathfrak{A},\emptyset)\equiv_{\infty\omega,\mathcal{P}}^{k}(\mathfrak{B},\emptyset)\).
The basic idea of our pebble game for a pair \((\mathfrak{A},\mathfrak{B})\) of structures is the following. In each round Duplicator gives a bijection \(f\colon A\to B\), just like in the bijection games of [14], but instead of using \(\vec{b}=f(\vec{a})\) as answer for Spoiler's move \(\vec{a}\in A^{r}\), she is allowed to give a sequence \(\vec{b}_{1},\ldots,\vec{b}_{n}\in B^{r}\) of alternative answers as long as \(\vec{b}=\widehat{p_{B}}(\vec{b}_{1},\ldots,\vec{b}_{n})\). Spoiler completes the round by choosing one of these alternatives \(\vec{b}_{i}\). Spoiler wins if \(\vec{a}\mapsto\vec{b}_{i}\) is not a partial isomorphism; otherwise the game carries on from the new position.
Observe now that if Duplicator has a winning strategy for the first round of the game, then \(f(\mathfrak{A})\leq p_{B}(\mathfrak{B})\cup\mathfrak{B}\). Indeed, if Spoiler chooses a tuple \(\vec{a}\in R^{\mathfrak{A}}\), then Duplicator has to answer by either the tuple \(f(\vec{a})\), or a sequence \(\vec{b}_{1},\ldots,\vec{b}_{n}\in B^{r}\) of tuples such that \(f(\vec{a})=\widehat{p_{B}}(\vec{b}_{1},\ldots,\vec{b}_{n})\); in
the first case she loses if \(f(\vec{a})\not\in R^{\mathfrak{B}}\), and in the second case she loses if \(\vec{b}_{i}\not\in R^{\mathfrak{B}}\) for some \(i\in[n]\). Thus if Duplicator has a winning strategy in the one round game and \(\mathfrak{B}\in\mathcal{K}\) for some \(\mathcal{P}\)-closed quantifier \(Q_{\mathcal{K}}\), then \(f(\mathfrak{A})\in\mathcal{K}\), and since \(f\) is an isomorphism \(\mathfrak{A}\to f(\mathfrak{A})\), also \(\mathfrak{A}\in\mathcal{K}\). In other words, if \(\mathfrak{B}\models Q_{\mathcal{K}}\vec{y}\left(R_{1}(\vec{y}),\ldots,R_{m}( \vec{y})\right)\), then \(\mathfrak{A}\models Q_{\mathcal{K}}\vec{y}\left(R_{1}(\vec{y}),\ldots,R_{m}( \vec{y})\right)\). The reverse implication is obtained by using the move described above with the structures switched.
By allowing only \(k\) variables and repeating rounds indefinitely (unless Spoiler wins at some round), we obtain a game such that Duplicator having a winning strategy implies \(\mathfrak{A}\equiv_{\infty\omega,\mathcal{P}}^{k}\mathfrak{B}\). However, in order to prove the converse implication we need to modify the rules explained above. This is because \(p_{B}(\mathfrak{B})\cup\mathfrak{B}\) is not necessarily closed with respect to the function \(p_{B}\), and in the argument above it would equally well suffice that \(f(\mathfrak{A})\leq\mathfrak{C}\) for some structure \(\mathfrak{C}\) that is obtained by applying \(p_{B}\) repeatedly to \(\mathfrak{B}\). In the next definition we formalize the idea of such repeated applications.
**Definition 14**: _Let \(p\colon A^{n}\to A\) be a partial function, and let \(R\subseteq A^{r}\). We define a sequence \(\Gamma^{i}_{p}(R)\), \(i\in\omega\), of \(r\)-ary relations on \(A\) by the following recursion:_
* \(\Gamma^{0}_{p}(R):=R\)_;_ \(\Gamma^{i+1}_{p}(R):=p(R)\cup\Gamma^{i}_{p}(R)\)_._
_Furthermore, we define \(\Gamma^{\omega}_{p}(R)=\bigcup_{i\in\omega}\Gamma^{i}_{p}(R)\)._
_This is generalized to \(\tau\)-structures in the natural way: for all \(i\in\omega\cup\{\omega\}\), \(\Gamma^{i}_{p}(\mathfrak{A})\) is the \(\tau\)-structure \(\mathfrak{C}\) such that \(C=A\) and \(R^{\mathfrak{C}}:=\Gamma^{i}_{p}(R^{\mathfrak{A}})\) for each \(R\in\tau\)._
Note that since \(\Gamma^{i}_{p}(R)\subseteq\Gamma^{i+1}_{p}(R)\) for all \(i\in\omega\) (assuming \(A\) is finite) there exists \(j\leq|A^{r}|\) such that \(\Gamma^{\omega}_{p}(R)=\Gamma^{j}_{p}(R)\). Similarly for any finite structure \(\mathfrak{A}\), \(\Gamma^{\omega}_{p}(\mathfrak{A})=\Gamma^{j}_{p}(\mathfrak{A})\) for some \(j\leq|A^{r}|\), where \(r\) is the maximum arity of relations in \(\mathfrak{A}\).
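Since the iteration stabilizes on finite structures, \(\Gamma^{\omega}_{p}(R)\) can be computed as a least fixpoint; a minimal Python sketch (hypothetical names, `None` encoding "undefined") follows, illustrated with the partial Maltsev function.

```python
from itertools import product

def gamma_omega(p, n, R):
    """Gamma^omega_p(R) of Definition 14, computed as a least fixpoint."""
    closed = set(R)
    while True:
        new = set()
        for rows in product(closed, repeat=n):
            values = [p(column) for column in zip(*rows)]
            if None not in values:                 # None encodes "p undefined"
                t = tuple(values)
                if t not in closed:
                    new.add(t)
        if not new:
            return closed
        closed |= new

def maltsev(column):
    a, b, c = column
    if b == c:
        return a
    if a == b:
        return c
    return None

R = {(0, 0), (0, 1), (1, 1)}
print(sorted(gamma_omega(maltsev, 3, R)))
# [(0, 0), (0, 1), (1, 0), (1, 1)]: one application adds (1, 0), then nothing new
```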
**Lemma 15**: _Let \(\mathcal{P}\) be a family of \(n\)-ary partial functions. A quantifier is \(\mathcal{P}\)-closed if and only if the implication_
\[\mathfrak{B}\in\mathcal{K}\text{ and }\mathfrak{A}\leq\Gamma^{\omega}_{p_{A}}( \mathfrak{B})\implies\mathfrak{A}\in\mathcal{K}\]
_holds for all structures \(\mathfrak{A}\) and \(\mathfrak{B}\) with \(A=B\)._
_Proof._ Assume first that \(Q_{\mathcal{K}}\) is \(\mathcal{P}\)-closed, \(\mathfrak{B}\in\mathcal{K}\) and \(\mathfrak{A}\leq\Gamma^{\omega}_{p_{A}}(\mathfrak{B})\). We show first by induction on \(i\) that \(\Gamma^{i}_{p_{A}}(\mathfrak{B})\in\mathcal{K}\) for all \(i\in\omega\). For \(i=0\) this holds by assumption. If \(\Gamma^{i}_{p_{A}}(\mathfrak{B})\in\mathcal{K}\), then \(\Gamma^{i+1}_{p_{A}}(\mathfrak{B})=p_{A}(\mathfrak{C})\cup\mathfrak{C}\), for \(\mathfrak{C}=\Gamma^{i}_{p_{A}}(\mathfrak{B})\), and hence \(\Gamma^{i+1}_{p_{A}}(\mathfrak{B})\in\mathcal{K}\) follows from the assumption that \(Q_{\mathcal{K}}\) is \(\mathcal{P}\)-closed.
As noted above, there exists \(j\in\omega\) such that \(\Gamma^{\omega}_{p_{A}}(\mathfrak{B})=\Gamma^{j}_{p_{A}}(\mathfrak{B})\). Thus we have \(\mathfrak{A}\leq\Gamma^{j}_{p_{A}}(\mathfrak{B})\leq\Gamma^{j+1}_{p_{A}}( \mathfrak{B})=p_{A}(\Gamma^{j}_{p_{A}}(\mathfrak{B}))\cup\Gamma^{j}_{p_{A}}( \mathfrak{B})\). Since \(\Gamma^{j}_{p_{A}}(\mathfrak{B})\in\mathcal{K}\) and \(\mathcal{K}\) is \(\mathcal{P}\)-closed, it follows that \(\mathfrak{A}\in\mathcal{K}\).
Assume then that the implication
\[(*)\hskip 14.226378pt\mathfrak{B}\in\mathcal{K}\text{ and }\mathfrak{A}\leq \Gamma^{\omega}_{p_{A}}(\mathfrak{B})\implies\mathfrak{A}\in\mathcal{K}\]
holds for all \(\mathfrak{A}\) and \(\mathfrak{B}\) with \(A=B\). Assume further that \(\mathfrak{B}\in\mathcal{K}\) and \(\mathfrak{A}\leq p_{A}(\mathfrak{B})\cup\mathfrak{B}\). By definition \(p_{A}(\mathfrak{B})\cup\mathfrak{B}=\Gamma^{1}_{p_{A}}(\mathfrak{B})\), and since \(\Gamma^{1}_{p_{A}}(\mathfrak{B})\leq\Gamma^{\omega}_{p_{A}}(\mathfrak{B})\), we have \(\mathfrak{A}\leq\Gamma^{\omega}_{p_{A}}(\mathfrak{B})\). Thus \(\mathfrak{A}\in\mathcal{K}\) follows from the implication \((*)\). \(\Box\)
### Game for \(\mathcal{P}\)-closed quantifiers
We are now ready to give the formal definition of our pebble game for \(\mathcal{P}\)-closed quantifiers. Let \(k\) be a positive integer. Assume that \(\mathfrak{A}\) and \(\mathfrak{B}\) are \(\tau\)-structures for a relational vocabulary \(\tau\). Furthermore, assume that \(\alpha\) and \(\beta\) are assignments on \(\mathfrak{A}\) and \(\mathfrak{B}\), respectively, such that \(\operatorname{dom}(\alpha)=\operatorname{dom}(\beta)\subseteq X\), where \(X=\{x_{1},\ldots,x_{k}\}\). The \(k\)_-pebble \(\mathcal{P}\) game_ for \((\mathfrak{A},\alpha)\) and \((\mathfrak{B},\beta)\) is played between _Spoiler_ and _Duplicator_. We denote the game by \(\operatorname{PG}^{\mathcal{P}}_{k}(\mathfrak{A},\mathfrak{B},\alpha,\beta)\), and we use the shorthand notation \(\operatorname{PG}^{\mathcal{P}}_{k}(\alpha,\beta)\) whenever \(\mathfrak{A}\) and \(\mathfrak{B}\) are clear from the context.
**Definition 16**: _The rules of the game \(\mathrm{PG}^{\mathcal{P}}_{k}(\mathfrak{A},\mathfrak{B},\alpha,\beta)\) are the following:_
1. _If_ \(\alpha\mapsto\beta\notin\mathrm{PI}(\mathfrak{A},\mathfrak{B})\)_, then the game ends, and Spoiler wins._
2. _If (1) does not hold, there are two types of moves that Spoiler can choose to play:_
* **Left \(\mathcal{P}\)-quantifier move:** _Spoiler starts by choosing_ \(r\in[k]\) _and an_ \(r\)_-tuple_ \(\vec{y}\in X^{r}\) _of distinct variables. Duplicator responds with a bijection_ \(f\colon B\to A\)_. Spoiler answers by choosing an_ \(r\)_-tuple_ \(\vec{b}\in B^{r}\)_. Duplicator answers by choosing_ \(P\subseteq A^{r}\) _such that_ \(f(\vec{b})\in\Gamma^{\omega}_{p_{A}}(P)\)_. Spoiler completes the round by choosing_ \(\vec{a}\in P\)_. The players continue by playing_ \(\mathrm{PG}^{\mathcal{P}}_{k}(\alpha^{\prime},\beta^{\prime})\)_, where_ \(\alpha^{\prime}:=\alpha[\vec{a}/\vec{y}]\) _and_ \(\beta^{\prime}:=\beta[\vec{b}/\vec{y}]\)_._
* **Right \(\mathcal{P}\)-quantifier move:** _Spoiler starts by choosing_ \(r\in[k]\) _and an_ \(r\)_-tuple_ \(\vec{y}\in X^{r}\) _of distinct variables. Duplicator next chooses a bijection_ \(f\colon A\to B\)_. Spoiler answers by choosing an_ \(r\)_-tuple_ \(\vec{a}\in A^{r}\)_. Duplicator answers by choosing_ \(P\subseteq B^{r}\) _such that_ \(f(\vec{a})\in\Gamma^{\omega}_{p_{B}}(P)\)_. Spoiler completes the round by choosing_ \(\vec{b}\in P\)_. The players continue by playing_ \(\mathrm{PG}^{\mathcal{P}}_{k}(\alpha^{\prime},\beta^{\prime})\)_, where_ \(\alpha^{\prime}:=\alpha[\vec{a}/\vec{y}]\) _and_ \(\beta^{\prime}:=\beta[\vec{b}/\vec{y}]\)_._
3. _Duplicator wins the game if Spoiler does not win it in a finite number of rounds._
We now prove that the game \(\mathrm{PG}^{\mathcal{P}}_{k}\) indeed characterizes equivalence of structures with respect to the infinitary \(k\)-variable logic with all \(\mathcal{P}\)-closed quantifiers.
**Theorem 17**: _Let \(\mathcal{P}\) be an invariant family of partial functions. Then Duplicator has a winning strategy in \(\mathrm{PG}^{\mathcal{P}}_{k}(\mathfrak{A},\mathfrak{B},\alpha,\beta)\) if, and only if, \((\mathfrak{A},\alpha)\equiv^{k}_{\infty\omega,\mathcal{P}}(\mathfrak{B},\beta)\)._
_Proof._\(\Rightarrow\): We prove by induction on \(\varphi\in L^{k}_{\infty\omega}(\mathbf{Q}_{\mathcal{P}})\) that (for any assignments \(\alpha\) and \(\beta\)) if Duplicator has a winning strategy in \(\mathrm{PG}^{\mathcal{P}}_{k}(\alpha,\beta)\), then \((\mathfrak{A},\alpha)\models\varphi\iff(\mathfrak{B},\beta)\models\varphi\).
* If \(\varphi\) is an atomic formula, the claim follows from the fact that Spoiler always wins the game \(\mathrm{PG}^{\mathcal{P}}_{k}(\alpha,\beta)\) immediately if \(\alpha\mapsto\beta\notin\mathrm{PI}(\mathfrak{A},\mathfrak{B})\).
* The cases \(\varphi=\neg\psi\), \(\varphi=\bigvee\Psi\) and \(\varphi=\bigwedge\Psi\) are straightforward.
* By Proposition 5, the negation of the existential quantifier is in \(\mathbf{Q}_{\mathcal{P}}\), and hence we do not need to consider the case \(\varphi=\exists x_{i}\psi\) separately.
* Consider then the case \(\varphi=Q_{\mathcal{K}}\vec{y}\,\mathcal{I}\) for some \(r\)-ary quantifier \(Q_{\mathcal{K}}\in\mathbf{Q}_{\mathcal{P}}\) and interpretation \(\mathcal{I}=(\psi_{1},\ldots,\psi_{\ell})\). We start by assuming that \((\mathfrak{A},\alpha)\models\varphi\). Thus, \(\mathcal{I}(\mathfrak{A},\alpha):=(A,R_{1},\ldots,R_{\ell})\in\mathcal{K}\). Let Spoiler play in the game \(\mathrm{PG}^{\mathcal{P}}_{k}(\alpha,\beta)\) a left \(\mathcal{P}\)-quantifier move with \(r\) and the tuple \(\vec{y}\in X^{r}\), and let \(f\colon B\to A\) be the bijection given by the winning strategy of Duplicator. Let \(\mathcal{I}(\mathfrak{B},\beta):=(B,R^{\prime}_{1},\ldots,R^{\prime}_{\ell})\), and for each \(i\in[\ell]\), let \(S_{i}:=f(R^{\prime}_{i})\). We claim that \(\mathfrak{D}:=(A,S_{1},\ldots,S_{\ell})\in\mathcal{K}\). Since \(f\) is an isomorphism \(\mathcal{I}(\mathfrak{B},\beta)\rightarrow\mathfrak{D}\), it follows then that \((\mathfrak{B},\beta)\models\varphi\). To prove the claim it suffices to show that \(\mathfrak{D}\leq\Gamma^{\omega}_{p_{A}}(\mathcal{I}(\mathfrak{A},\alpha))\), since then \(\mathfrak{D}\in\mathcal{K}\) by Lemma 15 and the assumption that \(Q_{\mathcal{K}}\) is \(\mathcal{P}\)-closed. To show this, let \(i\in[\ell]\) and \(\vec{a}\in S_{i}\). We let Spoiler choose the tuple \(\vec{b}=f^{-1}(\vec{a})\) as his answer to the bijection \(f\). Thus, \((\mathfrak{B},\beta[\vec{b}/\vec{y}])\models\psi_{i}\). Let \(P\subseteq A^{r}\) be the answer of Duplicator. Then by the rules of the game \(\vec{a}\in\Gamma^{\omega}_{p_{A}}(P)\), and Duplicator has a winning strategy in the game \(\mathrm{PG}^{\mathcal{P}}_{k}(\alpha[\vec{a}/\vec{y}],\beta[\vec{b}/\vec{y}])\) for all \(\vec{a}\in P\). Hence by induction hypothesis \((\mathfrak{A},\alpha[\vec{a}/\vec{y}])\models\psi_{i}\), i.e., \(\vec{a}\in R_{i}\), holds for all \(\vec{a}\in P\). This shows that \(S_{i}\subseteq\Gamma^{\omega}_{p_{A}}(R_{i})\), and since this holds for all \(i\in[\ell]\), we see that \(\mathfrak{D}\leq\Gamma^{\omega}_{p_{A}}(\mathcal{I}(\mathfrak{A},\alpha))\). By using the right \(\mathcal{P}\)-quantifier move in place of the left quantifier move, we can prove that \((\mathfrak{B},\beta)\models\varphi\) implies \((\mathfrak{A},\alpha)\models\varphi\). Thus, \((\mathfrak{A},\alpha)\models\varphi\iff(\mathfrak{B},\beta)\models\varphi\), as desired.
\(\Leftarrow\): Assume then that \((\mathfrak{A},\alpha)\equiv^{k}_{\infty\omega,\mathcal{P}}(\mathfrak{B},\beta)\). Clearly it suffices to show that Duplicator can play in the first round of the game \(\mathrm{PG}^{\mathcal{P}}_{k}(\alpha,\beta)\) in such a way that \((\mathfrak{A},\alpha^{\prime})\equiv^{k}_{\infty\omega,\mathcal{P}}(\mathfrak{B},\beta^{\prime})\) holds, where \(\alpha^{\prime}\) and \(\beta^{\prime}\) are the assignments arising from the choices of Spoiler and Duplicator.
Assume first that Spoiler decides to play a left \(\mathcal{P}\)-quantifier move in the first round of \(\mathrm{PG}^{\mathcal{P}}_{k}(\alpha,\beta)\). Let \(\vec{y}\in X^{r}\) be the tuple of variables he chooses. Since \(A\) and \(B\) are finite, for each \(\vec{a}\in A^{r}\) there is a formula \(\Psi_{\vec{a}}\in L^{k}_{\infty\omega}(\mathbf{Q}_{\mathcal{P}})\) such that for any \(\tau\)-structure \(\mathfrak{C}\) of size at most \(\max\{|A|,|B|\}\), any assignment \(\gamma\) on \(\mathfrak{C}\), and any tuple \(\vec{c}\in C^{r}\) we have
* \((\mathfrak{A},\alpha[\vec{a}/\vec{y}])\equiv^{k}_{\infty\omega,\mathcal{P}}( \mathfrak{C},\gamma[\vec{c}/\vec{y}])\) if and only if \((\mathfrak{C},\gamma[\vec{c}/\vec{y}])\models\Psi_{\vec{a}}\).
Let \(\vec{c}_{1},\ldots,\vec{c}_{\ell}\) be a fixed enumeration of the set \(A^{r}\), and let \(\mathfrak{J}\) be the interpretation \((\Psi_{1},\ldots,\Psi_{\ell})\), where \(\Psi_{j}:=\Psi_{\vec{c}_{j}}\) for each \(j\in[\ell]\). We define \(\mathcal{K}\) to be the closure of the class \(\{\mathfrak{D}\mid\mathfrak{D}\leq\Gamma^{\omega}_{p_{A}}(\mathfrak{J}(\mathfrak{A},\alpha))\}\) under isomorphisms. Note that if \(\mathfrak{D}\leq\Gamma^{\omega}_{p_{A}}(\mathfrak{J}(\mathfrak{A},\alpha))\) and \(\mathfrak{C}\leq\Gamma^{\omega}_{p_{A}}(\mathfrak{D})\), then clearly \(\mathfrak{C}\leq\Gamma^{\omega}_{p_{A}}(\mathfrak{J}(\mathfrak{A},\alpha))\). Hence by Lemma 15, the quantifier \(Q_{\mathcal{K}}\) is \(\mathcal{P}\)-closed. Moreover, since \(\mathfrak{J}(\mathfrak{A},\alpha)\in\mathcal{K}\), we have \((\mathfrak{A},\alpha)\models Q_{\mathcal{K}}\vec{y}\mathfrak{J}\), and consequently by our assumption, \((\mathfrak{B},\beta)\models Q_{\mathcal{K}}\vec{y}\mathfrak{J}\). Thus, there is a structure \(\mathfrak{D}\leq\Gamma^{\omega}_{p_{A}}(\mathfrak{J}(\mathfrak{A},\alpha))\) and an isomorphism \(f\colon\mathfrak{J}(\mathfrak{B},\beta)\rightarrow\mathfrak{D}\). We let Duplicator use the bijection \(f\colon B\to A\) as her answer to the choice \(\vec{y}\) of Spoiler.
Let \(\vec{b}\in B^{r}\) be the answer of Spoiler to \(f\), and let \(\vec{a}=f(\vec{b})\). Clearly \((\mathfrak{A},\alpha)\models\forall\vec{y}\bigvee_{j\in[\ell]}\Psi_{j}\), and hence by our assumption also \((\mathfrak{B},\beta)\models\forall\vec{y}\bigvee_{j\in[\ell]}\Psi_{j}\); thus there exists \(j\in[\ell]\) such that \((\mathfrak{B},\beta[\vec{b}/\vec{y}])\models\Psi_{j}\), or in other words, \(\vec{b}\in R^{\mathfrak{J}(\mathfrak{B},\beta)}_{j}\). Since \(f\) is an isomorphism \(\mathfrak{J}(\mathfrak{B},\beta)\rightarrow\mathfrak{D}\), we have \(\vec{a}\in R^{\mathfrak{D}}_{j}\). We let Duplicator use \(P:=R^{\mathfrak{J}(\mathfrak{A},\alpha)}_{j}\) as her answer to the choice \(\vec{b}\) of Spoiler; this is a legal move since \(\mathfrak{D}\leq\Gamma^{\omega}_{p_{A}}(\mathfrak{J}(\mathfrak{A},\alpha))\). Observe now that since \(P=R^{\mathfrak{J}(\mathfrak{A},\alpha)}_{j}\), we have \((\mathfrak{A},\alpha[\vec{a}/\vec{y}])\models\Psi_{\vec{c}_{j}}\), and consequently \((\mathfrak{A},\alpha[\vec{c}_{j}/\vec{y}])\equiv^{k}_{\infty\omega,\mathcal{P}}(\mathfrak{A},\alpha[\vec{a}/\vec{y}])\), for all \(\vec{a}\in P\). On the other hand we also have \((\mathfrak{B},\beta[\vec{b}/\vec{y}])\models\Psi_{\vec{c}_{j}}\), and hence \((\mathfrak{A},\alpha[\vec{c}_{j}/\vec{y}])\equiv^{k}_{\infty\omega,\mathcal{P}}(\mathfrak{B},\beta[\vec{b}/\vec{y}])\). Thus the condition \((\mathfrak{A},\alpha^{\prime})\equiv^{k}_{\infty\omega,\mathcal{P}}(\mathfrak{B},\beta^{\prime})\), where \(\alpha^{\prime}=\alpha[\vec{a}/\vec{y}]\) and \(\beta^{\prime}=\beta[\vec{b}/\vec{y}]\), holds after the first round of \(\mathrm{PG}^{\mathcal{P}}_{k}(\alpha,\beta)\) irrespective of the choice \(\vec{a}\in P\) of Spoiler at the end of the round.
The case where Spoiler starts with a right \(\mathcal{P}\)-quantifier move is handled in the same way by switching the roles of \((\mathfrak{A},\alpha)\) and \((\mathfrak{B},\beta)\). \(\Box\)
## 5 Playing the game
In this section we use the game \(\mathrm{PG}^{k}_{\mathcal{P}}\) to show inexpressibility of a property of finite structures in the infinitary finite variable logic \(L^{\omega}_{\infty\omega}\) augmented by all \(\mathcal{N}_{\ell}\)-closed quantifiers. More precisely, we prove that the Boolean constraint satisfaction problem \(\mathsf{CSP}(\mathfrak{C}_{\ell})\), where \(\mathfrak{C}_{\ell}\) is the structure with \(C=\{0,1\}\) and two \(\ell\)-ary relations \(R_{0}=\{(b_{1},\ldots,b_{\ell})\mid\sum_{i\in[\ell]}b_{i}\equiv 0\pmod{2}\}\) and \(R_{1}=\{(b_{1},\ldots,b_{\ell})\mid\sum_{i\in[\ell]}b_{i}\equiv 1\pmod{2}\}\), is not definable in \(L^{\omega}_{\infty\omega}(\mathbf{Q}_{\mathcal{N}_{\ell}})\).
### CFI construction
In the proof of the undefinability of \(\mathsf{CSP}(\mathfrak{C}_{\ell})\) we use a variation of the well-known CFI construction, due to Cai, Furer and Immerman [4]. Our construction is a minor modification of the one that was used in [14] for producing non-isomorphic structures on which Duplicator wins the \(k,n\)-bijection game. We start by explaining the details of the construction.
Let \(G=(V,E,\leq^{G})\) be a connected \(\ell\)-regular ordered graph. For each vertex \(v\in V\), we use the notation \(E(v)\) for the set of edges adjacent to \(v\) and \(\vec{e}(v)=(e_{1},\ldots,e_{\ell})\) for the tuple that lists \(E(v)\) in the order \(\leq^{G}\). The CFI structures we use have in the universe two elements \((e,1)\) and \((e,2)\) for each \(e\in E\), and two \(\ell\)-ary relations that connect such pairs \((e,i)\) for edges \(e\) that are adjacent to some vertex \(v\in V\).
**Definition 18**: _Let \(G=(V,E,\leq^{G})\) be a connected \(\ell\)-regular ordered graph and let \(U\subseteq V\). We define a CFI structure \(\mathfrak{A}_{\ell}(G,U)=(A_{\ell}(G),R^{\mathfrak{A}_{\ell}(G,U)}_{0},R^{ \mathfrak{A}_{\ell}(G,U)}_{1})\), where \(\mathrm{ar}(R_{0})=\mathrm{ar}(R_{1})=\ell\), as follows._
* \(A_{\ell}(G):=E\times[2]\)_,_
* \(R_{0}^{\mathfrak{A}_{\ell}(G,U)}:=\bigcup_{v\in V\setminus U}R(v)\cup\bigcup_{v \in U}\tilde{R}(v)\) _and_ \(R_{1}^{\mathfrak{A}_{\ell}(G,U)}:=\bigcup_{v\in U}R(v)\cup\bigcup_{v\in V \setminus U}\tilde{R}(v)\)_, where_
* \(R(v):=\{((e_{1},i_{1}),\ldots,(e_{\ell},i_{\ell}))\mid(e_{1},\ldots,e_{\ell})= \vec{e}(v),\,\sum_{j\in[\ell]}i_{j}=0\pmod{2}\}\)_, and_
* \(\tilde{R}(v):=\{((e_{1},i_{1}),\ldots,(e_{\ell},i_{\ell}))\mid(e_{1},\ldots,e_{ \ell})=\vec{e}(v),\,\sum_{j\in[\ell]}i_{j}=1\pmod{2}\}\)_._
For each \(v\in V\), we denote the set \(E(v)\times[2]\) by \(A(v)\). Furthermore, we define \(\mathfrak{A}_{\ell}(v):=(A(v),R(v),\tilde{R}(v))\) and \(\tilde{\mathfrak{A}}_{\ell}(v):=(A(v),\tilde{R}(v),R(v))\).
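To make the parity pattern in Definition 18 concrete, the following short R sketch (purely illustrative; the edge names are hypothetical) enumerates the local relations \(R(v)\) and \(\tilde{R}(v)\) at a single vertex of degree \(\ell=3\):

```r
## Illustration of the CFI gadget at one vertex v of degree ell = 3 with
## incident edges e1, e2, e3: every tuple picks one of the two copies
## (e, 1), (e, 2) of each incident edge; R(v) collects the tuples whose
## index sum is even, ~R(v) those whose index sum is odd.
ell    <- 3
edges  <- c("e1", "e2", "e3")
idx    <- expand.grid(rep(list(1:2), ell))       # all 2^ell index combinations
tuples <- apply(idx, 1, function(i)
  paste0("(", edges, ",", i, ")", collapse = " "))
even <- rowSums(idx) %% 2 == 0
R_v      <- tuples[even]                          # 4 tuples with even parity
Rtilde_v <- tuples[!even]                         # 4 tuples with odd parity
R_v
Rtilde_v
```

Each of the two relations contains \(2^{\ell-1}\) of the \(2^{\ell}\) possible tuples, and flipping a single index moves a tuple from one relation to the other; this is the mechanism exploited in the proof of Lemma 19 below.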
By an argument similar to the one used for the CFI structures constructed in [14] and [15], it can be proved that \(\mathfrak{A}_{\ell}(G,U)\) and \(\mathfrak{A}_{\ell}(G,U^{\prime})\) are isomorphic if and only if \(|U|\) and \(|U^{\prime}|\) are of the same parity. We choose \(\mathfrak{A}_{\ell}^{\rm ev}(G):=\mathfrak{A}_{\ell}(G,\emptyset)\) and \(\mathfrak{A}_{\ell}^{\rm od}(G):=\mathfrak{A}_{\ell}(G,\{v_{0}\})\) as representatives of these two isomorphism classes, where \(v_{0}\) is the least element of \(V\) with respect to the linear order \(\leq^{G}\). We show first that these structures are separated by \({\sf CSP}(\mathfrak{C}_{\ell})\).
**Lemma 19**: \(\mathfrak{A}_{\ell}^{\rm ev}(G)\in{\sf CSP}(\mathfrak{C}_{\ell})\)_, but \(\mathfrak{A}_{\ell}^{\rm od}(G)\not\in{\sf CSP}(\mathfrak{C}_{\ell})\)._
_Proof._ Let \(h\colon A_{\ell}(G)\to\{0,1\}\) be the function such that \(h((e,1))=1\) and \(h((e,2))=0\) for all \(e\in E\). Then for any tuple \(((e_{1},i_{1}),\ldots,(e_{\ell},i_{\ell}))\) the parity of \(\sum_{j\in[\ell]}h((e_{j},i_{j}))\) is the same as the parity of \(\sum_{j\in[\ell]}i_{j}\). Thus, \(h\) is a homomorphism \(\mathfrak{A}_{\ell}^{\rm ev}(G)\to\mathfrak{C}_{\ell}\).
To show that \(\mathfrak{A}_{\ell}^{\rm od}(G)\not\in{\sf CSP}(\mathfrak{C}_{\ell})\), assume towards contradiction that \(g\colon A_{\ell}(G)\to\{0,1\}\) is a homomorphism \(\mathfrak{A}_{\ell}^{\rm od}(G)\to\mathfrak{C}_{\ell}\). Then for every \(e\in E\) necessarily \(g((e,1))\neq g((e,2))\): flipping the second component of a single coordinate moves a tuple between the two relations of \(\mathfrak{A}_{\ell}^{\rm od}(G)\), and \(g\) must map these tuples into the disjoint relations \(R_{0}\) and \(R_{1}\) of \(\mathfrak{C}_{\ell}\). Furthermore, for every \(v\in V\setminus\{v_{0}\}\), the number \(n_{v}:=|\{e\in E(v)\mid g((e,2))=1\}|\) must be even, while the number \(n_{v_{0}}\) must be odd. Thus, \(\sum_{v\in V}n_{v}\) must be odd. However, this is impossible, since clearly \(\sum_{v\in V}n_{v}=2|\{e\in E\mid g((e,2))=1\}|\). \(\Box\)
### Good bijections
Our aim is to prove, for a suitable graph \(G\), that Duplicator has a winning strategy in \({\rm PG}_{k}^{\mathfrak{A}_{\ell}}(\mathfrak{A}_{\ell}^{\rm ev}(G),\mathfrak{ A}_{\ell}^{\rm od}(G),\emptyset,\emptyset)\). For the winning strategy, Duplicator needs a collection of well-behaved bijections. We define such a collection GB in Definition 23 below. One requirement is that the bijections preserve the first component of the elements \((e,i)\in A_{\ell}(G)\).
**Definition 20**: _A bijection \(f\colon A_{\ell}(G)\to A_{\ell}(G)\) is edge preserving if for every \(e\in E\) and \(i\in[2]\), \(f((e,i))\) is either \((e,1)\) or \((e,2)\)._
For any edge preserving \(f\) and any \(v\in V\) we denote by \(f_{v}\) the restriction of \(f\) to the set \(E(v)\times[2]\). The _switching number_\({\rm swn}(f_{v})\) of \(f_{v}\) is \(|\{e\in E(v)\mid f_{v}((e,1))=(e,2)\}|\). The lemma below follows directly from the definitions of \(\mathfrak{A}_{\ell}(v)\) and \(\tilde{\mathfrak{A}}_{\ell}(v)\).
**Lemma 21**: _Let \(f\colon A_{\ell}(G)\to A_{\ell}(G)\) be an edge preserving bijection, and let \(v\in V\)._
_(a) If \({\rm swn}(f_{v})\) is even, then \(f_{v}\) is an automorphism of the structures \(\mathfrak{A}_{\ell}(v)\) and \(\tilde{\mathfrak{A}}_{\ell}(v)\)._
_(b) If \({\rm swn}(f_{v})\) is odd, then \(f_{v}\) is an isomorphism between the structures \(\mathfrak{A}_{\ell}(v)\) and \(\tilde{\mathfrak{A}}_{\ell}(v)\)._
Given an edge preserving bijection \(f\colon A_{\ell}(G)\to A_{\ell}(G)\) we denote by \({\rm Odd}(f)\) the set of all \(v\in V\) such that \({\rm swn}(f_{v})\) is odd. Observe that \(|{\rm Odd}(f)|\) is necessarily even.
**Corollary 22**: _An edge preserving bijection \(f\colon A_{\ell}(G)\to A_{\ell}(G)\) is an automorphism of the structures \(\mathfrak{A}_{\ell}^{\rm ev}(G)\) and \(\mathfrak{A}_{\ell}^{\rm od}(G)\) if and only if \({\rm Odd}(f)=\emptyset\)._
_Proof._ If \({\rm Odd}(f)=\emptyset\), then by Lemma 21(a) \(f_{v}\) is an automorphism of \(\mathfrak{A}_{\ell}(v)\) and \(\tilde{\mathfrak{A}}_{\ell}(v)\) for all \(v\in V\). Clearly this means that \(f\) is an automorphism of \(\mathfrak{A}_{\ell}^{\rm ev}(G)\) and \(\mathfrak{A}_{\ell}^{\rm od}(G)\). On the other hand, if \(v\in{\rm Odd}(f)\), then by Lemma 21(b), for any tuple \(\vec{a}\in A(v)^{\ell}\), we have \(\vec{a}\in R(v)\iff f(\vec{a})\in\tilde{R}(v)\). Since \(R(v)\cap\tilde{R}(v)=\emptyset\), it follows that \(f\) is not an automorphism of \(\mathfrak{A}_{\ell}^{\rm ev}(G)\) and \(\mathfrak{A}_{\ell}^{\rm od}(G)\). \(\Box\)
**Definition 23**: _Let \(f\colon A_{\ell}(G)\to A_{\ell}(G)\) be an edge preserving bijection. Then \(f\) is a good bijection if either \(\mathrm{Odd}(f)=\emptyset\) or \(\mathrm{Odd}(f)=\{v_{0},v\}\) for some \(v\in V\setminus\{v_{0}\}\). We denote the set of all good bijections by \(\mathrm{GB}\)._
Note that if \(f\colon A_{\ell}(G)\to A_{\ell}(G)\) is a good bijection, then there is exactly one vertex \(v^{*}\in V\) such that \(f_{v^{*}}\) is not a partial isomorphism \(\mathfrak{A}_{\ell}^{\mathrm{ev}}(G)\to\mathfrak{A}_{\ell}^{\mathrm{od}}(G)\). In case \(\mathrm{Odd}(f)=\emptyset\), \(v^{*}=v_{0}\), while in case \(\mathrm{Odd}(f)=\{v_{0},v\}\) for some \(v\neq v_{0}\), \(v^{*}=v\). We denote this vertex \(v^{*}\) by \(\mathrm{tw}(f)\) (the _twist_ of \(f\)).
Assume now that Duplicator has played a good bijection \(f\) in the game \(\mathrm{PG}_{k}^{\mathcal{N}_{\ell}}\) on the structures \(\mathfrak{A}_{\ell}^{\mathrm{ev}}(G)\) and \(\mathfrak{A}_{\ell}^{\mathrm{od}}(G)\). Then Spoiler certainly does not win the game in the next position \((\alpha,\beta)\), provided that \((e,1)\) and \((e,2)\) are not in the range of \(\alpha\) (and \(\beta\)) for any \(e\in E(\mathrm{tw}(f))\). This leads us to the following notion.
**Definition 24**: _Let \(f\) be a good bijection, and let \(F\subseteq E\). Then \(f\) is good for \(F\) if \(E(\mathrm{tw}(f))\cap F=\emptyset\). We denote the set of all bijections that are good for \(F\) by \(\mathrm{GB}(F)\)._
**Lemma 25**: _If \(f\in\mathrm{GB}(F)\), then \(f\upharpoonright(F\times[2])\) is a partial isomorphism \(\mathfrak{A}_{\ell}^{\mathrm{ev}}(G)\to\mathfrak{A}_{\ell}^{\mathrm{od}}(G)\)._
_Proof._ Clearly \(f\upharpoonright(F\times[2])\subseteq\bigcup_{v\in V\setminus\{\mathrm{tw}(f)\}}f_{v}\). By Lemma 21, \(f_{v}\) is an automorphism of \(\mathfrak{A}_{\ell}(v)\) for any \(v\in V\setminus\{\mathrm{tw}(f),v_{0}\}\), and if \(v_{0}\neq\mathrm{tw}(f)\), \(f_{v_{0}}\) is an isomorphism \(\mathfrak{A}_{\ell}(v_{0})\to\tilde{\mathfrak{A}}_{\ell}(v_{0})\). The claim follows from this. \(\Box\)
Given a good bijection \(f\) with \(\mathrm{tw}(f)=u\) and an \(E\)-path \(P=(u_{0},\ldots,u_{m})\) from \(u=u_{0}\) to \(u^{\prime}=u_{m}\), we obtain a new edge preserving bijection \(f_{P}\) by switching \(f\) on the edges \(e_{i}:=\{u_{i},u_{i+1}\}\), \(i<m\), of \(P\): \(f_{P}((e_{i},j))=(e_{i},3-j)\) for \(i<m\), and \(f_{P}(c)=f(c)\) for other \(c\in A_{\ell}(G)\). Clearly \(f_{P}\) is also a good bijection, and \(\mathrm{tw}(f_{P})=u^{\prime}\).
### Cops and Robber game
In order to prove that Duplicator has a winning strategy in \(\mathrm{PG}_{k}^{\mathcal{N}_{\ell}}(\mathfrak{A}_{\ell}^{\mathrm{ev}}(G),\mathfrak{A}_{\ell}^{\mathrm{od}}(G),\emptyset,\emptyset)\) we need to assume that the graph \(G\) has a certain largeness property with respect to the number \(k\). We formulate this largeness property in terms of a game, \(\mathrm{CR}_{k}^{\ell}(G)\), that is a new variation of the _Cops&Robber games_ used for similar purposes in [14] and [15].
**Definition 26**: _The game \(\mathrm{CR}_{k}^{\ell}(G)\) is played between two players, Cop and Robber. The positions of the game are pairs \((F,u)\), where \(F\subseteq E\), \(|F|\leq k\), and \(u\in V\). The rules of the game are the following:_
* _Assume that the position is_ \((F,u)\)_._
* _If_ \(E(u)\cap F\neq\emptyset\)_, the game ends and Cop wins._
* _Otherwise Cop chooses a set_ \(F^{\prime}\subseteq E\) _such that_ \(|F^{\prime}|\leq k\)_. Then Robber answers by giving mutually disjoint_ \(E\setminus(F\cap F^{\prime})\)_-paths_ \(P_{i}=(u,u_{1}^{i},\ldots,u_{n_{i}}^{i})\)_,_ \(i\in[\ell]\)_, from_ \(u\) _to vertices_ \(u_{i}:=u_{n_{i}}^{i}\)_; here mutual disjointness means that_ \(P_{i}\) _and_ \(P_{i^{\prime}}\) _do not share edges for_ \(i\neq i^{\prime}\) _(i.e.,_ \(u_{1}^{i}\neq u_{1}^{i^{\prime}}\) _and_ \(\{u_{j}^{i},u_{j+1}^{i}\}\neq\{u_{j^{\prime}}^{i^{\prime}},u_{j^{\prime}+1}^{i^{\prime}}\}\) _for all_ \(j\) _and_ \(j^{\prime}\)_). Then Cop completes the round by choosing_ \(i\in[\ell]\)_. The next position is_ \((F^{\prime},u_{i})\)_._
The intuition of the game \(\mathrm{CR}_{k}^{\ell}(G)\) is that Cop has \(k\) pebbles that he plays on edges of \(G\) forming a set \(F\subseteq E\); these pebbles mark the edges \(e\) such that \((e,1)\) or \((e,2)\) is in the range of \(\alpha\) or \(\beta\) in a position \((\alpha,\beta)\) of the game \(\mathrm{PG}_{k}^{\mathcal{N}_{\ell}}\) on \(\mathfrak{A}_{\ell}^{\mathrm{ev}}(G)\) and \(\mathfrak{A}_{\ell}^{\mathrm{od}}(G)\). Robber has one pebble that she plays on the vertices of \(G\); this pebble marks the vertex \(\mathrm{tw}(f)\), where \(f\) is the good bijection played by Duplicator in the previous round of \(\mathrm{PG}_{k}^{\mathcal{N}_{\ell}}\).
Cop captures Robber and wins the game if after some round (at least) one of his pebbles is on an edge that is adjacent to the vertex containing Robber's pebble. This corresponds to a position \((\alpha,\beta)\) in the game \(\mathrm{PG}_{k}^{\mathcal{N}_{\ell}}\) such that \(\alpha\mapsto\beta\) is potentially not a partial isomorphism. Otherwise Lemma 25
guarantees that \(\alpha\mapsto\beta\) is a partial isomorphism. Cop can then move any number of his pebbles to new positions on \(G\). While the pebbles Cop decides to move are still on their way to their new positions, Robber is allowed to prepare \(\ell\) mutually disjoint escape routes along edges of \(G\) that do not contain any stationary pebbles of Cop. We show in the proof of Theorem 29 that these escape routes generate tuples \(\vec{a}_{1},\ldots,\vec{a}_{\ell}\) such that \(f(\vec{b})=\hat{q}(\vec{a}_{1},\ldots,\vec{a}_{\ell})\), where \(q=n^{\ell}_{A_{\ell}(G)}\) and \(\vec{b}\) is the tuple chosen by Spoiler after Duplicator played \(f\). This gives Duplicator a legal answer \(P=\{\vec{a}_{1},\ldots,\vec{a}_{\ell}\}\) to \(\vec{b}\). Then Spoiler completes the round by choosing one of the tuples in \(P\). Correspondingly, at the end of the round of \(\mathrm{CR}^{\ell}_{k}(G)\) Cop chooses which escape route Robber has to use by blocking all but one of them.
**Definition 27**: _Assume that \(u\in V\) and \(F\subseteq E\) is a set of edges such that \(|F|\leq k\). We say that \(u\) is \(k\)-safe for \(F\) if Robber has a winning strategy in the game \(\mathrm{CR}^{\ell}_{k}(G)\) starting from position \((F,u)\)._
We prove next the existence of graphs \(G\) such that Robber has a winning strategy in the game \(\mathrm{CR}^{\ell}_{k}(G)\).
**Theorem 28**: _For every \(\ell\geq 3\) and every \(k\geq 1\), there is an \(\ell\)-regular graph \(G=(V,E)\) such that every vertex \(v\in V\) is \(k\)-safe for \(\emptyset\)._
_Proof._ Clearly if Robber has a winning strategy in \(\mathrm{CR}^{\ell}_{k}(G)\), she also has a winning strategy in \(\mathrm{CR}^{\ell}_{k^{\prime}}(G)\) for \(k^{\prime}<k\). Thus, it suffices to prove the theorem for \(k\geq\ell\).
By a well-known result of Erdos and Sachs [10], there exist \(\ell\)-regular connected graphs of _girth_\(g\) for arbitrarily large \(g\). Choose a positive integer \(d\) with \(d>\frac{\log 2k}{\log(\ell-1)}+1\) and let \(G\) be an \(\ell\)-regular graph of girth \(g>6d\). We claim that any vertex \(v\) in \(G\) is \(k\)-safe for \(\emptyset\).
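As a purely numerical illustration of this parameter choice (a sketch with example values, not part of the proof), for \(\ell=3\) and \(k=10\) the bound requires \(d=6\) and hence girth greater than \(36\):

```r
## Smallest integer d with d > log(2k)/log(ell - 1) + 1, and the girth bound
ell <- 3; k <- 10
d <- floor(log(2 * k) / log(ell - 1) + 1) + 1   # strict inequality, so +1
c(d = d, girth_greater_than = 6 * d)            # d = 6, girth > 36
```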
To prove this, we show inductively that Robber can maintain the following invariant in any position \((F,u)\) reached during the game:
* for each edge \(e\in F\), neither end point of \(e\) is within distance \(d\) of \(u\) in \(G\).
Note that, from the assumption that \(k\geq\ell\) and \(d>\frac{\log 2k}{\log(\ell-1)}+1\), it follows that \(d\geq 2\). Thus, the invariant \((*)\) guarantees, in particular, that Cop does not win at any point.
Clearly the invariant \((*)\) is satisfied at the initial position, since \(F\) is empty. Suppose now that it is satisfied in some position \((F,u)\) and Cop chooses a set \(F^{\prime}\) in the next move. Let \(C\subseteq V\) denote the set of end points of all edges in \(F^{\prime}\). Since \(|F^{\prime}|\leq k\), we have \(|C|\leq 2k\).
Let \(N\subseteq V\) denote the collection of vertices which are at distance at most \(3d\) from \(u\). By the assumption on the girth of \(G\), the induced subgraph \(G[N]\) is a tree. We can consider it as a rooted tree, with root \(u\). Then, \(u\) has exactly \(\ell\) children. All vertices in \(N\) at distance less than \(3d\) from \(u\) have exactly \(\ell-1\) children (and one parent), and all vertices at distance exactly \(3d\) from \(u\) are leaves of the tree. This allows us to speak, for instance, of the subtree rooted at a vertex \(u^{\prime}\) meaning the subgraph of \(G\) induced by the vertices \(x\) in \(N\) such that the unique path from \(u\) to \(x\) in \(G[N]\) goes through \(u^{\prime}\).
Let \(u_{1},\ldots,u_{\ell}\) be the children of \(u\). For each \(i\), let \(U_{i}\) denote the set of descendants of \(u_{i}\) that are at distance exactly \(d\) from \(u\) (and so at distance \(d-1\) from \(u_{i}\)). Note that the collection \(U_{1},\ldots,U_{\ell}\) forms a partition of the set of vertices in \(N\) that are at distance exactly \(d\) from \(u\). Each \(x\in U_{i}\) is the root of a tree of height \(2d\). Moreover, since the tree below \(u_{i}\) is \((\ell-1)\)-regular, \(U_{i}\) contains exactly \((\ell-1)^{d-1}\) vertices. By the assumption that \(d>\frac{\log 2k}{\log(\ell-1)}+1\), it follows that \((\ell-1)^{d-1}>2k\geq|C|\) and therefore each \(U_{i}\) contains at least one vertex \(x_{i}\) such that the subtree rooted at \(x_{i}\) contains no vertex in \(C\). Let \(y_{i}\) be any descendant of \(x_{i}\) at distance \(d\) from \(x_{i}\) and let \(P_{i}\) denote the unique path in \(G[N]\) from \(u\) to \(y_{i}\). Robber's move is to play the paths \(P_{1},\ldots,P_{\ell}\). We now verify that this is a valid move, and that it maintains the required invariant \((*)\).
First, note that the paths \(P_{1},\ldots,P_{\ell}\) are paths in the tree \(G[N]\) all starting at \(u\) and the second vertex in path \(P_{i}\) is \(u_{i}\). It follows that the paths are pairwise edge disjoint. We next argue that no path \(P_{i}\) goes through an edge in \(F\cap F^{\prime}\). Indeed, by the inductive assumption, no endpoint of an edge
in \(F\) appears within distance \(d\) of \(u\) and therefore the path from \(u\) to \(x_{i}\) does not go through any such vertex. Moreover, by the choice of \(x_{i}\), no endpoint of an edge in \(F^{\prime}\) appears in the subtree rooted at \(x_{i}\) and therefore the path from \(x_{i}\) to \(y_{i}\) does not go through any such vertex. Together these ensure that the path \(P_{i}\) does not visit any vertex that is an endpoint of an edge in \(F\cap F^{\prime}\).
Finally, to see that the invariant \((*)\) is maintained, note that all vertices that are at distance at most \(d\) from \(y_{i}\) are in the subtree of \(G[N]\) rooted at \(x_{i}\). The choice of \(x_{i}\) means this contains no vertex in \(C\). This is exactly the condition that we wished to maintain. \(\Box\)
### Winning the game
We are now ready to prove that a winning strategy for Robber in \(\mathrm{CR}^{\,\ell}_{k}(G)\) generates a winning strategy for Duplicator in the game \(\mathrm{PG}^{\mathcal{N}_{\ell}}_{k}\) on the structures \(\mathfrak{A}^{\mathrm{ev}}_{\ell}(G)\) and \(\mathfrak{A}^{\mathrm{od}}_{\ell}(G)\).
**Theorem 29**: _Let \(G\) be a connected \(\ell\)-regular ordered graph. If \(v_{0}\) is \(k\)-safe for the empty set, then Duplicator has a winning strategy in the game \(\mathrm{PG}^{\mathcal{N}_{\ell}}_{k}(\mathfrak{A}^{\mathrm{ev}}_{\ell}(G), \mathfrak{A}^{\mathrm{od}}_{\ell}(G),\emptyset,\emptyset)\)._
_Proof._ We show that Duplicator can maintain the following invariant for all positions \((\alpha,\beta)\) obtained during the play of the game \(\mathrm{PG}^{\mathcal{N}_{\ell}}_{k}(\mathfrak{A}^{\mathrm{ev}}_{\ell}(G), \mathfrak{A}^{\mathrm{od}}_{\ell}(G),\emptyset,\emptyset)\):
**(\(\dagger\))**: There exists a bijection \(f\in\mathrm{GB}(F_{\alpha})\) such that \(p:=\alpha\mapsto\beta\subseteq f\) and \(\mathrm{tw}(f)\) is \(k\)-safe for \(F_{\alpha}\), where \(F_{\alpha}:=\{e\in E\mid\mathrm{rng}(\alpha)\cap\{e\}\times[2]\neq\emptyset\}\).
Note that if (\(\dagger\)) holds, then \(p\subseteq f\upharpoonright(F_{\alpha}\times[2])\) and, by Lemma 25, \(f\upharpoonright(F_{\alpha}\times[2])\in\mathrm{PI}(\mathfrak{A}^{\mathrm{ev}}_{ \ell}(G),\mathfrak{A}^{\mathrm{od}}_{\ell}(G))\), whence Spoiler does not win the game in position \((\alpha,\beta)\). Thus, maintaining the invariant (\(\dagger\)) during the play guarantees a win for Duplicator.
Note first that (\(\dagger\)) holds in the initial position \((\alpha,\beta)=(\emptyset,\emptyset)\) of the game: take \(f_{0}\in\mathrm{GB}\) to be any bijection with \(\mathrm{tw}(f_{0})=v_{0}\) (e.g., the identity); then \(\emptyset\mapsto\emptyset=\emptyset\subseteq f_{0}\) and \(\mathrm{tw}(f_{0})\) is \(k\)-safe for \(F_{\emptyset}=\emptyset\) by assumption.
Assume then that (\(\dagger\)) holds for a position \((\alpha,\beta)\), and assume that Spoiler plays a left \(\mathcal{N}_{\ell}\)-quantifier move by choosing \(r\leq k\) and \(\vec{y}\in X^{r}\). Duplicator answers this by giving the bijection \(f^{-1}\). Let \(\vec{b}=(b_{1},\ldots,b_{r})\in A_{\ell}(G)^{r}\) be the second part of Spoiler's move, and let \(F^{\prime}\) be the set \(\{e\in E\mid\mathrm{rng}(\beta[\vec{b}/\vec{y}])\cap\{e\}\times[2]\neq\emptyset\}\). Since \(\mathrm{tw}(f)\) is \(k\)-safe for \(F_{\alpha}\), there are mutually disjoint \(E\setminus(F_{\alpha}\cap F^{\prime})\)-paths \(P_{i}\), \(i\in[\ell]\), from \(\mathrm{tw}(f)\) to some vertices \(u_{i}\) that are \(k\)-safe for the set \(F^{\prime}\). Let \(f_{P_{i}}\), \(i\in[\ell]\), be the good bijections obtained from \(f\) as explained after Lemma 25. Now Duplicator answers the move \(\vec{b}\) of Spoiler by giving the set \(P=\{\vec{a}_{1},\ldots,\vec{a}_{\ell}\}\) of \(r\)-tuples, where \(\vec{a}_{i}:=f_{P_{i}}^{-1}(\vec{b})\) for each \(i\in[\ell]\).
To see that this is a legal move, observe that since the paths \(P_{i}\) are disjoint, for each \(j\in[r]\) there is at most one \(i\in[\ell]\) such that \(f_{P_{i}}^{-1}(b_{j})\neq f^{-1}(b_{j})\). Thus we have \(\hat{q}(\vec{a}_{1},\ldots,\vec{a}_{\ell})=f^{-1}(\vec{b})\), and hence \(f^{-1}(\vec{b})\in q(P)\subseteq\Gamma_{q}^{\omega}(P)\) for \(q=n_{A_{\ell}(G)}^{\ell}\), as required. Let Spoiler complete the round of the game by choosing \(i\in[\ell]\); thus, the next position is \((\alpha^{\prime},\beta^{\prime}):=(\alpha[\vec{a}_{i}/\vec{y}],\beta[\vec{b}/ \vec{y}])\). It suffices now to show that (\(\dagger\)) holds for the position \((\alpha^{\prime},\beta^{\prime})\) and the bijection \(f^{\prime}:=f_{P_{i}}\).
Note first that \(F_{\alpha^{\prime}}=F^{\prime}\), since clearly \(\mathrm{rng}(\alpha[\vec{a}_{i}/\vec{y}])\cap\{e\}\times[2]\neq\emptyset\) if, and only if, \(\mathrm{rng}(\beta[\vec{b}/\vec{y}])\cap\{e\}\times[2]\neq\emptyset\). Thus, \(\mathrm{tw}(f^{\prime})=u_{i}\) is \(k\)-safe for \(F_{\alpha^{\prime}}\). This implies that \(f^{\prime}\in\mathrm{GB}(F_{\alpha^{\prime}})\), since otherwise by Definition 26, Cop would win the game \(\mathrm{CR}^{\ell}_{k}(G)\) immediately in position \((F_{\alpha^{\prime}},\mathrm{tw}(f^{\prime}))\). It remains to show that \(p^{\prime}:=\alpha^{\prime}\mapsto\beta^{\prime}\) is contained in \(f^{\prime}\). For all components \(a_{i}^{j}\) of \(\vec{a}_{i}\) we have \(p^{\prime}(a_{i}^{j})=b_{j}=f^{\prime}(a_{i}^{j})\) by definition of \(\vec{a}_{i}\). On the other hand, for any element \(a\in\mathrm{dom}(p^{\prime})\setminus\{a_{i}^{1},\ldots,a_{i}^{r}\}\) we have \(p^{\prime}(a)=p(a)=f(a)\). Furthermore, since the path \(P_{i}\) does not contain any edges in \(F_{\alpha}\cap F_{\alpha^{\prime}}\), we have \(f^{\prime}\upharpoonright(F_{\alpha}\cap F_{\alpha^{\prime}})\times[2]=f\upharpoonright(F_{\alpha}\cap F_{\alpha^{\prime}})\times[2]\), and since clearly \(a\in(F_{\alpha}\cap F_{\alpha^{\prime}})\times[2]\), we see that \(f^{\prime}(a)=f(a)\). Thus, \(p^{\prime}(a)=f^{\prime}(a)\).
The case where Spoiler plays a right \(\mathcal{N}_{\ell}\)-quantifier move is similar. \(\Box\)
Note that the vocabulary of the structures \(\mathfrak{A}^{\mathrm{ev}}_{\ell}(G)\) and \(\mathfrak{A}^{\mathrm{od}}_{\ell}(G)\) consists of two \(\ell\)-ary relation symbols. The presence of at least \(\ell\)-ary relations is actually necessary: Duplicator cannot have
a winning strategy in \(\mathrm{PG}_{\ell-1}^{\mathcal{N}_{\ell}}\) on structures containing only relations of arity less than \(\ell\), since by Corollary 10(b), all properties of such structures are definable in \(L_{\infty\omega}^{\ell-1}(\mathbf{Q}_{\mathcal{N}_{\ell}})\).
From Lemma 19, Theorem 28 and Theorem 29, we immediately obtain the following result.
**Theorem 30**: _For any \(\ell\geq 3\), \(\mathsf{CSP}(\mathfrak{C}_{\ell})\) is not definable in \(L_{\infty\omega}^{\omega}(\mathbf{Q}_{\mathcal{N}_{\ell}})\)._
Note that \(\mathsf{CSP}(\mathfrak{C}_{\ell})\) corresponds to solving systems of linear equations over \(\mathbb{Z}/2\mathbb{Z}\) with all equations containing (at most) \(\ell\) variables. Thus, as a corollary we see that solvability of such systems of equations cannot be expressed in \(L_{\infty\omega}^{\omega}(\mathbf{Q}_{\mathcal{N}_{\ell}})\) for any \(\ell\). Furthermore, since systems of linear equations over \(\mathbb{Z}/2\mathbb{Z}\) can be solved in polynomial time, we see that the complexity class PTIME is not contained in \(L_{\infty\omega}^{\omega}(\mathbf{Q}_{\mathcal{N}_{\ell}})\) for any \(\ell\).
Finally, note that since the class \(\mathsf{CSP}(\mathfrak{C}_{\ell})\) is downwards monotone, by Lemma 9 the quantifier \(Q_{\mathsf{CSP}(\mathfrak{C}_{\ell})}\) is \(\mathcal{N}_{\ell+1}\)-closed. Thus, we get the following hierarchy result for the near-unanimity families \(\mathcal{N}_{\ell}\) with respect to the arity \(\ell\) of the partial functions.
**Theorem 31**: _For every \(\ell\geq 3\) there is a quantifier in \(\mathbf{Q}_{\mathcal{N}_{\ell+1}}\) which is not definable in \(L_{\infty\omega}^{\omega}(\mathbf{Q}_{\mathcal{N}_{\ell}})\)._
## 6 Conclusion
We have introduced new methods, in the form of pebble games, for proving inexpressibility in logics extended with generalized quantifiers. There is special interest in proving inexpressibility in logics with quantifiers of unbounded arity. We introduced a general method of defining such collections of quantifiers, inspired by the equational theories of polymorphisms arising in the study of constraint satisfaction problems. Perhaps surprisingly, while the collection of CSPs that have near-unanimity polymorphisms is rather limited (as they all have bounded width), the collection of quantifiers with the corresponding closure property is much richer, including even CSPs that are intractable. The pebble game gives a general method of proving inexpressibility that works for a wide variety of closure conditions. We were able to deploy it to prove that solvability of systems of equations over \(\mathbb{Z}/2\mathbb{Z}\) is not definable using only quantifiers closed under near-unanimity conditions.
It would be interesting to use the pebble games we have defined to show undefinability with other collections of quantifiers closed under partial polymorphisms. Showing some class is not definable with quantifiers closed under partial Maltsev polymorphisms would be especially instructive. It would require using the pebble games with a construction that looks radically different from the CFI-like constructions most often used. This is because CFI constructions encode problems of solvability of equations over finite fields (or more generally finite rings), and all of these problems are Maltsev-closed.
| We study Lindstrom quantifiers that satisfy certain closure properties, motivated by the study of constraint satisfaction problems (CSPs). When the algebra of polymorphisms of a finite structure B satisfies certain equations, this gives rise to natural closure conditions on the class of structures that map homomorphically to B. The collections of quantifiers satisfying closure conditions arising in this way are more general than those arising from CSPs. For such a condition P, we define a pebble game that characterizes the distinguishing power of the infinitary logic with all quantifiers that are P-closed. We use this pebble game to show that the problem of deciding solvability of systems of linear equations is not expressible in the infinitary logic with all quantifiers closed under near-unanimity conditions. |
2306.04456 | Tree models for assessing covariate-dependent method agreement | Method comparison studies explore the agreement of measurements made by two
or more methods. Commonly, agreement is evaluated by the well-established
Bland-Altman analysis. However, the underlying assumption is that differences
between measurements are identically distributed for all observational units
and in all application settings. We introduce the concept of conditional method
agreement and propose a respective modeling approach to alleviate this
constraint. Therefore, the Bland-Altman analysis is embedded in the framework
of recursive partitioning to explicitly define subgroups with heterogeneous
agreement in dependence of covariates in an exploratory analysis. Three
different modeling approaches, conditional inference trees with an appropriate
transformation of the modeled differences (CTreeTrafo), distributional
regression trees (DistTree), and model-based trees (MOB) are considered. The
performance of these models is evaluated in terms of type-I error probability
and power in several simulation studies. Further, the adjusted rand index (ARI)
is used to quantify the models' ability to uncover given subgroups. An
application example to real data of accelerometer device measurements is used
to demonstrate the applicability. Additionally, a two-sample Bland-Altman test
is proposed for exploratory or confirmatory hypothesis testing of differences
in agreement between subgroups. Results indicate that all models were able to
detect given subgroups with high accuracy as the sample size increased.
Relevant covariates that may affect agreement could be detected in the
application to accelerometer data. We conclude that conditional method
agreement trees (COAT) enable the exploratory analysis of method agreement in
dependence of covariates and the respective exploratory or confirmatory
hypothesis testing of group differences. It is made publicly available through
the R package coat. | Siranush Karapetyan, Achim Zeileis, André Henriksen, Alexander Hapfelmeier | 2023-06-07T14:29:45 | http://arxiv.org/abs/2306.04456v1 | # Tree models for assessing covariate-dependent method agreement
###### Abstract
Method comparison studies explore the agreement of measurements made by two or more methods. Commonly, agreement is evaluated by the well-established Bland-Altman analysis. However, the underlying assumption is that differences between measurements are identically distributed for all observational units and in all application settings. We introduce the concept of conditional method agreement and propose a respective modeling approach to alleviate this constraint. Therefore, the Bland-Altman analysis is embedded in the framework of recursive partitioning to explicitly define subgroups with heterogeneous agreement in dependence of covariates in an exploratory analysis. Three different modeling approaches, conditional inference trees with an appropriate transformation of the modeled differences (CTreeTrafo), distributional regression trees (DistTree), and model-based trees (MOB) are considered. The performance of these models is evaluated in terms of type-I error probability and power in several simulation studies. Further, the adjusted rand index (ARI) is used to quantify the models' ability to uncover given subgroups. An application example to real data of accelerometer device measurements is used to demonstrate the applicability. Additionally, a two-sample Bland-Altman test is proposed for exploratory or confirmatory hypothesis testing of differences in agreement between subgroups. Results indicate that all models were able to detect given subgroups with high accuracy as the sample size increased. Relevant covariates that may affect agreement could be detected in the application to accelerometer data. We conclude that conditional method agreement trees (COAT) enable the exploratory analysis of method agreement in dependence of covariates and the respective exploratory or confirmatory hypothesis testing of group differences. It is made publicly available through the R package coat.
Keywords: Bland-Altman analysis, hypothesis testing, recursive partitioning, subgroup analysis.
## 1 Introduction
Method comparison studies are relevant in all scientific fields whenever the agreement of continuously scaled measurements made by two or more methods is to be investigated. However, they have found particular application in medical research, for example in laboratory research [1, 2], anaesthesiology [3], ophthalmology [4] and pathology [5] among many others. Here, taking measurements can be time-consuming, expensive, invasive or stressful for patients. Therefore, methods are constantly being developed and improved to reduce these shortcomings [4]. However, the agreement between a new method and a standard method needs to be shown in order to replace the latter. A
well-established methodology for analysis was developed by Bland and Altman and is known as the Bland-Altman analysis or plot [6]. In its most basic form, it illustrates the differences against the mean values of paired measurements made by two methods. Here, two quantities of interest are the mean difference, referred to as 'bias', and the standard deviation of the differences, which is used to determine the width of the so-called 'Limits of Agreement' (LoA) [7]. The bias is a measure of the overall deviation of the methods but has limited interpretability, since large positive and negative deviations can still add up to a small overall bias. Further, since the bias is only a summary measure, the expected agreement of a single subject's measurements cannot be inferred from it. Therefore, Bland and Altman proposed to estimate the LoA, that is, a prediction interval in which about \(95\%\) of individual differences between the measurements of the two methods are expected to lie. The mean and standard deviation of differences can be calculated directly from the observed data but it has also been suggested to use regression modeling under the assumption of normally distributed residuals [8, 9].
Proper planning, conduct, interpretation and reporting of method comparison studies have been the subject of ongoing research, and recommendations have been provided in respective publications and through reviews of the relevant literature [10, 5, 4, 7, 2, 1, 3, 11, 12]. These works are also concerned with the data description, processing and analysis, the plotting of results, the (pre)specification of acceptable agreement, the precision of estimation, the repeatability of measurements and the investigation of homoscedastic variances and trends. Regarding the latter two, Bland and Altman discussed early on whether the agreement between the methods depends on the magnitude of the measured values, that is, whether there is a relationship between the differences and the means of paired values [6, 13]. In that case, they suggested either transforming (e.g. log-transforming) the differences to remove the dependency or modeling the differences with mean values as an explanatory variable in a linear regression model.
In the present work, we suppose that the underlying assumption of a Bland-Altman analysis, that is, that the agreement of methods is identically distributed for all observational units or subjects, may not be valid in every case. The basic idea is that the methods' measurements can be affected by internal and external factors, such as the subjects' characteristics and measurement settings, with direct implications for the agreement of methods. Previous studies have used heuristic approaches to address this issue, for example through the post-hoc fitting of additional regression models and subgroup analyses [14, 15]. An early example is the regression of differences on mean values as originally suggested by Bland and Altman and outlined above [6, 13].
Here, we introduce a unifying framework and analysis approach for conditional method agreement in case of single measurements per subject or observational unit. Recursive partitioning is used to simultaneously explore relations between covariates and agreement and to define corresponding subgroups with heterogeneous agreement in terms of bias and/or the width of LoA, taking advantage of the fact that a Bland-Altman analysis can be parameterized accordingly [8, 9, 16]. We consider three different modeling approaches, that is conditional inference trees with an appropriate transformation of the outcome (CTreeTrafo) [17], distributional regression trees (DistTree) [18], and model-based trees (MOB) [19]. The ability of these approaches to control the type-I error probability at a nominal level, the power to detect given subgroups, and the ability to accurately define these subgroups is investigated in simulation studies. We also demonstrate the relevance to medical research through applications to a real data example of accelerometer measurements made by different devices. In addition, we propose a two-sample Bland-Altman test suitable for exploratory or confirmatory hypothesis testing of differences in agreement between two (pre)defined subgroups.
## 2 Methods
The following subsections outline the concept of conditional method agreement, corresponding modeling through recursive partitioning and a two-sample Bland-Altman test for hypothesis testing of group differences in agreement. The models used for analysis are called conditional method agreement trees (COAT).
### Introductory example
The concept of conditional method agreement is briefly illustrated here using a real data example, which is explained in more detail in Section 4. The data consists of 24-hour accelerometer measurements and socio-demographic information from \(n=50\) participants of the original study [20]. Figure 1a shows the corresponding Bland-Altman plot of the agreement of activity energy expenditure (kilocalories) measured by the two investigated devices. Using COAT by MOB, it can be shown that this agreement is related to the age of the participants. There are two subgroups with statistically significantly different agreement (\(p=0.023\)), especially in terms of bias, which changes from \(-385\) in the whole sample to \(-536\) and \(-207\) in the subgroups defined by a split point of \(41\) years (cf. Figure 1b). Also, the LoA within the defined subgroups are less wide than for the whole sample. Comparing the subgroups, the LoA are wider for subjects older than \(41\) years. This result can be of interest to scientists, health professionals, users and
manufacturers of accelerometers who develop the devices or rely on their functionality and who may want to discuss the reasons for this difference in agreement and possible solutions or implications for proper use.
### Conditional method agreement
As shown in the previous example and discussed in Section 1, in a Bland-Altman analysis we are essentially interested in the first and second moments of the marginal density function \(f_{Y}(y)\). Here, \(Y=M_{1}-M_{2}\) is a random variable of independent and identically distributed (iid) differences between two methods' paired measurements \((M_{1},M_{2})\). The moments of \(f_{Y}(y)\) are the expectation \(\mathbb{E}(Y)\) and the variance \(\mathrm{Var}(\mathrm{Y})\) with corresponding estimates given by the mean \(\bar{y}=\frac{1}{n}\sum_{i=1}^{n}y_{i}\) and the empirical variance \(s^{2}=\frac{1}{n-1}\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}\) of the observed differences \(y_{i}\), \(i\in\{1,\ldots,n\}\), of \(n\) subjects or observational units. The mean \(\bar{y}\) describes the overall deviation between methods, which is often referred to as the 'bias' [7]. However, as discussed in Section 1, the bias is of limited use because it does not provide information about the individual agreement of measurements made for the same subject or observational unit. Interpretation of agreement in a Bland-Altman plot therefore relies mainly on 95% prediction intervals, that is, the LoA, which are calculated using \(\bar{y}\) and \(s\) with an appropriate distributional assumption about \(Y\). Most often, a normal distribution or t distribution is assumed.
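As a minimal sketch (with simulated data and illustrative variable names, not taken from the study data), these classical Bland-Altman quantities can be computed as follows:

```r
## Classical Bland-Altman analysis: bias and 95% limits of agreement (LoA)
set.seed(1)
n  <- 100
m1 <- rnorm(n, mean = 100, sd = 10)          # measurements of method 1
m2 <- m1 + rnorm(n, mean = 2, sd = 5)        # method 2, with bias and noise
d  <- m1 - m2                                # differences y = M1 - M2
m  <- (m1 + m2) / 2                          # means, x-axis of the BA plot
bias <- mean(d)
s    <- sd(d)
loa  <- bias + c(-1, 1) * qnorm(0.975) * s   # LoA under the normal assumption
plot(m, d, xlab = "Mean of both methods", ylab = "Difference (M1 - M2)")
abline(h = c(bias, loa), lty = c(1, 2, 2))
```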
Another assumption of a Bland-Altman analysis is that \(\mathbb{E}(Y)\) and \(\mathrm{Var}(\mathrm{Y})\) are independent of the magnitude of measurements, implying that the differences are iid. However, if the observed distribution of data suggests that such an association has to be assumed, Bland and Altman propose either to remove this relationship by transforming the differences, for example to establish homoscedasticity by using a log-transformation, or to use a regression model considering the differences \(Y\) as the outcome and the mean measurements \(M=\frac{1}{2}(M_{1}+M_{2})\) as an explanatory variable [21]. We generalize this approach to define conditional method agreement as follows.
Given a random variable \(Y=M_{1}-M_{2}\) of differences between two methods' measurements \(M_{1}\) and \(M_{2}\) and a multivariable vector of covariates \(X\), conditional method agreement can be formalized as
\[f_{Y}(y|x)\neq f_{Y}(y).\]
Here \(f_{Y}(y|x)\) is the conditional density function of \(Y\) given \(X=x\). The realizations \(y\) are the observed differences and \(x\) are the measured covariate values which can also include mean values \(m=\frac{1}{2}(m_{1}+m_{2})\) of paired measurements. In the present work, we use COAT to obtain estimates of the conditional expectation \(\mathbb{E}(Y|X)\) and the conditional variance \(\mathrm{Var}(\mathrm{Y}|\mathrm{X})\) to assess conditional method agreement, with and without using distributional assumptions about \(f_{Y}(y|x)\). Respective null-hypotheses
\[H_{0}:\mathbb{E}(Y|X)=\mathbb{E}(Y)\;\cap\;\mathrm{Var}(\mathrm{Y}|\mathrm{X} )=\mathrm{Var}(\mathrm{Y}) \tag{1}\]
or each of
\[H_{0}:\mathbb{E}(Y|X)=\mathbb{E}(Y)\;\;\text{and}\;\;H_{0}:\mathrm{Var}( \mathrm{Y}|\mathrm{X})=\mathrm{Var}(\mathrm{Y}) \tag{2}\]
are tested by COAT to determine the statistical significance of conditional estimates. The procedure can also be used to perform a two-sample 'Bland-Altman test' to compare agreement between subgroups.

Figure 1: Agreement (1a) and conditional agreement (1b) of activity energy expenditure (AEE) (kilocalories) measured by two different accelerometers.
### Recursive partitioning of method agreement
The general idea of recursive partitioning is to assess sequentially whether an investigated outcome variable (or model) is homogeneous across all available covariates and, if this is not the case, to capture the differences by splits into more homogeneous subsets of the data [22]. The procedure continues recursively until some kind of stopping criterion is reached. The resulting model is often referred to as a tree because of its structure. The subsets considered for splitting or emerging from splitting are termed parent nodes or daughter/child nodes, respectively. A so called stump is obtained if a single split is performed. The definition of the splits performed in the covariates provides decision rules that specify the subsets.
To define heterogeneous subsets in terms of \(\mathbb{E}(Y|X)\) and \(\mathrm{Var}(\mathrm{Y|X})\), referring to the mean (bias) and standard deviation of the differences \(y\), we consider the following tree-based algorithms: conditional inference tree with an appropriate transformation of the outcome (CTreeTrafo) [17], distributional tree (DistTree) [18], and model-based recursive partitioning (MOB) [19]. All of these modeling approaches are based on the same basic steps [23]:
1. A model is fit to the entire data by optimizing some objective function or a transformation function is defined.
2. A split variable is selected based on the association of some goodness-of-fit measure with each possible variable. The variable with the highest significant association is selected.
3. A split point is chosen so the goodness-of-fit is maximized in the resulting subsets.
4. Steps \(1.-3.\) are repeated until no more significant associations are found or the resulting sample is too small for further splits.
The basic algorithm of the three models considered is thus similar. However, they differ in the implementation of the individual steps, as explained in more detail in the following. Default features of all of the aforementioned models are summarized in Table 1.
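The following schematic R sketch illustrates only the control flow of steps 1.-4.; it deliberately uses a simplistic per-node association test (a correlation test on the mean of the differences, ignoring their variance) and a median split point, and is therefore not one of the COAT algorithms described below.

```r
## Schematic recursive partitioning of differences d on numeric covariates;
## a leaf stores the node-wise Bland-Altman quantities (n, bias, sd).
grow <- function(data, alpha = 0.05, minsize = 20) {
  leaf <- list(n = nrow(data), bias = mean(data$d), sd = sd(data$d))
  covars <- setdiff(names(data), "d")
  ## step 2: association of each covariate with the differences
  pvals <- p.adjust(sapply(covars, function(v)
    cor.test(data[[v]], data$d)$p.value), method = "bonferroni")
  if (min(pvals) > alpha) return(leaf)      # step 4: stop, nothing significant
  best <- names(which.min(pvals))
  cut  <- median(data[[best]])              # step 3: simplistic split point
  left  <- data[data[[best]] <= cut, ]
  right <- data[data[[best]] >  cut, ]
  if (min(nrow(left), nrow(right)) < minsize) return(leaf)
  list(split = best, cutpoint = cut,
       left  = grow(left,  alpha, minsize),
       right = grow(right, alpha, minsize))
}
## hypothetical usage with numeric, non-constant covariates:
## grow(data.frame(d = rnorm(200), age = runif(200, 20, 80), bmi = rnorm(200, 26, 4)))
```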
#### 2.3.1 Conditional inference tree
The algorithm uses permutation tests [24, 17], asymptotic by default, to explore whether there is a statistically significant dependence of the outcome on a covariate. To this end, \(J\) partial hypotheses of independence \(H_{0}^{j}:f_{Y}(y|x_{j})=f_{Y}(y)\), \(j=1,\ldots,J\), are defined, one for each of the \(J\) covariates. The respective linear test statistic is
\[t_{j}=vec\left(\sum_{i=1}^{n}\omega_{i}g_{j}(x_{ji})h(y_{i},(y_{1},...,y_{n}) )^{\top}\right)\in\mathds{R}^{pq},\]
where \(\omega_{i}\) is a case weight of zero or one, indicating the correspondence of an observation to the node or subset in which the test is performed. \(g_{j}(\cdot)\) and \(h(\cdot)\) represent non-random transformation functions. The choice of \(g_{j}(\cdot)\) depends on the type of the \(j\)-th covariate. The identity function, \(g_{j}(x_{ji})=x_{ji}\), is a natural choice for a continuous variable, while the indicator function \(g_{j}(x_{ji})=(I(x_{ji}=1),...,I(x_{ji}=K))\) is more appropriate for a categorical variable with \(K\) levels. With the \(vec(\cdot)\) operator, the test statistic becomes a \(pq\) column vector, where \(p=K\) for categorical covariates and \(p=1\) for continuous covariates with identity transformation. \(q\) depends on the choice of \(h(\cdot)\) and takes a value of \(2\) in our case, as outlined below.
| Model | Fit | Test | Statistic | Transformation |
| --- | --- | --- | --- | --- |
| CTreeTrafo | non-parametric | permutation | quadratic | \((y_{i},(y_{i}-\overline{y}_{\omega})^{2})\) |
| DistTree | parametric | permutation | quadratic | \(s(\boldsymbol{\hat{\theta}},y_{i})\) |
| MOB | parametric | fluctuation | quadratic | \(s(\boldsymbol{\hat{\theta}},y_{i})\) |

Table 1: Characteristics of the considered COAT models.

In the present setting, that is, to model method agreement through the estimation of \(\mathbb{E}(Y|X)\) and \(\mathrm{Var}(\mathrm{Y|X})\), we define \(h(\cdot)=(y_{i},(y_{i}-\overline{y}_{\omega})^{2})\), which corresponds to the first step in the basic algorithm. The respective test statistic \(t_{j}\) is then defined as

\[t_{j}=vec\left(\sum_{i=1}^{n}\omega_{i}g_{j}(x_{ji})(y_{i},(y_{i}-\overline{y}_{\omega})^{2})^{\top}\right)\in\mathds{R}^{2p},\]
where \(\overline{y}_{\omega}=\sum_{i=1}^{n}\omega_{i}y_{i}/\sum_{i=1}^{n}\omega_{i}\) is the mean outcome in the node or subset in which the test is performed. The conditional expectation \(\mu_{j}\) and the covariance \(\Sigma_{j}\) of \(t_{j}\) under the null hypothesis \(H_{0}^{j}\) can be used to obtain the standardized test statistic
\[c_{max}(t_{j},\mu_{j},\Sigma_{j})=\max_{z=1,\ldots,2p}\left|\frac{(t_{j}-\mu_{j})_{z}}{\sqrt{(\Sigma_{j})_{zz}}}\right|,\]
which follows an asymptotic normal distribution. As an alternative, a quadratic form
\[c_{quad}(t_{j},\mu_{j},\Sigma_{j})=(t_{j}-\mu_{j})\Sigma_{j}^{+}(t_{j}-\mu_{j })^{\top},\]
can also be used, where the asymptotic conditional distribution is \(\chi^{2}\) with degrees of freedom given by the rank of \(\Sigma_{j}\). \(\Sigma_{j}^{+}\) is the Moore-Penrose inverse of \(\Sigma_{j}\). Standardization of the linear test statistic enables the computation of a \(p\)-value, where \(H_{0}^{j}\) can be rejected if this value falls below a specified significance level. The \(j\)-th covariate with the minimum and statistically significant \(p\)-value is selected for splitting, corresponding to the second step in the basic algorithm. Note that a multiple testing problem arises, as hypotheses for several covariates are tested. Therefore, the CTree algorithm uses Bonferroni-adjusted \(p\)-values by default [17].
After selecting the split variable \(j^{*}\), the subsequent and third step of the basic algorithm is to find the optimal split point in a continuous variable or dichotomization of the \(K\) categories of a categorical variable for binary splitting, which is again determined through a linear test statistic
\[t_{j^{*}}^{A}=vec\left(\sum_{i=1}^{n}\omega_{i}I(x_{j^{*}i}\in A)(y_{i},(y_{i}-\overline{y}_{\omega})^{2})^{\top}\right)\in\mathds{R}^{2}.\]
Here, \(t_{j^{*}}^{A}\) implicitly measures the discrepancy between the subsets \(\{y_{i}|\omega_{i}=1\text{ and }x_{j^{*}i}\in A;i=1,...,n\}\) and \(\{y_{i}|\omega_{i}=1\text{ and }x_{j^{*}i}\notin A;i=1,...,n\}\) in terms of a metric defined by \(h(\cdot)\). The best split point is found by maximizing
\[A^{*}=\arg\max_{A}c(t_{j^{*}}^{A},\mu_{j^{*}}^{A},\Sigma_{j^{*}}^{A}),\]
over all possible subsets \(A\) using the conditional expectation \(\mu_{j^{*}}^{A}\) and covariance \(\Sigma_{j^{*}}^{A}\) of \(t_{j^{*}}^{A}\). This procedure is recursively repeated until no further statistically significant associations are found or subsets become too small for further splitting (which is the fourth step in the basic algorithm).
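A rough, hedged approximation of this procedure can be obtained by passing the differences together with their (globally) centered squares as a bivariate response to partykit::ctree(); since the transformation \(h(\cdot)\) above centers within the current node, this sketch only mimics COAT by CTreeTrafo and is not the published implementation (which is provided in the R package coat):

```r
## Approximate COAT by CTreeTrafo via a bivariate response in ctree();
## simulated data with an age effect on the bias and a sex effect on the
## width of the LoA (all variable names are illustrative).
library("partykit")
set.seed(2)
n   <- 300
age <- runif(n, 20, 80)
sex <- factor(sample(c("f", "m"), n, replace = TRUE))
d   <- rnorm(n, mean = ifelse(age > 50, 1, 0), sd = ifelse(sex == "m", 2, 1))
dat <- data.frame(d = d, d2 = (d - mean(d))^2, age = age, sex = sex)
fit <- ctree(d + d2 ~ age + sex, data = dat)   # quadratic test statistic by default
plot(fit)
```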
#### 2.3.2 Distributional tree
DistTree is similar to CTreeTrafo, but a parametric model is fit to the data and the transformation function is replaced with the resulting score function. In particular, DistTree models all parameters of a given distribution [18]. In the present setting, it is reasonable to assume a normal distribution with location and scale parameters \(\mu\) and \(\sigma\) for the differences \(Y\) [13]. This allows the specification of the corresponding log-likelihood
\[l(\boldsymbol{\theta};Y)=\log\left\{\frac{1}{\sigma}\phi\left(\frac{Y-\mu}{\sigma}\right)\right\};\quad\boldsymbol{\theta}=(\mu,\sigma),\]
and its score function \(s(\boldsymbol{\theta},Y)=\partial l(\boldsymbol{\theta};Y)/\partial\boldsymbol{\theta}\) as a measure of goodness-of-fit. \(\phi(\cdot)\) is the density function of a standard normal distribution. A maximum likelihood (ML) estimate of \(\boldsymbol{\theta}\) is \(\hat{\boldsymbol{\theta}}=\arg\max_{\boldsymbol{\theta}}\sum_{i=1}^{n}l(\boldsymbol{\theta};y_{i})\). This corresponds to the first step in the basic algorithm.
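The distributional building blocks can be sketched directly in R (with simulated differences and illustrative values): the ML estimates of \(\mu\) and \(\sigma\) together with the per-observation score contributions \(s(\boldsymbol{\theta},y_{i})=\big((y_{i}-\mu)/\sigma^{2},\,((y_{i}-\mu)^{2}-\sigma^{2})/\sigma^{3}\big)\), which sum to zero at the ML estimate:

```r
## ML fit of the normal model to the differences and the score matrix
set.seed(3)
y <- rnorm(200, mean = 0.3, sd = 1.5)           # simulated differences
mu_hat    <- mean(y)                            # ML estimate of mu
sigma_hat <- sqrt(mean((y - mu_hat)^2))         # ML estimate of sigma
loglik <- sum(dnorm(y, mu_hat, sigma_hat, log = TRUE))
scores <- cbind(
  mu    = (y - mu_hat) / sigma_hat^2,
  sigma = ((y - mu_hat)^2 - sigma_hat^2) / sigma_hat^3
)
colSums(scores)                                 # (numerically) zero at the ML fit
```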
When it is assumed that the differences y are not iid, DistTree can be used to model the conditional expectation \(\mathbb{E}(Y|X)\) and variance \(\mathrm{Var}(\mathrm{Y|X})\). To do so, a possible association of \(\boldsymbol{\theta}\) and a covariate \(X_{j}\) is tested in terms of the null-hypothesis \(H_{0}^{j}:s(\boldsymbol{\theta},Y)\perp X_{j}\), based on the test statistic
\[t_{j}=vec\left(\sum_{i=1}^{n}g_{j}(x_{ji})s(\hat{\mathbf{\theta}},y_{i})\right).\]
Here, \(\hat{\mathbf{\theta}}\) is substituted into the score function to obtain \(s(\hat{\mathbf{\theta}},y_{i})\) as a measure of goodness-of-fit for each of the observations \(y_{i}\). The transformation function \(g_{j}\), as well as the standardized test statistics \(c_{quad}(t_{j},\mu_{j},\Sigma_{j})\) and \(c_{max}(t_{j},\mu_{j},\Sigma_{j})\) are defined as outlined in Section 2.3.1. The split variable \(X_{j^{*}}\) is determined by the lowest and statistically significant p-value, which is by default corrected for multiple testing (equals step 2 of the basic algorithm). In the third step the split point is chosen so that it leads to the largest discrepancy in the sum of scores between the resulting subsets. This procedure is repeated recursively in each subset until no further significant associations are found or the resulting subsets become too small for further splitting.
It is important at this point to draw attention to the similarity of the statistics \(t_{j}\) of CTreeTrafo and DistTree, with CTreeTrafo using a transformation function \(h(\cdot)\) instead of the score function \(s(\cdot)\) in the calculation. We show the equality of the resulting quadratic test statistics \(c_{quad}(\cdot)\) of CTreeTrafo (with the transformation function \(h(\cdot)\) defined as given in the previous Section 2.3.1) and DistTree analytically for the case of a continuous predictor in appendix A.1.
#### 2.3.3 Model-based recursive partitioning
MOB is similar to DistTree, but uses a different underlying model and hypothesis test. MOB uses fluctuation tests for parameter instability in regression model fits to build a tree model [25]. In the first step of MOB, a parametric model is fit to the data by maximum likelihood estimation. In the present case, we consider an intercept-only linear regression model \(y_{i}=\beta_{0}+\epsilon_{i}\), \(\epsilon_{i}\sim\mathcal{N}(0,\sigma^{2})\), to obtain estimates of the expectation \(\mathbb{E}(Y)=\beta_{0}\) and variance \(\mathrm{Var}(\mathrm{Y})=\sigma^{2}\). The second step is to assess parameter instability of the estimated model parameters \(\hat{\theta}=(\widehat{\beta}_{0},\widehat{\sigma})\) across the values \(x_{j}\) of a potential split variable \(X_{j}\). Instability is concluded when the scores \(s(\hat{\boldsymbol{\theta}},y_{i})\) do not fluctuate randomly along the ordered values \(x_{j}\) [see 19, for details]. The split variable \(X_{j^{*}}\) is selected as the one providing the minimal, statistically significant \(p\)-value, which is by default corrected for multiple testing. The split point in \(x_{j^{*}}\) is determined such that it maximizes the sum of the log-likelihoods of models that are refit to the resulting subsets, corresponding to the third step in the basic algorithm. As with CTreeTrafo and DistTree, the procedure is repeated recursively in each subset until no further significant associations are found or the resulting subsets become too small for further splitting.
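A partial stand-in for this approach (a hedged sketch, not the full COAT by MOB) is an intercept-only model-based tree fitted with partykit::lmtree(); note that its default parameter instability test covers only the regression coefficient, i.e. the bias, and not the variance, which the approach described above additionally includes. The full COAT models of this section are provided by the R package coat.

```r
## Intercept-only model-based tree: detects covariate-dependent bias only
library("partykit")
set.seed(4)
n   <- 400
age <- runif(n, 20, 80)
bmi <- rnorm(n, 26, 4)
d   <- rnorm(n, mean = ifelse(age > 50, 0.8, 0), sd = 1)   # bias depends on age
dat <- data.frame(d = d, age = age, bmi = bmi)
fit <- lmtree(d ~ 1 | age + bmi, data = dat)
print(fit)
```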
## 3 Simulation studies
### Design
We conduct simulation studies to investigate the performance of COAT. For each of the defined scenarios, we run \(10000\) simulations, and consider sample sizes \(n\in\{50,100,150,\ldots,1000\}\), which are common in medical research. A CTree with the default transformation function \(h(y_{i},(y_{1},\ldots,y_{n}))=y_{i}\), as implemented through the function ctree() of the R package partykit [26], is used as a benchmark. Due to the equivalence of the statistics \(t_{j}\) of CTreeTrafo and DistTree, they are also referred to jointly as CTreeTrafo/DistTree in the following.
The assessment of performance is based on the type-I error and the power to reject \(H_{0}\) as defined in (1) and (2), and the Adjusted Rand Index (ARI). The latter is a measure of concordance of two classifications [27]: it quantifies the proportion of pairs of observations that are classified concordantly, that is, assigned to the same or to different classes in both classifications, among the total number of pairs, corrected for the agreement expected by chance [28]. For independent or random classifications, the ARI is therefore expected to be \(0\). Higher values indicate higher concordance, with 1 indicating perfect agreement. In the present simulation studies, the ARI is used to assess the concordance between the given subgroups and the subgroups defined by COAT. Three different simulation scenarios are considered as follows.
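For reference, a small self-contained sketch of the ARI computed from the contingency table of two partitions (the label vectors below are hypothetical):

```r
## Adjusted Rand index of two partitions from their contingency table
ari <- function(true, pred) {
  tab <- table(true, pred)
  sum_ij   <- sum(choose(tab, 2))
  a        <- sum(choose(rowSums(tab), 2))
  b        <- sum(choose(colSums(tab), 2))
  expected <- a * b / choose(sum(tab), 2)
  (sum_ij - expected) / (0.5 * (a + b) - expected)
}
ari(rep(1:2, each = 50), rep(c(2, 1), each = 50))       # identical up to labels: 1
ari(rep(1:2, each = 50), sample(rep(1:2, each = 50)))   # random relabeling: close to 0
```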
In the Null Case, the method agreement does not depend on any covariates. The simulated data consists of six independent, standard-normally distributed variables including the outcome \(Y\), which is the simulated differences between the methods, and five uninformative covariates \(X\). The Null Case allows the exploration of the type I error as we look for statistically significant p-values in the root nodes of the COAT models that were fit to the simulated data. The nominal significance level is set to \(\alpha=0.05\).
The Stump Case covers three different scenarios. In each of them there are five standard-normally distributed covariates \(X\), where method agreement depends on the informative covariate \(X_{1}\) such that \(Y\sim\mathcal{N}(\mu_{k},\sigma_{k})\), \(k\in\{1,2,3\}\), where
\[(\mu_{k},\sigma_{k})=\begin{cases}(\mu_{1}=0.3\cdot I(X_{1}>Q_{0.25}),\sigma_{1}=1)& \text{if }k=1,\\ (\mu_{2}=0,\sigma_{2}=1+I(X_{1}>Q_{0.25}))&\text{if }k=2,\\ (\mu_{3}=0.4\cdot I(X_{1}>Q_{0.25}),\sigma_{3}=1+I(X_{1}>Q_{0.25}))&\text{if }k=3. \end{cases}\]
Here, \(Q_{0.25}\) is the \(25\)th percentile of the standard normal distribution and has been chosen as a split point in \(X_{1}\) to create subgroups that approximately comprise \(25\%\) and \(75\%\) of the observations. The subgroups consequently differ only in \(\mu_{k}=\mathbb{E}(Y|X)\), that is, in the bias of method agreement, in scenario \(k=1\); they differ only in \(\sigma_{k}^{2}=\operatorname{Var}(\operatorname{Y}|\operatorname{X})\), that is, in the width of the LoA, in scenario \(k=2\); and they differ in both quantities in scenario \(k=3\). See also Figure 2a for a respective illustration. The performance of COAT is assessed in terms of its power to reject the null-hypothesis (1) for the informative covariate \(X_{1}\), and to uncover the correct subgroups as measured by the ARI. In this respect, the values of \(\mu_{k}\) and \(\sigma_{k}\) have been chosen in such a way that the power of a respective two-sample t-test would range between \(0.372\) and \(0.995\) for the given sample sizes [29].
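A single simulated data set for the Stump Case, scenario \(k=3\), can be sketched as follows (sample size and seed are illustrative):

```r
## Stump Case, scenario k = 3: bias and LoA width both depend on X1
set.seed(5)
n   <- 250
x   <- as.data.frame(matrix(rnorm(n * 5), ncol = 5,
                            dimnames = list(NULL, paste0("x", 1:5))))
grp <- x$x1 > qnorm(0.25)                 # informative split at Q_0.25
y   <- rnorm(n, mean = 0.4 * grp, sd = 1 + grp)
dat <- data.frame(y = y, x)
## any of the COAT variants can now be fitted to dat
```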
Finally, in the Tree Case, we again consider an outcome \(Y\sim\mathcal{N}(\mu_{k},\sigma_{k})\) with \(k\in\{1,2\}\) and two informative, \(X_{1}\) and \(X_{2}\), and three uninformative, \(X_{3}\), \(X_{4}\) and \(X_{5}\), standard-normally distributed covariates, resulting in three or four subgroups (see Figure 2b), according to
\[(\mu_{k},\sigma_{k})=\begin{cases}(\mu_{1}=0.3\cdot I(X_{2}\geq Q_{0.75})+0.5 \cdot I(X_{2}<Q_{0.75})\cdot\\ I(X_{1}\geq Q_{0.4}),\sigma_{1}=1+I(X_{2}\geq Q_{0.75}))&\text{if }k=1,\\ (\mu_{2}=0.5\cdot I(X_{1}\geq Q_{0.4}),\sigma_{2}=1+I(X_{2}\geq Q_{0.6}))&\text {if }k=2.\end{cases}\]
The values of \(\mu_{k}\) and \(\sigma_{k}\) in scenario \(k=1\) have been chosen such that the scenario offers a first split with respect to \(\sigma_{1}^{2}=\operatorname{Var}(Y|X)\), which differs between the subgroups defined by the split point \(Q_{0.75}\) in \(X_{2}\), while \(\mu_{1}\) takes the same expected value \(0.4\cdot 0+0.6\cdot 0.5=0.3\) on both sides of this split point. Subsequently, a second split can be performed with respect to \(\mu_{1}=\mathbb{E}(Y|X)\), as it differs between the subgroups defined by the split point \(Q_{0.4}\) in \(X_{1}\) where \(X_{2}<Q_{0.75}\). In the second scenario, the split point \(Q_{0.6}\) in \(X_{2}\) defines a split with respect to \(\sigma_{2}^{2}=\operatorname{Var}(Y|X)\), and the split point \(Q_{0.4}\) in \(X_{1}\) defines a split with respect to \(\mu_{2}=\mathbb{E}(Y|X)\), resulting in four subgroups (see Figure 2(b)).
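The Tree Case data-generating process for scenario \(k=2\), together with the ARI between the true subgroups and the terminal nodes of a default CTree fit, can be sketched as follows; mclust::adjustedRandIndex is used here for convenience, and the default CTree serves only as the benchmark, not as the full COAT transformation.

```r
library(partykit)
library(mclust)                                    # provides adjustedRandIndex()
set.seed(1)
n <- 1000
X <- as.data.frame(matrix(rnorm(n * 5), ncol = 5))
names(X) <- paste0("x", 1:5)
mu    <- 0.5 * (X$x1 >= qnorm(0.4))                # mean split in x1
sigma <- 1 + (X$x2 >= qnorm(0.6))                  # variance split in x2
dat   <- data.frame(y = rnorm(n, mu, sigma), X)
truth <- interaction(X$x1 >= qnorm(0.4), X$x2 >= qnorm(0.6))   # four true subgroups
ct <- ctree(y ~ ., data = dat)                     # default CTree benchmark
adjustedRandIndex(truth, predict(ct, newdata = dat, type = "node"))
```

Because the default transformation is blind to the variance split in \(x_{2}\), the recovered partition typically misses part of the true structure, mirroring the results reported below.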
### Results
We first investigate the estimated type-I error probabilities of COAT in dependence of sample size in the Null Case. CTree and COAT by CTreeTrafo/DistTree show similar performance, with relative rejection frequencies of the null-hypothesis ranging from \(3.8\%\) to \(5.5\%\), which are close to the nominal significance level of \(0.05\) and appear to be independent of sample size (Figure 3). Please note that CTree only tests the first null-hypothesis in (2), while COAT by CTreeTrafo/DistTree tests the null-hypothesis (1). In contrast, the COAT implementation by MOB does not exhaust the nominal significance level of \(0.05\) for smaller sample sizes, as it rejects the null-hypothesis (1) in only \(1.3\%\) and \(3.3\%\) of the simulated cases with \(n\leq 100\). With larger sample sizes of \(n\geq 200\), it shows relative frequencies for the type-I error between \(5.1\%\) and \(5.8\%\), which are slightly but consistently above the nominal significance level of \(0.05\).
The performance of COAT in the Stump Case, in terms of the power to reject the null-hypothesis (1) for the informative covariate \(X_{1}\), is estimated by the relative frequencies with which the association of \(X_{1}\) and the outcome is significant at the \(5\%\) level in the root node of the tree models (Figure 4). When only the expectation \(\mu_{1}\) but not the variance \(\sigma_{1}^{2}\) varies between the defined subgroups (i.e. scenario \(k=1\)), CTree and MOB perform best. However, when only the variance \(\sigma_{1}^{2}\) varies (i.e. scenario \(k=2\)), the performance of CTree drops, because its default transformation function \(h(\cdot)\) is not designed to detect such variation. Here, the MOB tree again performs best, closely followed by CTreeTrafo and DistTree.
However, power estimates do not indicate whether the true subgroups are correctly identified. Therefore, the ARI has also been investigated for the Stump Case and the Tree Case. The average ARI is plotted against increasing sample size in Figure 4 and in Figure 5, respectively. As expected, the ARI increases with sample size in both cases. However, CTree can only keep up with the COAT implementations when there is variation in the expectation \(\mu_{k}\) alone and not in the variance \(\sigma_{k}^{2}\). COAT is able to cope even with the more complex setting in which there are more than two true subgroups. Overall, the results for estimated power and ARI are largely comparable and lead to identical conclusions regarding the performance of the modeling approaches. An example of a tree fitted in the Tree Case is given in Figure 6.
Figure 2: Partitions of \(X\) used to define the subgroups in the simulation studies.
Figure 3: Relative frequency of statistically significant p-values observed in the root nodes of the COAT models fit to data of increasing sample size in the Null Case with 10000 replications. These estimates of the type-I error probability are presented with pointwise \(95\%\) confidence intervals (dashed lines).
Figure 6: COAT by CTreeTrafo for conditional agreement of simulated data in the Tree Case scenario \(k=1\).
Figure 4: Power estimates (4a, 4c, 4e) and Adjusted Rand Index (ARI) (4b, 4d, 4f) for CTree, CTreeTrafo, DistTree and MOB in the three Stump Case scenarios \(k\in\{1,2,3\}\), for increasing sample size. The maximum width of the pointwise \(95\%\) confidence intervals was only \(1.97\%\), which is why they are not presented in the plots.
## 4 Application example
To demonstrate the applicability and relevance of COAT in research, it is applied to a real data example of \(50\) study participants who wore different accelerometers, namely one ActiGraph and two Actiheart devices, simultaneously for \(24\) hours[20]. The ActiGraph was placed on the right hip, one Actiheart in the upper position of the chest, and the second Actiheart in the lower position. Both types of accelerometers are considered valid for estimating activity energy expenditure (AEE). The difference is that the Actiheart reports AEE directly, while the ActiGraph uses both uniaxial and triaxial activity counts to calculate it. In the present application example, the agreement of daily measurements of AEE (kilocalories) is compared between different pairs of two accelerometers each, conditional on the participants' age, sex, height, and weight. As described in Section 2.2, we also include the mean AEE measurements along with the other covariates as a potential explanatory variable. Two cases with missing values were removed from the data. Characteristics of the participants are presented in Table 2.
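The data preparation for such an analysis can be sketched as follows, with placeholder values since the original measurements are not reproduced here; the plain ctree() call merely stands in for the COAT fits shown in Figure 7.

```r
## Paired differences and mean AEE as an additional covariate (synthetic values).
library(partykit)
set.seed(7)
n <- 48
dat <- data.frame(age    = sample(18:70, n, replace = TRUE),
                  sex    = factor(sample(c("F", "M"), n, replace = TRUE)),
                  height = rnorm(n, 174, 8),
                  weight = rnorm(n, 75, 12))
aee_actigraph <- rnorm(n, 900, 250)                    # placeholder AEE, kcal
aee_actiheart <- aee_actigraph + rnorm(n, 50, 150)     # placeholder AEE, kcal
dat$d        <- aee_actigraph - aee_actiheart          # paired differences
dat$mean_aee <- (aee_actigraph + aee_actiheart) / 2    # magnitude of measurements
ct <- ctree(d ~ age + sex + height + weight + mean_aee, data = dat)
ct
```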
Figure 7(a) shows that for one pair of compared accelerometers, COAT by MOB is able to identify subgroups of participants that are heterogeneous regarding the bias and width of LoA depending on age (\(p=0.034\)). Better agreement, in terms of the bias decreasing by about \(324\) kilocalories, is obtained for participants older than \(41\) years. For another pair of accelerometers, COAT by CTreeTrafo shows that agreement may be conditional on the magnitude of the measurements (Figure 7(b)). With an average AEE \(>1040\) kilocalories, the bias in agreement increases by about \(220\) kilocalories and the width of the LoA increases by about \(444\) kilocalories (\(p=0.006\)).
## 5 A Bland-Altman test
It has been proposed in Section 2.2 to apply COAT to perform a two-sample 'Bland-Altman test' of the null-hypotheses (1) and (2) for the comparison of agreement between (pre)defined subgroups. For example, in the application example of the previous Section 4, a researcher may be interested in a potential difference of agreement between the sexes. Figure 8 shows the result of COAT by CTree when a stump tree is generated for sex as the only covariate. In this implementation of COAT, the \(\chi^{2}\) test statistics \(c_{quad}\), degrees of freedom and respective p-values (cf. Section 2.3.1) are presented for testing the null-hypothesis (1) and each of the null-hypotheses (2) concerning differences in bias and width of LoA between the considered subgroups. Corresponding estimates of \(\mathbb{E}(Y|X)\) and \(\operatorname{Var}(Y|X)\) are provided for each subgroup, too. In the present case, no statistically significant difference was found between the sexes in terms of the bias (\(p=0.619\)), the width of the LoA (\(p=0.366\)), or both of these quantities (\(p=0.649\)). Please note that these
\begin{table}
\begin{tabular}{l l} \hline Variables & n(\%); Median (IQR) \\ \hline Female & \(24\)\((50\%)\) \\ Age (years) & \(40\)\((35,57)\) \\ Height (cm) & \(174\)\((166,182)\) \\ Weight (kg) & \(75\)\((63,86)\) \\ \hline \end{tabular}
\end{table}
Table 2: Participant characteristics of the application study (\(n=48\)).
Figure 5: Adjusted Rand Index (ARI) of CTree, CTreeTrafo, DistTree and MOB in the two Tree Case scenarios \(k\in\{1,2\}\), for increasing sample size. The maximum width of the pointwise \(95\%\) confidence intervals was only \(0.96\%\), which is why they are not presented in the plots.
Figure 7: COAT for conditional agreement of activity energy expenditure (AEE) measurements of two accelerometers. Note that different pairs of accelerometers are compared in (7a, ActiGraph based on triaxial activity counts and Actiheart in upper position) and (7b, ActiGraph based on triaxial activity counts and Actiheart in lower position). See Section 4 for details.
three p-values are not adjusted for the multiple testing problem, but they are readily suited to a sequential test procedure (starting with the test of both quantities, followed by the tests of the individual quantities). Other corrections, such as the Bonferroni correction, are of course also possible. In the same spirit, a Bland-Altman test could also be applied to a single predictor variable of any scale, for example when no two subgroups are predefined. However, this case is already covered by COAT, as described above.
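For two predefined subgroups, the quantities entering such a comparison can be sketched as below; Welch's t-test and an F-test serve as simple stand-ins for the bias and LoA-width comparisons and are not the permutation-based statistics that COAT itself computes.

```r
## Bias and limits of agreement per subgroup, plus simple two-sample tests.
ba_compare <- function(d, group) {
  per_group <- t(sapply(split(d, group), function(x)
    c(bias      = mean(x),
      loa_lower = mean(x) - 1.96 * sd(x),
      loa_upper = mean(x) + 1.96 * sd(x))))
  list(per_group     = per_group,
       bias_test     = t.test(d ~ group),     # difference in bias
       variance_test = var.test(d ~ group))   # difference in LoA width
}

set.seed(3)
d     <- c(rnorm(60, 0.2, 1), rnorm(60, 0.2, 2))
group <- factor(rep(c("F", "M"), each = 60))
ba_compare(d, group)
```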
## 6 Discussion
The contribution of the present work to the field of method comparison studies is fourfold. First, the concept of conditional method agreement is introduced and formalized. Second, respective statistical modeling by recursive partitioning is proposed introducing conditional method agreement trees (COAT). Third, a respective Bland-Altman test is suggested to test for differences in agreement, with respect to the bias and width of LoA, between (pre)defined subgroups. Fourth, COAT is made publicly available through the R package coat.
COAT provides a solution to simultaneously address the research questions of method agreement and potential dependence on covariates in a unifying framework. It therefore exploits the fact that conditional method agreement can be parameterized through the expectation \(\mathbb{E}(Y|X)\) and variance \(\mathrm{Var}(Y|X)\) of paired differences between two methods' measurements. Correctly specified tree-based models are used for estimation of these conditional parameters and enable the definition of subgroups with different agreement.
Results of the simulation study indicate that the implementations of COAT by CTree (i.e. CTreeTrafo) and DistTree are able to control the type-I error probability at the nominal significance level, independent of sample size. By contrast, the implementation by MOB showed a markedly decreased error rate for small sample sizes and a slightly increased error rate for larger sample sizes. Therefore, it cannot be recommended for COAT in its present form, and further research could be directed towards robust variance estimation and improvements in distributional approximations
Figure 8: Bland-Altman (BA) test of difference in method agreement of activity energy expenditure (AEE) measurements between female (F) and male (M) participants in the application study. ActiGraph based on uniaxial activity counts and Actiheart in the upper position are compared.
for possible correction. All implementations of COAT performed well in detecting existing subgroups with increasing sample size. The comparison to the default specification of the CTree algorithm shows that CTree without the proposed transformation only captures differences in the bias, that is in the conditional expectation \(\mathbb{E}(Y|X)\), but cannot uncover differences in the width of the LoA, that is in the variance \(\operatorname{Var}(Y|X)\).
Observed differences between the implementations of COAT arise from the testing strategy. Both CTreeTrafo and DistTree compute quadratic test statistics that are equivalent, as shown analytically in Appendix A.1. In this respect, DistTree can be considered a special case of CTree with the appropriate transformation function \(h(\cdot)\) as defined in Section 2.3.1. By contrast, MOB is based on fluctuation tests for parameter instability in regression model fits.
The application study exemplifies the potential of COAT for medical research. There, subgroups with heterogeneous method agreement in activity energy expenditure (AEE) measurements could be identified in terms of bias and width of LoA, depending on covariates and on the magnitude of the AEE measurements. From the perspective of the user, who could be a manufacturer of accelerometers, a researcher, an investigator or a treating physician, one can then recommend which accelerometer to use or how to improve measurements in a particular setting for a particular person.
It should be noted that the results of COAT are exploratory, unless it is used to conduct a two-sample Bland-Altman test of differing agreement between (pre)defined subgroups. In the latter case it can be used for confirmatory hypothesis testing. In this context, it should also be mentioned that CTree, DistTree and MOB by default apply a Bonferroni correction to the multiple testing problem that occurs when test-based splitting is performed over multiple covariates. At present, COAT is limited to the case of single measurements per observational unit or subject. A modification for repeated measurements is currently being developed.
## 7 Conclusion
COAT enables the analysis of method agreement in dependence of covariates and mean measurements by conditional modeling and exploratory or confirmatory hypothesis testing. It is made publicly available through the R package coat.
## Data availability
Data underlying the application study may be obtained from the authors of the original study upon reasonable request [20]. The code of the simulation studies is provided as supplementary material. COAT is made publicly available through the associated R package coat on the Comprehensive R Archive Network (CRAN).
## Conflicts of interest
None to declare.
## Funding
This study was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer (grant number) \(447467169\).
## Author contributions
S.K. and A.H. drafted the manuscript, performed the statistical analyses and interpreted the results. Andre H. extracted and prepared the data used in the application study. All authors revised the manuscript for its content and approved the submission of the final manuscript.
## Acknowledgements
We thank Alexander Horsch for fruitful discussions and recommendations.
## Appendix: R Code
R code of the performed simulation and application studies is provided as supplementary material. The associated R package coat is available from GitHub.
Method comparison studies investigate the agreement of values measured by two or more methods. Typically, agreement is assessed by the established Bland-Altman analysis. However, this presupposes that the differences between measurements are identically distributed across all observational units and all application settings. We introduce the concept of conditional method agreement and propose a modeling approach to relax this restriction. To this end, the Bland-Altman analysis is embedded in a recursive partitioning framework and used to explicitly define subgroups with heterogeneous agreement in dependence of covariates. Three modeling approaches are considered: conditional inference trees (CTreeTrafo), distributional trees (DistTree), and model-based trees (MOB). The performance of these models is assessed in several simulations in terms of the type-I error probability and further model characteristics. Furthermore, the Adjusted Rand ...
2301.11942 | The Origin of Stars in the Inner 500 Parsecs in TNG50 Galaxies | We investigate the origin of stars in the innermost $500\,\mathrm{pc}$ of galaxies spanning stellar masses of $5\times10^{8-12}\,\mathrm{M}_{\odot}$ at $\mathrm{z=0}$ using the cosmological magnetohydrodynamical TNG50 simulation. Three different origins of stars comprise galactic centers: 1) in-situ (born in the center), 2) migrated (born elsewhere in the galaxy and ultimately moved to the center), 3) ex-situ (accreted from other galaxies). In-situ and migrated stars dominate the central stellar mass budget on average with 73% and 23% respectively. The ex-situ fraction rises above 1% for galaxies $\gtrsim10^{11}\,\mathrm{M}_{\odot}$. Yet, only 9% of all galaxies exhibit no ex-situ stars in their centers and the scatter of ex-situ mass is significant ($4-6\,\mathrm{dex}$). Migrated stars predominantly originate closely from the center ($1-2\,\mathrm{kpc}$), but if they travelled together in clumps distances reach $\sim10\,\mathrm{kpc}$. Central and satellite galaxies possess similar amounts and origins of central stars. Star forming galaxies ($\gtrsim10^{10}\,\mathrm{M}_{\odot}$) have on average more ex-situ mass in their centers than quenched ones. We predict readily observable stellar population and dynamical properties: 1) migrated stars are distinctly young ($\sim2\,\mathrm{Gyr}$) and rotationally supported, especially for Milky Way mass galaxies, 2) in-situ stars are most metal-rich and older than migrated stars, 3) ex-situ stars are on random motion dominated orbits and typically the oldest, most metal-poor and $\alpha$-enhanced population. We demonstrate that the interaction history with other galaxies leads to diverse pathways of building up galaxy centers in a $\Lambda$CDM universe. Our work highlights the necessity for cosmological context in formation scenarios of central galactic components and the potential to use galaxy centers as tracers of overall galaxy assembly. | Alina Boecker, Nadine Neumayer, Annalisa Pillepich, Neige Frankel, Rahul Ramesh, Ryan Leaman, Lars Hernquist | 2023-01-27T19:00:01 | http://arxiv.org/abs/2301.11942v1

# The Origin of Stars in the Inner 500 Parsecs in TNG50 Galaxies
###### Abstract
We investigate the origin of stars in the innermost 500 pc of galaxies spanning stellar masses of \(5\times 10^{8-12}\) M\({}_{\odot}\) at z = 0 using the cosmological magnetohydrodynamical TNG50 simulation. Three different origins of stars comprise galactic centers: 1) in-situ (born in the center), 2) migrated (born elsewhere in the galaxy and ultimately moved to the center), 3) ex-situ (accreted from other galaxies). In-situ and migrated stars dominate the central stellar mass budget on average with 73% and 23% respectively. The ex-situ fraction rises above 1% for galaxies \(\gtrsim 10^{11}\) M\({}_{\odot}\). Yet, only 9% of all galaxies exhibit no ex-situ stars in their centers and the scatter of ex-situ mass is significant (\(4-6\) dex). Migrated stars predominantly originate closely from the center (\(1-2\) kpc), but if they travelled together in clumps distances reach \(\sim 10\) kpc. Central and satellite galaxies possess similar amounts and origins of central stars. Star forming galaxies (\(\gtrsim 10^{10}\) M\({}_{\odot}\)) have on average more ex-situ mass in their centers than quenched ones. We predict readily observable stellar population and dynamical properties: 1) migrated stars are distinctly young (\(\sim 2\) Gyr) and rotationally supported, especially for Milky Way mass galaxies, 2) in-situ stars are most metal-rich and older than migrated stars, 3) ex-situ stars are on random motion dominated orbits and typically the oldest, most metal-poor and \(\alpha\)-enhanced population. We demonstrate that the interaction history with other galaxies leads to diverse pathways of building up galaxy centers in a \(\Lambda\)CDM universe. Our work highlights the necessity for cosmological context in formation scenarios of central galactic components and the potential to use galaxy centers as tracers of overall galaxy assembly.
keywords: methods: numerical - galaxies: formation - galaxies: evolution - galaxies: stellar content - galaxies: structure - galaxies: nuclei - galaxies: bulges
## 1 Introduction
The center of a galaxy depicts its brightest and densest region. Thus observations of galaxy centers provide us with the highest data quality, which should enable us to make the most precise predictions about their formation. On the other hand, being also the deepest point of the potential well, the center witnessed the galaxy's overall stellar assembly from the earliest cosmic times onward, as understood from the inside-out formation scenario of galaxies within a \(\Lambda\)CDM (Lambda-Cold-Dark-Matter) Universe. Therefore, many transformative processes of galaxy evolution influence a galaxy's center until the present day, which need to be taken into account to uniquely interpret even the highest quality observations.
As a consequence, a variety of central stellar structures are found in galaxies. Decreasing in size from the order of one kpc to sub-parsec scales, these range from bars and (pseudo)bulges (see e.g. Laurikainen et al., 2016; Kormendy and Kennicutt, 2004, for a summary), which can include other structures such as nuclear rings and disks, to nuclear star clusters (NSCs; see e.g. Neumayer et al., 2020, for a summary) and supermassive black holes (SMBHs; see e.g. Kormendy and Ho, 2013, for a summary). Some galaxies may exhibit more than one of these components or none at all. Many of these components possess scaling relations of their structural parameters, such as the Sérsic (1968) index and effective radius of bulges (e.g. Gadotti, 2009; Fisher and Drory, 2010) and the luminosity/mass-size relation of NSCs (e.g. Böker et al., 2004; Côté et al., 2006; Georgiev and Böker, 2014), as well as scaling relations with each other, such as the bulge-SMBH-mass (e.g. Häring and Rix, 2004; Sani et al., 2011; Lasker et al., 2016) and NSC-SMBH-mass relations (e.g. Ferrarese et al., 2006; Georgiev et al., 2016), which also scale with the stellar mass of their underlying host galaxy (e.g. Scott and Graham, 2013; Reines and Volonteri, 2015; Sánchez-Janssen et al., 2019). Some of these scaling relations can differ for early-type and late-type galaxies, or depend on the bulge type or the presence of a bar (e.g. Gadotti and Kauffmann, 2009; Georgiev et al., 2016; Davis et al., 2019; Sahu et al., 2019).
As diverse as the structural properties of central components are, so are the formation scenarios trying to explain them. Broadly speaking,
all of these formation scenarios can be divided into internal and external processes. For example, bulges are thought to form from merger events (e.g. Hopkins et al., 2009, 2010), from rapid early-on star formation (e.g. Guedes et al., 2013; Okamoto, 2013), from secular evolution (e.g. Kormendy and Kennicutt, 2004; Athanassoula, 2005) or from the migration of clumps formed in the disk at high redshift (e.g. Elmegreen et al., 2009; Dekel et al., 2009); bars form through disk instabilities either in isolation (e.g. Bottema, 2003; Athanassoula et al., 2013) or in a cosmological context (e.g. Romano-Diaz et al., 2008; Kraljic et al., 2012; Peschken and Loksa, 2019); nuclear star clusters are thought to form through either star formation (e.g. Maciejewski, 2004; Aharon and Perets, 2015) or through the migration and successive merging of globular clusters in the center (e.g. Hartmann et al., 2011; Agarwal and Milosavljevic, 2011); SMBHs can grow by accreting gas and by merging with other SMBHs (e.g. Croton et al., 2006; Malbon et al., 2007; Fanidakis et al., 2011; Lapiner et al., 2021). In many cases, the formation of any one component will also influence the others. For example, once a bar is formed it can re-arrange the orbits of stars causing radial migration, or it can efficiently funnel gas to the center, which can trigger star formation in the center and also feed the SMBH. In turn, the AGN (active galactic nucleus) feedback caused by the SMBH will then influence the gas supply and hence truncate the formation of stars. Thus, it is important to also understand the interplay between the presence and formation of several central components.
Observationally, we can only indirectly deduce constraints on any of these formation scenarios from the stellar population and dynamical properties of a galaxy's central structure(s). For external galaxies, such necessary measurements are only possible with integral field units (IFUs) that provide spatially resolved stellar population and kinematical maps (e.g. Gadotti et al., 2020; Bittner et al., 2020). While major progress has been made in producing these maps with increasing quality, it is still difficult to disentangle stars from centrally overlapping galaxy components due to the line-of-sight integration - let alone identify stars of different origins within \(a\) given central component. This is possibly further complicated by the fact that stars with properties characteristic of one formation scenario might be subdominant in luminosity or mass compared to the bulk stellar population.
Even in the Milky Way, it has only become evident fairly recently that all major central components contain metal-poor subpopulations of stars that also exhibit different kinematics. For the Galactic bulge (see e.g. Barbuy et al., 2018, for a summary) there is a smooth transition from rotation to dispersion dominated kinematics for stars decreasing from (super-)solar metallicity all the way to the lowest metallicities (\(\rm[Fe/H]<-2.0\,dex\)(Ness et al., 2013; Zoccali et al., 2017; Arentsen et al., 2020). To a lesser extent this decrease is also seen for the nuclear stellar disk (Schultheiss et al., 2021) with additional evidence of recent star formation activity (\(<1\) Gyr) on top of the overall old bulk population (\(>8\) Gyr) (Nogueras-Lara et al., 2020, 2021). The nuclear star cluster, which hosts the most metal-rich stars in the Milky Way, also has a subpopulation of sub-solar metallicity stars, which show an asymmetric spatial distribution and a higher degree of rotation (Feldmeier-Krause et al., 2020; Do et al., 2020).
Generally, signs of young, metal-rich and kinematically cold stars in these central structures such as bulges and NSCs, are associated with being formed in-situ from gas infall, while old, metal-poor and dispersion dominated systems are thought to originate from merger processes. However, stars formed in-situ at the beginning of a galaxy's lifetime are also metal-poor and might as well become dispersion dominated over time through various processes, such as resonances created by the bar. Therefore, even though observed properties of stars in the centers of galaxies at as a fossil record of their origin, we need simulations to disentangle which (combinations of) formation scenarios are able to predict those observations.
Cosmological, hydrodynamical galaxy simulations (see e.g. Somerville and Davé, 2015; Vogelsberger et al., 2020, for a summary) are ideal to study the complex formation pathways of galaxy centers as they encompass the most complete conglomeration of galaxy formation processes in a \(\Lambda\)CDM framework, thus capturing internal and external formation processes alike. The most recent simulations are able to produce a realistic, diverse population of galaxies (see e.g. Vogelsberger et al., 2014; Nelson et al., 2019, and references therein for Illustris/TNG specifically). Typically, large simulation boxes are used to study global galaxy properties across an array of different galaxies (e.g. Illustris: Genel et al., 2014; Vogelsberger et al., 2014, 2014; EAGLE: Schaye et al., 2015; Crain et al., 2015; Horizon-AGN: Dubois et al., 2014, 2016; Magneticum: Hirschmann et al., 2014; Teklu et al., 2015; Bocquet et al., 2016; IllustrisTNG: Weinberger et al., 2017; Pillepich et al., 2018; SIMBA: Davé et al., 2019), while zoom-in (re)simulations focus on internal galaxy structures and dynamics (e.g. ERIS: Guedes et al., 2011; NIHAO: Wang et al., 2015; Latte: Wetzel et al., 2016; Auriga: Grand et al., 2017; FIRE-2: Hopkins et al., 2018; NIHAO-UHD: Buck et al., 2020). To understand the mass build-up of galaxy centers we need the advantages of both: a big enough box to probe many different assembly histories and thus galaxy demographics, and a zoom-in like resolution to focus on the center of galaxies and capture internal dynamical processes.
We therefore focus our analysis on the origin of stars in the central few hundred parsecs of galaxies in TNG50 (Pillepich et al., 2019; Nelson et al., 2019) from the IllustrisTNG simulations. The \(51.7^{3}\)\(\rm{\,cMpc^{3}}\) volume captures two \(10^{14}\)\(\rm{M_{\odot}}\) halos and hundreds of Milky Way like galaxies, whereas the spatial resolution provides hundreds to tens of thousands stellar particles inside the central \(500\,\rm{pc}\) for a four dex range in galaxy stellar mass. Importantly, TNG50 starts to capture the diversity of central components, such as low and high Sersic index bulges in Milky Way like galaxies (Gargiulo et al., 2021), and performs well in a statistical comparison of simulated and observed bar properties (Rossa-Guevara et al., 2021; Frankel et al., 2022); both which were previously not possible with zoom-in simulations. Hence, TNG50 offers the unique opportunity to study the contribution of stars with different (internal or external) origins to the formation of the galaxy center across diverse galaxy formation pathways and demographics, while predicting the observable imprint that the different formation scenarios impose on the stars in a galaxy's center.
The goal of this study is to appeal to different scientific communities that focus on various central stellar structures of the Milky Way and external galaxies to provide an understanding where the most central stars of galaxies originate across a wide range of galaxy masses inside the TNG modelling framework. Specifically, we also study, for the first time, stars that have migrated towards the center to address formation scenarios of central structures that include the necessity for these processes such as NSC formation. Even though NSCs are not explicitly resolved in TNG50, we hope to offer new incentives for simulations (Antonini et al., 2012; Perets and Mastrobuono-Battisti, 2014; Guillard et al., 2016) and (semi-)analytical models (Antonini, 2013; Antonini et al., 2015; Leaman and van de Ven, 2021) that are tailored towards NSC formation channels. Lastly, we aim to demonstrate that there are possibilities to use the bright centers of galaxies as a tracer of the galaxy's overall assembly history with readily available observables from current surveys such as SDSS (e.g. Gallazzi et al., 2021).
This paper is organized as follows. In Section 2 we briefly describe the TNG50 simulation and the definition of properties of galaxies
and stars (i.e. stellar particles) that we will analyze at \(\mathrm{z}=0\). We also provide a detailed description and verification of selecting stars belonging to a galaxy's center and our galaxy sample selection. In Section 3 we present the three different possible origins for stars residing in a galaxy's center and discuss their birth locations. In Section 4 we show the results of the different contributions of central stars of different origins across different galaxy population demographics and their observable stellar population and dynamical properties at \(\mathrm{z}=0\). In Section 5, we discuss our findings and implications from TNG50 on the central mass assembly of galaxies in a cosmological context. We also provide outlooks in the context of the formation of central galaxy components as well as the assembly of the overall host galaxy tailored towards measurements of extragalactic observations. Finally, we conclude our study in Section 6.
## 2 Tools and Methods
We briefly introduce the TNG50 simulation below as well as the properties of TNG50 galaxies and their stars (Section 2.3). We then describe in Section 2.4 how we define stellar particles that belong to a galaxy's center.
### The TNG50 simulation
In this work we primarily study galaxies in TNG50 (Pillepich et al., 2019; Nelson et al., 2019), which is the highest resolution installment of the IllustrisTNG (Illustris _The Next Generation_) (Pillepich et al., 2018; Springel et al., 2018; Nelson et al., 2018; Naiman et al., 2018; Marinacci et al., 2018) suite of cosmological, magnetohydrodynamical simulations1. It provides unprecedented zoom-in like resolution within a representative cosmological volume with a box of 51.7 Mpc on each side.
Footnote 1: IllustrisTNG also encompasses two larger volume runs, namely TNG100 and TNG300 with subsequently coarser resolution.
The simulation was performed with the Arepo code (Springel, 2010; Pakmor et al., 2011; Pakmor and Springel, 2013; Pakmor et al., 2016), which employs a finite-volume method on a moving-mesh to solve the equations of magnetohydrodynamics coupled with a tree-particle-mesh method for self-gravity. TNG50(-1) has a mass resolution of \(4.6\times 10^{5}\,\mathrm{M}_{\odot}\) for dark matter and \(8.4\times 10^{4}\,\mathrm{M}_{\odot}\) for baryonic particles. The softening length is 288 cpc for collisionless particles for \(\mathrm{z}\leq 1\) and 576 cpc for \(\mathrm{z}>1\), whereas the softening length of the gas particles is adaptive depending on the local cell size of the moving mesh with a floor value of 74 cpc. TNG50 is accompanied by three additional simulation runs (-2,-3,-4) that decrease the spatial resolution each time by half. The initial conditions are set according to cosmological parameters measured by Planck Collaboration et al. (2016).
Additionally, the TNG simulations implement a list of physical subgrid models, which describe galaxy formation and evolution, such as stellar formation and feedback, chemical enrichment, galactic winds, supermassive black hole growth and feedback. Details can be found in Weinberger et al. (2017); Pillepich et al. (2018).
Importantly, the TNG framework successfully reproduces key observational results such as the galaxy stellar mass function up until \(\mathrm{z}<4\)(Pillepich et al., 2018), bi-modality in galaxy color distribution (Nelson et al., 2018), the fraction of quiescent galaxies (Donnari et al., 2019, 2021), scaling relations, such as the galaxy mass-size relation (Genel et al., 2018), the gas-phase mass-metallicity relation (Torrey et al., 2019) and certain element abundances (Naiman et al., 2018), as well as the clustering of galaxies (Springel et al., 2018) and magnetic fields of massive halos (Marinacci et al., 2018). Specifically, the resolution of TNG50 allows for the study of internal dynamics and structures of galaxies (Pillepich et al., 2019) as well as the influence of stellar and black-hole driven outflows on galaxy evolution (Nelson et al., 2019).
Results from the TNG simulation are output in 100 snapshots ranging from \(\mathrm{z}=20\) until today with an approximate time step of 150 Myr since \(\mathrm{z}=4\). For each snapshot dark matter halos are identified by the friends-of-friends (FoF) algorithm (Davis et al., 1985) with a linking length of 0.2, with baryonic particles being attached to the same FoF group based on their nearest dark matter particle. Substructures within these halos, i.e. subhalos, are found through the Subfind algorithm (Springel et al., 2001), which is run on both dark matter and baryonic particles. To track the mass assembly of subhalos/galaxies through cosmic time, merger trees are constructed based on the SubLink algorithm (Rodriguez-Gomez et al., 2015). The merger trees were constructed twice, once based on dark matter and once based on baryonic matter alone.
The entire simulations' particle information for the 100 snapshots, the halo and subhalo catalogues, merger trees as well as many more additional supplementary data catalogues are made publicly available on the TNG website2(see also Nelson et al., 2019, for the public data release).
Footnote 2: [https://www.tng-project.org](https://www.tng-project.org)
### General note on calculations
Unless otherwise stated we employ the following definitions in our subsequent calculations and plots. To center the coordinate system on a galaxy of interest we choose the position of the particle (of any type) with the minimum gravitational potential energy as the galaxy's center, as given by SubhaloPos in the subhalo catalogue. For the systemic velocity of a galaxy we use the median velocity of the 5% most bound stellar particles. For face-on or edge-on projections, galaxies are oriented such that the z-axis is aligned with the total angular momentum of stellar particles within twice the stellar half mass radius. To track back galaxies in time we exclusively use the merger trees based on following baryonic particles ('SubLink_gal'). Plots that display summary statistics of galaxy populations use a running median with a bin size of 0.25-0.3 dex, which is adapted, if necessary, to ensure a minimum number of ten galaxies per bin. Furthermore, all displayed quantities are in physical units and all provided SubfindIDs refer to galaxies at \(\mathrm{z}=0\).
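These conventions can be sketched numerically; the following R snippet works on synthetic particle arrays (not the actual TNG50 data), approximates 'most bound' by the lowest potential energy, and uses placeholder values throughout.

```r
## Systemic velocity and face-on orientation from synthetic star particles.
set.seed(11)
n    <- 5000
pos  <- matrix(rnorm(3 * n, sd = 2), ncol = 3)     # kpc, already centred on the galaxy
vel  <- matrix(rnorm(3 * n, sd = 50), ncol = 3)    # km/s
mass <- rep(8.4e4, n)                              # Msun, TNG50 baryonic mass resolution
pot  <- -1 / (1 + rowSums(pos^2))                  # placeholder potential energies

## systemic velocity: median velocity of the 5% most bound stellar particles
bound <- order(pot)[seq_len(ceiling(0.05 * n))]
v_sys <- apply(vel[bound, ], 2, median)
vel   <- sweep(vel, 2, v_sys)

## total stellar angular momentum within twice the stellar half-mass radius
r     <- sqrt(rowSums(pos^2))
ord   <- order(r)
rhalf <- r[ord][which(cumsum(mass[ord]) >= 0.5 * sum(mass))[1]]
inr   <- r < 2 * rhalf
L <- colSums(mass[inr] *
       cbind(pos[inr, 2] * vel[inr, 3] - pos[inr, 3] * vel[inr, 2],
             pos[inr, 3] * vel[inr, 1] - pos[inr, 1] * vel[inr, 3],
             pos[inr, 1] * vel[inr, 2] - pos[inr, 2] * vel[inr, 1]))

## rotation that maps the angular-momentum direction onto the z-axis (face-on view)
zhat <- L / sqrt(sum(L^2))
xhat <- c(1, 0, 0) - sum(c(1, 0, 0) * zhat) * zhat
xhat <- xhat / sqrt(sum(xhat^2))
yhat <- c(zhat[2] * xhat[3] - zhat[3] * xhat[2],
          zhat[3] * xhat[1] - zhat[1] * xhat[3],
          zhat[1] * xhat[2] - zhat[2] * xhat[1])
R_rot   <- rbind(xhat, yhat, zhat)   # rows are the new basis vectors
pos_rot <- pos %*% t(R_rot)          # disc now lies in the x-y plane
```

The rotation is built from an orthonormal basis whose third axis is the angular-momentum direction, so that projecting positions onto this basis yields face-on coordinates.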
Throughout this study the terms in-situ, migrated and ex-situ always refer to stars within the central 500 pc of galaxies unless otherwise stated.
### Galaxy characteristics and properties of their stars
Throughout this study we are interested in two sets of demographics: 1) How does the central mass assembly of galaxies change as a function of a galaxy's overall bulk properties?, 2) How do the intrinsic properties of stars in the center of galaxies differ for different origins?
To address the first question we do not only study the central 500 pc of galaxies as a function of the galaxy's total stellar (dynamical) mass, but we also divide our galaxy sample into different types of galaxies characterized at \(\mathrm{z}=0\). To address the second question we study individual properties of stars (i.e. stellar particles) in the
center of galaxies at z = 0. These investigated characteristics are briefly summarized in Table 1, whereas a detailed description on their calculations can be found in Appendix A.
### Defining stars belonging to a galaxy's center
The most straightforward way to define a galaxy's center at z = 0 is to select all stellar particles within a 3D spherical aperture with a given radius \(r_{\rm cut}\) around its center. This simple selection will give us knowledge about stellar particles that have an _instantaneous_ radius smaller than the selected aperture. However, as we are interested in the mass assembly of the center of galaxies, we want to make sure that selected particles roughly stay inside the spherical aperture over their orbital time at z = 0. This ensures that we track particles that changed their orbit, should they have migrated to the center, and not particles that are just on more eccentric orbits.
To estimate, whether the particles are on such orbits confined to the center at z = 0, we calculate the specific energy \(E_{\rm cut}\) a particle on a circular orbit with guiding radius \(r_{\rm cut}\) would have, i.e.:
\[E_{\rm cut}=\frac{v_{\rm circ}(r_{\rm cut})^{2}}{2}+\Phi(r_{\rm cut}). \tag{1}\]
The circular velocity \(v_{\rm circ}\) is calculated from the spherically enclosed mass (stellar, gas and dark matter particles) \(v_{\rm circ}(r)^{2}=\frac{GM(<r)}{r}\), whereas the gravitational potential energy \(\Phi\) is given by the simulation and interpolated to \(r_{\rm cut}\). Stellar particles with total energies \(E=\frac{|\mathbf{v}|^{2}}{2}+\Phi(\mathbf{x})\) less than \(E_{\rm cut}\) should roughly be confined on orbits that are within the spherical volume with radius \(r_{\rm cut}\), whereas particles with higher energies are able to move to larger radii and hence spend less time in the center.
We additionally enforce that the specific angular momentum in the z-direction \(L_{\rm z}\) of stellar particles in the center lies between \(L_{\rm cut}=\pm v_{\rm circ}r_{\rm cut}\), as we noticed that some lower mass galaxies with stellar masses \(\lesssim 10^{10}\,\mathrm{M}_{\odot}\) had very large \(L_{\rm z}\) and hence large radii with \(E<E_{\rm cut}\), which probably stems from the fact that they are undergoing tidal stripping at the present time. If particles with large orbital radii (\(>2\,\mathrm{kpc}\)) still persisted after this cut, we disregard them as well. These additional steps do not significantly affect the number of central particles selected for galaxies with stellar masses \(\gtrsim 10^{10}\,\mathrm{M}_{\odot}\).
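A minimal sketch of this selection on synthetic data is given below; a Plummer sphere stands in for the simulation's enclosed-mass profile and potential, and all numbers are illustrative rather than taken from TNG50.

```r
## Select stars confined to the central 500 pc via E_cut (Eq. 1) and an Lz cut.
G     <- 4.30091e-6                     # gravitational constant, kpc (km/s)^2 / Msun
r_cut <- 0.5                            # kpc

M_tot <- 1e10; a <- 1.0                 # Msun, kpc (placeholder mass model)
Menc  <- function(r) M_tot * r^3 / (r^2 + a^2)^1.5      # spherically enclosed mass
Phi   <- function(r) -G * M_tot / sqrt(r^2 + a^2)       # gravitational potential
vc2   <- function(r) G * Menc(r) / r                    # circular velocity squared

E_cut <- 0.5 * vc2(r_cut) + Phi(r_cut)                  # Eq. (1)
L_cut <- sqrt(vc2(r_cut)) * r_cut                       # angular-momentum threshold

set.seed(5)
n   <- 2e4
pos <- matrix(rnorm(3 * n, sd = 3), ncol = 3)           # kpc, galaxy-centred
vel <- matrix(rnorm(3 * n, sd = 80), ncol = 3)          # km/s

r   <- sqrt(rowSums(pos^2))
E   <- 0.5 * rowSums(vel^2) + Phi(r)                    # specific total energy
Lz  <- pos[, 1] * vel[, 2] - pos[, 2] * vel[, 1]        # specific z angular momentum
central <- (E < E_cut) & (abs(Lz) < L_cut)
sum(central)                                            # stars confined to the centre
```

In practice \(\Phi\) is taken directly from the simulation output and interpolated to \(r_{\rm cut}\), rather than from an analytic profile.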
In general, the selection based on Equation 1 is a simplification as it assumes a spherical mass distribution, but it gives a good enough estimate of which particles are truly confined to the center without actually integrating their orbits; see Appendix B1 for validation of this with two galaxies contained in the subbox with higher time cadence. We visualize the difference between a simple selection in radius and the one in energy using Equation 1 in Figure B1.
#### 2.4.1 Choice of the central region
The last step in selecting stellar particles belonging to a galaxy's center is to set a value for \(r_{\rm cut}\), with which we can in turn calculate \(E_{\rm cut}\).
We choose a fixed value of \(500\,\mathrm{pc}\) for \(r_{\rm cut}\) across all galaxies to avoid running too close to the numerical softening length (see Section 5.4 for further elaboration on this). We explicitly do not adopt a mass-dependent size: already with a 10% scaling of the mass-size relation of TNG50 galaxies, we reach the softening length for galaxies of \(10^{10}\,\mathrm{M}_{\odot}\) in stellar mass, while for the highest mass galaxies we approach \(5\,\mathrm{kpc}\), which we do not deem to be central anymore. We also refer the reader to Section 5.4 and Appendix D for a more detailed discussion and investigation of numerical resolution effects and the choice of \(r_{\rm cut}\).
#### 2.4.2 Galaxy sample selection
Due to the choice of a fixed central aperture of \(500\,\mathrm{pc}\) we have to make some selection for our galaxy sample considered in this analysis.
Generally, sizes of TNG50 galaxies are numerically well converged above a stellar mass of \(\sim 3\times 10^{8}\,\mathrm{M}_{\odot}\)(see Pillepich et al., 2019) at z = 0, but we employ a slightly higher lower mass cut of \(5\times 10^{8}\,\mathrm{M}_{\odot}\) ensuring that the galaxies have a sufficient number of stellar particles for our analysis. We also only consider subhalos/galaxies that are of cosmological origin (i.e. SubhaloFlag is true). Any scaling relations used in this analysis such as the galaxy mass-size or the stellar-mass-black-hole-mass relation (to determine for example if galaxies lie above or below the median at fixed stellar mass) are always computed with respect to this galaxy sample, which contains 4344 galaxies.
Furthermore, for our main analysis of the centers of TNG galaxies,
\begin{table}
\begin{tabular}{l l l l} Property & Short description & Detailed description & Results \\ \hline \hline \multicolumn{4}{l}{**Overall galaxy**} \\ \hline _Mass_ & total stellar or dynamical (i.e. stars+gas+dark) mass & & \\ _Environment_ & central or satellite & \\ _Star formation activity_ & star forming or quenched & \\ _Morphology_ & (kinematics) disk or bulge dominated & Appendix A1 & Section 4.1 \\ _Bar-like feature_ & present or not, based on Fourier decomposition & \\ _AGN feedback_ & above or below average AGN feedback based on mass of the SMBH & \\ _Physical Size_ & compact or extended with respect to the mass-size relation & \\ \hline \hline \multicolumn{4}{l}{**Individual stellar particle**} \\ \hline _Age_ [Gyr] & the lookback time when the star was born & & \\ _Metallicity_ [\(\log_{10}Z/Z_{\odot}\)] & the total amount of metals & & \\ _[Mg/Fe]_ [dex] & the abundance of magnesium as a proxy for \(\alpha\)-elements & Appendix A2 & Section 4.2 \\ _Circularity_ \(\epsilon\) & indicates the type of orbit the star is on & \\ \end{tabular}
\end{table}
Table 1: Properties of galaxies and their individual stars (stellar particles) in TNG50 at z = 0 investigated in this study. A detailed description on the exact calculation of the properties as well as the results with respect to the centers of galaxies are found in the indicated sections.
we enforce that the ratio of the 3D stellar half mass radius \(\mathrm{R}_{1/2}\) and the central aperture (\(\mathrm{r}_{\mathrm{cut}}=500\,\mathrm{pc}\)) is greater than four, i.e. \(\mathrm{R}_{1/2}\geq 2\,\mathrm{kpc}\). Otherwise, galaxies are too compact for our selected central aperture and about half of the entire galaxy will be classified as "central". Additionally, we make sure that at least a hundred stellar particles are within the central 500 pc according to Section 2.4, otherwise the galaxy is disregarded.
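The sample cuts can be summarized in a short sketch operating on a hypothetical subhalo table; the column names are illustrative and do not correspond to actual TNG50 catalogue fields.

```r
## Apply the z = 0 sample cuts to a synthetic subhalo table.
set.seed(4)
n_sub <- 300
subs <- data.frame(
  mstar      = 10^runif(n_sub, 8, 12),                  # total stellar mass, Msun
  rhalf      = 10^runif(n_sub, -0.3, 1.0),              # 3D stellar half-mass radius, kpc
  cosmo_flag = sample(c(TRUE, FALSE), n_sub, replace = TRUE, prob = c(0.97, 0.03)),
  n_central  = rpois(n_sub, 800)                        # stellar particles within 500 pc
)
sample_sel <- subset(subs, mstar >= 5e8 & cosmo_flag &
                           rhalf >= 2 & n_central >= 100)
nrow(sample_sel)
```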
Our final galaxy sample selection yields 2531 TNG50 galaxies, whose masses and sizes are visualized in Figure 1. The data points are color-coded by the percentage of stars inside the central 500 pc relative to the total number of stars. The color trend is uniform neither in the direction of increasing stellar mass nor in that of increasing size. This hints at different density profiles for galaxies across the stellar mass-size plane. We note that our subsequent results do not show any strong differential trends, even though our constraint of \(\mathrm{R}_{1/2}\geq 2\,\mathrm{kpc}\) disregards half the galaxies with stellar masses \(<10^{9.5}\,\mathrm{M}_{\odot}\) (see Section 5.4 for a further discussion).
In Figure 2 we show the cumulative number of stellar particles as a function of their instantaneous radius at \(\mathrm{z}=0\) for our galaxy sample. The average number of stellar particles in the center, i.e. within 500 pc, is around \(10^{3}\) for galaxy stellar masses between \(5\times 10^{8}\,\mathrm{M}_{\odot}\) and \(5\times 10^{9}\,\mathrm{M}_{\odot}\) and increases towards \(10^{5}\) for the highest mass bin4. Hence, our choice for \(r_{\mathrm{cut}}\) ensures that we have enough stellar particles in the center to reliably study their properties. We can also observe a turn-over in the stellar particle number profile at radii around \(0.5-1\,\mathrm{kpc}\) confirming that we are indeed probing the densest (central) region of TNG50 galaxies.
Footnote 4: A synonymous measure can be achieved with the StellarHsml field, which gives an approximation for the spatial extent a single stellar particle samples from the underlying stellar density field. The spherical radius of stars within 500 pc ranges between \(10-100\,\mathrm{pc}\) for the highest to lowest mass galaxies respectively.
## 3 The different origins of stars in the center of TNG50 galaxies
After selecting stars in the center of TNG50 galaxies at \(\mathrm{z}=0\), we investigate their different origins. We find three general populations of stars in the central region of galaxies, which we describe in Section 3.1 in detail. We also present the distribution of their birth origin for stacks in galaxy stellar mass in Section 3.2.
### Definition of different origins
We define the following different origins of stars in the center of TNG50 galaxies at \(\mathrm{z}=0\):
* _in-situ_: stars were born inside the host galaxy's center and are still found there at \(\mathrm{z}=0\).
* _migrated_: stars were born gravitationally bound inside the host galaxy but outside its center. At \(\mathrm{z}=0\) they reside in the host galaxy's center.
* _ex-situ_: stars were born inside other galaxies, which merged with the host and are ultimately found inside the host's center at \(\mathrm{z}=0\).
Figure 1: **Sample selection of TNG50 galaxies at \(\mathrm{z}=0\)**. Stellar mass-size relation of TNG50 galaxies at \(\mathrm{z}=0\) considered in this analysis colored coded according to their percentage of stars in the center relative to their total number of stellar particles. We employ a lower total stellar mass cut of \(5\times 10^{8}\,\mathrm{M}_{\odot}\) leaving 4344 Galaxies. We additionally impose a minimum 3D stellar half mass radius \(\mathrm{R}_{1/2}\) of 2 kpc (i.e., \(\mathrm{R}_{1/2}/\mathrm{r}_{\mathrm{cut}}\geq 4\) with \(\mathrm{r}_{\mathrm{cut}}=500\,\mathrm{pc}\)) resulting in a sample size of 2531 galaxies. Excluded galaxies are shown with grey points and percentages show their number fractions in bins of 0.5 dex. The median stellar mass–size relation of _all_ TNG50 galaxies is shown as the black line.
Figure 2: **Number of stellar particles in the central 500 pc of our TNG50 galaxy sample at \(\mathrm{z}=0\)**. Cumulative number of stellar particles as a function of radius per individual galaxy are shown as thin gray lines. The thick colored lines show the average per galaxy stellar mass bin as depicted by the colorbar. The dashed black line shows our adopted \(r_{\mathrm{cut}}\) value of 500 pc. The average number of stellar particles within \(r_{\mathrm{cut}}\) lies between \(10^{3}\) and \(10^{5}\) for the lowest and highest galaxy mass bins respectively. No individual galaxy in our sample has less than 100 stellar particles in the center.
#### 3.1.1 Born inside or outside the host galaxy
To determine whether a star is born inside a galaxy or was brought in through merger events, we use the stellar assembly catalogue5 produced by methods of Rodriguez-Gomez et al. (2016) for TNG50. This classifies stellar particles that formed along the main progenitor branch of a galaxy, i.e. the galaxy with the most massive history behind it, as in-situ (InSitu = 1) and otherwise as ex-situ (InSitu = 0). The ex-situ stars generally have two possible origins: they either came from galaxies that completely merged with the main galaxy, i.e. they are present in the merger tree of the host, or were stripped from galaxies that do not belong to the host's merger tree, e.g. flybys.
Footnote 5: This particular catalogue has not been publicly released.
Additionally, we treat subhalos/satellites that directly merged onto the main progenitor branch of a galaxy but are flagged as not being of cosmological origin (i.e. SubhaloFlag = 0 in the subhalo catalogue) differently in this study. These subhalos are often formed within another galaxy as e.g. a fragment of the baryonic disk, contain little dark matter and hence are not thought of as galaxies (see also Nelson et al. 2019a, Section 5.2). Because the construction of the stellar assembly catalogue involves the use of merger trees, which only track stellar particles and star-forming gas cells of subhalos, these spurious galaxies are counted as of ex-situ origin. Here, we change their labelling back to in-situ (i.e. their InSitu flag in the stellar assembly catalogue becomes true again) for now (see Section 3.1.3 for the implications of this), because we only consider ex-situ particles coming from true external galaxies. We verify with Figure 11 in Appendix C that this change does not alter the overall _total_ ex-situ stellar mass fraction of TNG50 galaxies significantly. We note that spurious galaxies brought to the main progenitor branch of the host galaxy through prior merging with a real galaxy are continued to be counted as ex-situ.
#### 3.1.2 Born in-situ or migrated to the center
To address whether a stellar particle is born inside the center of the host galaxy or migrated to the center from elsewhere inside the host galaxy, we need to determine its birth radius. A stellar particle with a birth radius smaller than \(\rm{r_{cut}}=500\,pc\) is then consequently born in-situ and otherwise counts as migrated6.
Footnote 6: We here apply a simple cut in the birth radius instead of calculating \(E_{\rm{cut}}\) (i.e. following Section 2.4), as the potential is not recorded for every snapshot.
In TNG, two new fields (BirthPos and BirthVel) for their stellar particles were added. These represent the spatial position and velocity of the star-forming gas cell that generated the stellar particle at its exact time of birth (i.e. GFM_StellarFormationTime). In theory, this provides us with knowledge of the _exact_ birth condition of a stellar particle at the original time step resolution of the simulation; and not only at the output time steps of the snapshots.
Because these quantities are provided in the reference frame of the simulation box, we need to center them on the reference frame of the galaxy of interest. This, however, becomes an impossible task to do with the precision needed for our analysis, as we only know the center position of subhalos at the one hundred output snapshots, but the information on their trajectories in between is lost. We find that even interpolating the subhalos' position with a higher order spline to the exact birth times of stars can lead to centering offsets of several kpc, especially when there is a merger in process or a pericenter passage around another galaxy (see Figure 12). As we are interested in typical scales of one kpc or less in this study, this problem is severe and will result in a strong bias towards stars being classified as migrated even though they were formed inside our selected spherical aperture.
We therefore define the birth position of stellar particles as the position they have in the snapshot they first appear in. Practically this is done by matching particles at z = 0 to their birth snapshot through their unique ParticleIDs. The caveat of this approach is that the stellar particles have already moved since their exact formation time, which can also lead to a wrong classification of migrated and in-situ stars. However, the error created by this approach is much smaller than the incorrect centering described above (see Figure 13).
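A minimal sketch of this matching step is given below, with small hypothetical arrays standing in for the z = 0 selection and the birth snapshot; in the actual analysis the positions and the subhalo centre come from the snapshot in which each particle first appears, and ex-situ particles flagged by the stellar assembly catalogue are excluded beforehand.

```r
## Classify central z = 0 stars as in-situ vs migrated via their birth-snapshot radius.
set.seed(2)
r_cut <- 0.5                                        # kpc

ids_z0       <- c(101L, 205L, 333L, 478L)           # central stars at z = 0
ids_birth    <- c(478L, 333L, 101L, 205L, 999L)     # all stars in their birth snapshot
pos_birth    <- matrix(runif(5 * 3, -5, 5), ncol = 3)    # positions at that time, kpc
centre_birth <- c(0.2, -0.1, 0.05)                  # subhalo centre at that time, kpc

idx     <- match(ids_z0, ids_birth)                 # locate each star via its ParticleID
r_birth <- sqrt(rowSums(sweep(pos_birth[idx, , drop = FALSE], 2, centre_birth)^2))
origin  <- ifelse(r_birth < r_cut, "in-situ", "migrated")
data.frame(ParticleID = ids_z0, r_birth = round(r_birth, 2), origin)
```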
We verify this approach by looking at two subhalos that reside in the subboxes of TNG50. The subbox has 3600 snapshot outputs, which makes it possible to track the center position of galaxies across a much finer time resolution of a few Myr. The reader is referred to Appendix B.2 for details.
#### 3.1.3 Clumpy or smoothly migrated
Because we have changed the InSitu flag from the stellar assembly catalogue for spurious galaxies, in Section 3.1.1, we now find two types of migrated stellar particles in the center of galaxies. Stars either travelled individually ('smoothly' migrated) or together in clumps ('clumpy' migrated) to their galaxy's center. Smoothly migrated stars are genuinely born on the main progenitor branch of the subhalo/galaxy in question and the clumpy migrated stars originate from these spurious galaxies, i.e. stellar clumps.
Generally, these clumps are ubiquitous in TNG50 galaxies, about 36% of all galaxies considered in this work have at least one throughout their life time. In stellar or gas surface mass density maps they look like massive star cluster like objects (see Figure 11 for an example) that form within spiral arms or gaseous (disk) fragments during galaxy interactions. However, we want to be extremely cautious here, as it is unclear, if their formation is physical or due to some numerical artifact, even though measures against artificial fragmentation are in place. In fact, their sizes (i.e. 3D stellar half mass radii) lie mostly below the gravitational softening length of TNG50.
Once these clumps are formed, however, their dynamical evolution within the host galaxy is determined by gravity, which we believe is well captured in TNG50 (modulo the softening). Hence, depending on their density and the exerted tidal forces on the clumps, they are either completely disrupted or travel to the center of their host galaxy due to dynamical friction and deposit their stellar particles there. Their typical stellar masses are \(\sim 10^{8}\,\mathrm{M}_{\odot}\). We point the interested reader to Appendix E for more statistics on the clumps and their properties. We provide an extensive discussion on the existence and formation of stellar clumps in simulations and observation in Section 5.3.
For the rest of the paper, we sometimes make the distinction between migrated particles coming from the'smooth' or 'clumpy' migration, if it is explicitly stated. Otherwise, all general references to migrated properties always include both types.
### Birth locations of the central stars
The distributions of birth radii of in-situ, migrated and ex-situ central stars are illustrated in Figure 3 in stacks of galaxy stellar mass.
The in-situ stars are born (by definition) in the center of the host galaxy at z = 0. The peak of the birth radii distribution is around \(200-300\,pc\) for galaxies larger than \(10^{10}\,\mathrm{M}_{\odot}\) and shifts slightly towards larger radii for the lower mass galaxy bins. We also see that higher mass galaxies birth more in-situ stars at all radii and hence are more centrally concentrated (see also Figure 2).
#### 3.2.1 Individually migrated stars originate close to the galaxy's center
Most of the smoothly migrated stellar particles were also born close to the center with radii between 500 pc and 1 kpc, which is partly due to how we have defined them (i.e. purely based on their birth radius) and partly a consequence of the typical density profile of galaxies (i.e. more stars reside in the center of galaxies). For galaxies below \(10^{10}\) M\({}_{\odot}\) the distribution of birth radii declines exponentially reaching the highest values of about 10 kpc. The lowest mass galaxies in our sample (\(\leq 5\times 10^{9}\) M\({}_{\odot}\)) have 11% of smoothly migrated stars, which are born in the range of \(1-10\) kpc, whereas this increases slightly to 17% for the next higher mass bin (\(\leq 10^{10}\) M\({}_{\odot}\)).
For galaxies above \(10^{10}\) M\({}_{\odot}\) we observe a plateau for the distribution of birth radii starting at \(\sim 10\) kpc, which stops at around 30 kpc and 60 kpc for galaxies with \(\leq 5\times 10^{9}\) M\({}_{\odot}\) and \(>5\times 10^{9}\) M\({}_{\odot}\) respectively. The migrated stars originating from these large distances likely come from gas that was stripped during a merger, but which was already attributed to be gravitationally bound to the primary galaxy according to the Subfind algorithm and hence was counted as being born in-situ to the primary host. The percentage of smoothly migrated stars with birth radii larger than 1 kpc is around 20% and 14% for the two highest stellar mass bins respectively.
#### 3.2.2 Stars migrated in clumps originate from the outskirts of galaxies
The clumpy migrated stars show a distinctively different distribution than the smoothly migrated ones. For galaxies below \(10^{10}\) M\({}_{\odot}\) their contribution is negligible. For galaxies between \(10^{10}\) M\({}_{\odot}\) and \(5\times 10^{10}\) M\({}_{\odot}\) the clumpy migrated stars make up only 3% of the total migrated stars, whereas for galaxies above \(5\times 10^{10}\) M\({}_{\odot}\) the contribution rises to almost 50%. Therefore, clump migration is only important for high mass galaxies, where it becomes the dominant driver for contributing migrated stars to the centers (see also Appendix E).
Furthermore, the peak of the birth radii distribution of clumpy migrated stars is above 10 kpc for the high mass galaxies. This is in agreement with the fact that the gaseous disk of galaxies is much more extended than the stellar one (e.g. Nelson et al., 2012). Stars travelling in stellar clumps are therefore able to migrate to the center of galaxies from much farther distances compared to when they travel individually.
#### 3.2.3 Central ex-situ stars originate from the nuclei of their birth galaxies
Regarding the ex-situ stars, we investigate two different locations: 1) their birth place with respect to their _birth_ host galaxy and 2) the location they were deposited inside their z = 0 _host_ (primary). The latter is defined as the radius the stellar particles have with respect to the primary at stripping time, i.e. the time they last switched galaxies.
We show the distribution of these two quantities also in Figure 3 in the same stacks of galaxy stellar mass, as well as the 2D distribution of all ex-situ stars for these two radii. About half of all ex-situ stars that reside in the center of galaxies at z = 0 exhibit values between 100 pc and 1 kpc for both radii respectively. This means that the ex-situ stars are also born in the center of their respective birth galaxies
Figure 3: **Distribution of birth radii of in-situ, migrated and ex-situ stellar populations in the central 500 pc of TNG50 galaxies at z = 0.**_Left_: Histograms of birth radii of the in-situ, ‘smooth’ migrated and ‘clumpy’ migrated stars colored according to stacks of galaxy stellar mass. The left hand side of the y-axis shows stacked particle mass, whereas the right hand side shows the number of stellar particles. The migrated stars can originate from large radii (\(>10\) kpc) and the smoothly and clumpy migrated stars show different distributions. _Right_: 2D histogram of the birth radius with respect to the host galaxy at birth and the radius with respect to the primary galaxy (i.e. final z = 0 host) at the time of stripping for all central ex-situ stars in our galaxy sample. The contours (from thicker to thinner lines) include 20%, 50%, 90% and 99% of all ex-situ particles. The respective 1D histograms for stacks in galaxy stellar mass are also shown. Most ex-situ stars are born in the central \(\sim 1\) kpc of their birth galaxy and stay together until they are deposited also within the central \(\sim 1\) kpc of their z = 0 host galaxy.
as well as remain in said center until they are deposited right in the center of the primary galaxy during the merger process. Hence, the central, most bound cores of galaxies are more likely to stay together during accretion events until they arrive close to the center of the primary galaxy and ultimately deposit a large quantity of stars there. This is a consequence of mergers preserving the rank order of the particles' binding energy (Barnes, 1988; Hopkins et al., 2009).
We also find two other cases of ex-situ stars, albeit much lower in number. Firstly, TNG predicts a slight excess of ex-situ stars that are born at larger radii (\(1-100\) kpc), but are still deposited close to the primary galaxy at stripping time, i.e. within \(\sim 1\) kpc. These stars represent a second generation of 'migrated' stars; or likely, in the case of ex-situ stars with birth radii of \(\geq 10\) kpc, stars that were formed from stripped gas during secondary mergers, which only appear for the most massive \(\mathrm{z}=0\) hosts. Consequently, these ex-situ stars were born at large radii in their respective host galaxies (i.e. which will become the secondary galaxy during the merger process onto the \(\mathrm{z}=0\) host), then migrated to the center of said galaxy in order to be deposited close to the center of the primary host during accretion. We confirm this by explicitly checking that their radii are indeed central (\(\lesssim 1\) kpc) with respect to the merging host galaxy one snapshot before the merger coalesces.
The second case represents ex-situ stars that were deposited at larger radii from the primary (\(>1\) kpc), but born within the central 1 kpc of their birth galaxy. Despite being stripped outside the center of the \(\mathrm{z}=0\) host galaxy, these stars were still able to migrate such that they are found in the center of their respective galaxy at \(\mathrm{z}=0\). There is a possibility that these stars were stripped earlier, i.e. before the merger coalesced, but their dynamics still followed the orbit of the galaxy undergoing the merger, and hence they could arrive at the center of the final host galaxy.
## 4 The central in-situ, migrated and ex-situ populations across TNG50 galaxies
In this section we present our results of in-situ, migrated and ex-situ populations within the central \(\sim 500\) pc of TNG50 galaxies.
We study their contributions across different galaxy properties (Section 4.1) and examine differences in their stellar population and dynamical properties (Section 4.2).
### Galaxy population demographics
Below we depict the contribution of the different origins to the central stellar mass as an overall trend with galaxy mass (Section 4.1.1), in correlation to each other (Section 4.1.2) and for different galaxy types (Section 4.1.3).
#### 4.1.1 Galaxy mass trends
In Figure 4 we give an overview of the absolute and relative contribution of the ex-situ, migrated and in-situ population across galaxy masses (both stellar and dynamical) in TNG50.
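The median trends and percentile bands shown in Figure 4 amount to a running median in bins of galaxy mass; a minimal sketch of such a computation, with hypothetical per-galaxy arrays and indicative binning rather than the actual analysis inputs, could look as follows:

```python
import numpy as np

def running_median(x, y, log_bins):
    """Median and 16th/84th percentiles of y in logarithmic bins of x."""
    idx = np.digitize(np.log10(x), log_bins)
    med, p16, p84 = [], [], []
    for i in range(1, len(log_bins)):
        vals = y[idx == i]
        if vals.size == 0:
            med.append(np.nan); p16.append(np.nan); p84.append(np.nan)
        else:
            med.append(np.median(vals))
            p16.append(np.percentile(vals, 16))
            p84.append(np.percentile(vals, 84))
    return np.array(med), np.array(p16), np.array(p84)

# Hypothetical per-galaxy quantities in Msun (mock data).
rng = np.random.default_rng(1)
m_star_total     = 10**rng.uniform(8.7, 12.0, 2500)
m_central_insitu = 0.05 * m_star_total * rng.lognormal(0.0, 0.3, 2500)

log_bins = np.arange(8.75, 12.25, 0.25)               # log10(M*/Msun)
med, p16, p84 = running_median(m_star_total, m_central_insitu, log_bins)
# Rough central mass fraction at the bin centers:
frac_med = med / 10**(0.5 * (log_bins[:-1] + log_bins[1:]))
```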
For all three populations the central stellar mass increases with increasing galaxy mass, with the in-situ population dominating at all galaxy masses. Whereas the relations for the in-situ and the smoothly migrated stars have the same shape, the slope for the ex-situ population is steeper. The latter also shows a larger overall scatter due to the stochasticity of merger events contributing stars to the center.
Even though the fractional mass of the ex-situ population in the center is negligible for galaxy stellar masses below \(10^{11}\) M\({}_{\odot}\), there are only 227 (9%) galaxies in our total sample that have no central ex-situ mass, i.e. they do not possess a single stellar particle of ex-situ origin in their central 500 pc, or in other words they possess less than \(\sim 8\times 10^{4}\) M\({}_{\odot}\), the mass of a stellar particle, in ex-situ stars. Above \(10^{11}\) M\({}_{\odot}\) the ex-situ mass becomes of the same order as the in-situ and migrated population, which is a consequence of mergers contributing a significant amount of stellar mass to the build-up of these galaxies. The ex-situ mass reaches about 10% of the total central stellar mass at the highest galaxy masses, albeit with a large scatter of up to 60%.
Around galaxy stellar masses of about \(5\times 10^{10}\) M\({}_{\odot}\), the relation flattens for the in-situ and smoothly migrated stars, with the in-situ population reaching about 4% of the total galaxy stellar mass. Although we have low number statistics of galaxies in this regime within the TNG50 volume (there are 18 galaxies with stellar masses above \(5\times 10^{11}\) M\({}_{\odot}\)), it is reasonable that the in-situ mass goes down, because the ex-situ mass increases and, in addition, galaxies become quenched by AGN feedback. The consequent increase in stochasticity is also seen in the larger scatter of the in-situ and migrated population at the highest galaxy stellar masses7.
Footnote 7: Another possibility for the large scatter at high galaxy stellar masses for the in-situ and migrated central stellar mass could be stars formed from accreted gas, which was brought in by gas-rich mergers. We do not quantify this further as it is beyond the scope of this study.
The contribution of clumpy migrated stars to the overall central migrated population only starts to significantly affect galaxies with stellar masses higher than \(5\times 10^{10}\) M\({}_{\odot}\). For galaxies above \(2\times 10^{11}\) M\({}_{\odot}\) the clumps are responsible for roughly quadrupling the mass of migrated stars, or, in fractional terms, for increasing the contribution of migrated stars to the total central mass from below 10% to slightly above 20%. Hence, the clumps are an important driver for bringing stars from the outskirts of galaxies into the centers in TNG50.
Taking into account the entire migrated population ('smooth+clumpy'), we find a contribution of around 20% to the total stellar mass in the center across all TNG50 galaxies. Interestingly, the total central migrated fraction around galaxy stellar masses of \(\sim 10^{10}\) M\({}_{\odot}\) slightly increases, with the 84th percentile reaching almost 40%. We explicitly confirm that this is _not_ due to mixing galaxies with different sizes and hence different total central stellar masses (see also Figure 1).
The statements made so far also apply when correlating the central stellar masses of the three populations with the total dynamical mass of TNG50 galaxies. The larger scatter in all three relations is due to the scatter in the stellar-to-halo mass relation.
#### 4.1.2 The diversity of central stellar mass at fixed galaxy mass
In Figure 5 we show three correlations between the central ex-situ, migrated and in-situ stellar masses as 2D Gaussian kernel density estimates in bins of total galaxy stellar mass. The bins were specifically chosen based on the change of the average migrated fraction as a function of galaxy stellar mass and, in the case of the highest mass bin, to ensure enough galaxies per bin to reliably perform the kernel density estimate.
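A sketch of how such binned 2D kernel density estimates can be obtained with `scipy.stats.gaussian_kde` is given below; the per-galaxy arrays and mass bins are hypothetical placeholders with mock values, not the actual analysis inputs:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical per-galaxy central masses (Msun) and total stellar masses (mock data).
rng = np.random.default_rng(2)
m_total    = 10**rng.uniform(8.7, 12.0, 2000)
m_insitu   = 10**(0.8 * np.log10(m_total) + rng.normal(0.0, 0.3, 2000) - 1.5)
m_migrated = m_insitu * 10**rng.normal(-0.6, 0.3, 2000)

mass_bins = [(5e8, 5e9), (5e9, 1e10), (1e10, 5e10), (5e10, 5e12)]
grids = {}
for lo, hi in mass_bins:
    sel = (m_total >= lo) & (m_total < hi)
    if sel.sum() < 10:          # need enough galaxies for a stable KDE
        continue
    sample = np.vstack([np.log10(m_insitu[sel]), np.log10(m_migrated[sel])])
    kde = gaussian_kde(sample)
    # Evaluate on a grid; contour levels enclosing 20/50/90% of galaxies can
    # then be derived from the cumulative density.
    xg, yg = np.meshgrid(np.linspace(5, 11, 100), np.linspace(4, 11, 100))
    grids[(lo, hi)] = kde(np.vstack([xg.ravel(), yg.ravel()])).reshape(xg.shape)
```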
Figure 4: **Central (500 pc) stellar mass of the in-situ, migrated and ex-situ populations of TNG50 galaxies at \(z=0\).** Median trends of central stellar mass (_top panels_) and central stellar mass fraction (_bottom panels_) as a function of the galaxies’ total stellar mass (_left_) and total dynamical mass (_right_), divided into the three origins: in-situ (_pink_), migrated (_orange_) and ex-situ (_blue_). The migrated population is shown for both ‘smooth+clumpy’ (_solid line_) and just ‘smooth’ (_dashed line_) migration (see Section 3.1.3 for details). Shaded areas show the 16th and 84th percentiles. Overall the in-situ population dominates on average across all galaxy masses, with the migrated population contributing around 20% to the total central stellar mass. Only above galaxy masses of \(10^{11}\)M\({}_{\odot}\) does the ex-situ population start to significantly contribute to the central mass build-up.
Figure 5: **2D distributions of the central (500 pc) in-situ, migrated and ex-situ stellar mass, as well as the _total_ ex-situ stellar mass, of TNG50 galaxies at \(z=0\).** Gaussian kernel density estimates for different combinations of in-situ, migrated and ex-situ mass in the center as well as total ex-situ mass, color-coded according to four different total stellar mass bins. The contours show different percentiles encompassing 1%, 20%, 50% and 90% of all data points (_thickest to thinnest line_). The black dashed line shows the one-to-one relation in all panels. The correlation between the central mass of the in-situ and migrated population follows the one-to-one relation closely with a fixed offset, whereas the central ex-situ population exhibits a large scatter across the other central populations.
_Migrated vs. in-situ mass (Figure 5, first panel):_ The central migrated mass closely tracks the central in-situ mass with a roughly constant offset, as the relation of the central in-situ and migrated mass versus the total stellar galaxy mass is similar.
Nevertheless, for some galaxies the migrated mass is larger than the in-situ mass in the center as seen by the 90% contours for the three highest galaxy mass bins in the range of \(5\times 10^{9}-5\times 10^{12}\) M\({}_{\odot}\). Galaxies in this regime are dominated by clumpy migration. Hence, the mass contributed to the center by clumps can be significant enough to break the otherwise tight one-to-one relation of migrated and in-situ mass.
Lastly, we find for the 90% contour in the highest mass bin for galaxies above \(5\times 10^{10}\) M\({}_{\odot}\) that there is a larger tail of galaxies with lower in-situ and migrated mass in the center. Most galaxies situated in this space have a high ex-situ central mass fraction of 40% or higher.
_Ex-situ vs. in-situ and migrated mass (Figure 5, second and third panels):_ At roughly fixed central in-situ or migrated masses there is a large variety of ex-situ mass that is deposited in the center of galaxies. The scatter of the ex-situ mass in the center increases roughly from four to six dex from the smallest to the largest galaxy stellar mass bin. Compared to that, the scatter in the in-situ and migrated mass direction is rather small, being roughly one dex across all galaxy stellar mass bins. However, some galaxies in mass bins below \(5\times 10^{10}\) M\({}_{\odot}\) have in-situ masses up to one dex below the majority of the other galaxies in their respective bins. Galaxies lying in this region have above average migrated fractions of 40% or more, with some reaching extreme values of above 80%.
The spread in migrated mass compared to the in-situ mass is larger for the highest mass galaxies. This is mainly due to the increased stochasticity in the total _central_ stellar mass for the 18 galaxies above \(5\times 10^{11}\) M\({}_{\odot}\) in galaxy stellar mass, which almost spans one dex as opposed to only a quarter dex for galaxies between \(5\times 10^{10}\) and \(5\times 10^{11}\) M\({}_{\odot}\). These 18 galaxies lie between the 50% and 90% contour and have ex-situ masses spanning from \(10^{7}\) to \(10^{11}\) M\({}_{\odot}\). Their in-situ masses are exclusively below the 50% contour, whereas their respective migrated masses can lie towards lower or higher values.
The peak of the central ex-situ mass distributions begins to rise for galaxies above \(10^{10}\) M\({}_{\odot}\) in total stellar mass, going from about three dex below the one-to-one relation to one dex. This break point roughly translates to \(4\times 10^{8}\) M\({}_{\odot}\) in central in-situ mass and \(1-2\times 10^{8}\) M\({}_{\odot}\) in central migrated mass. The former roughly coincides with the critical mass needed for the SMBH to be in the kinetic feedback mode (e.g. Zinger et al., 2020, Figure 1). The _total_ ex-situ mass also begins to rise for galaxies with total stellar masses above a few \(10^{10}\) M\({}_{\odot}\) (see Figure 11).
_Central ex-situ vs. total ex-situ mass (Figure 5, fourth panel):_ Lastly, we also show the correlation between the _central_ ex-situ mass and the _total_ ex-situ mass, i.e. all stars that were ever accreted onto the z = 0 host galaxy. At fixed total galaxy stellar mass, the slope of the contours shows that a higher total ex-situ mass generally also implies a higher central ex-situ mass. The slope of this correlation is rather steep. While the total ex-situ mass spans approximately two dex per galaxy stellar mass bin, the ex-situ mass in the center spans four to six dex from the lowest to highest galaxy stellar masses. Consequently, it is quite stochastic which merging satellite galaxies deposit stellar mass in the center.
Furthermore, the central density contours shift closer to the one-to-one relation with increasing galaxy stellar mass. This means that more galaxies in the highest mass bin have mergers that are more effective in bringing a larger fraction of their total ex-situ mass into their center, as opposed to lower mass galaxies. Nevertheless, the 90% contours for galaxy stellar masses above \(5\times 10^{9}\) M\({}_{\odot}\) extend right up to the one-to-one relation, meaning that there are some galaxies that have almost all their ex-situ mass in the central 500 pc.
#### 4.1.3 Trends for different galaxy types
Galaxies with different present-day properties are thought to have undergone different formation pathways. Is this reflected in different contributions of in-situ, migrated and ex-situ stars building up the center of these galaxies?
In Figure 6 we show the running median of the central stellar mass of the three origins as a function of total galaxy stellar mass split into six different galaxy properties. The definitions of the different galaxy properties are summarized in Table 1 and described in detail in Appendix A1.
All in all, the most significant differences are seen in the central ex-situ population across various galaxy properties. This significance manifests in separation between the median relations including the scatter (which we do not show, however, in favour of clarity). Small differences for the in-situ and migrated populations are not significant with respect to the scatter around the median relations.
_Centrals vs. Satellites (Figure 6, top left panel):_ On average, centrals and satellites contain the same amount of central in-situ, migrated and ex-situ mass, showing that their central 500 pc is unaffected by their environment at z = 0. This is sensible considering that galaxy centers likely assemble before the galaxy becomes a satellite. Additionally, most environmental effects should first take effect in the outskirts of galaxies. Similarly, we find no significant difference in the central mass of the three populations when the galaxies are divided by the mass of their host halo.
_Quenched vs. Star Forming (Figure 6, top middle panel):_ Quenched galaxies between \(5\times 10^{9}\) and \(5\times 10^{10}\) M\({}_{\odot}\) have slightly higher central in-situ and migrated masses than star forming ones. This difference primarily arises because the star forming galaxies tend to have lower central densities on average than quenched ones.
A larger difference is seen in the ex-situ population. For galaxy stellar masses above \(10^{10}\) M\({}_{\odot}\) the average ex-situ mass starts to rise more rapidly for star forming galaxies than for quenched ones. For galaxies around \(5\times 10^{10}\) M\({}_{\odot}\) this difference becomes largest, with the median central ex-situ mass of star forming galaxies being higher by more than one dex.
While this trend may seem counter-intuitive given the current consensus on galaxy evolution, the difference also holds when considering the _total_ ex-situ stellar mass in TNG50 (and also TNG100), as seen in Figure 11 in Appendix B2. This could be an indication that today's star forming galaxies had more, or larger mass ratio, mergers with galaxies with high gas content at later cosmic times (see Section 5.1 for a further discussion). We obtain a consistent picture when galaxies are divided according to their \(g-i\) colour or total gas mass at z = 0.
_Bulgey vs. Disky (Figure 6, top right panel):_ The in-situ and migrated central mass for disky and bulgey galaxies show a similar trend as for the star forming and quenched population. However, the trend for the central ex-situ mass is distinct. Bulgey galaxies below \(10^{10}\) M\({}_{\odot}\) in stellar mass have higher ex-situ masses (by roughly half a dex) in their centers than their disky counterparts. This difference disappears for galaxy stellar masses above \(10^{10}\) M\({}_{\odot}\).
We have checked the median relation for the _total_ ex-situ mass and find that bulgey galaxies have a constant higher offset of about 0.25 dex compared to disky galaxies across the whole galaxy mass range. Hence, disky galaxies below \(10^{10}\,\mathrm{M}_{\odot}\) have not only lower absolute central and total ex-situ masses, but also a lower central-to-total ex-situ fraction of about 0.4% as compared to 1% for bulge dominated galaxies. Thus the relative amount of ex-situ mass that is deposited in the center might be an important driver for morphological transformation in this lower galaxy mass regime.
For galaxies above \(10^{10}\,\mathrm{M}_{\odot}\) in stellar mass, the central-to-total ex-situ fraction decreases strongly as a function of galaxy stellar mass, with disk galaxies consequently having slightly higher values. This could be an indication that once massive rotational support exists in the stellar component, it is hard to destroy it through mergers. Similar relations are found when adopting other definitions for disky and bulgey galaxies, such as the ratio of the kinetic energy in ordered motion compared to the total kinetic energy (see Rodriguez-Gomez et al., 2017).
_Barred vs. No Bar (Figure 6, bottom left panel)_: For galaxies below \(10^{10}\,\mathrm{M}_{\odot}\) TNG50 predicts no difference in the central in-situ and migrated mass; however, the galaxies with bar-like features have higher ex-situ masses than galaxies with no bar-like features. This trend is similar to the one for bulgey vs. disky galaxies. We have explicitly checked that indeed high ex-situ masses in the center of galaxies within this mass regime mainly occur in bulgey and barred galaxies, whereas bulgey and unbarred galaxies as well as disky galaxies, both barred and unbarred, have lower central ex-situ masses by approximately one dex.
For galaxies above \(10^{10}\,\mathrm{M}_{\odot}\) in total stellar mass, this relation for the central ex-situ mass reverses. In this regime unbarred galaxies have higher ex-situ masses in the center regardless of whether they are disky or bulgey. We find that the same statements for barred and unbarred galaxies across the entire mass range hold when correlating the _total_ ex-situ mass of galaxies.
Lastly, the in-situ and migrated mass in the center is higher for barred galaxies between \(10^{10}\,\mathrm{M}_{\odot}\) and \(10^{11}\,\mathrm{M}_{\odot}\). Hence, barred galaxies in this mass regime have higher central densities than unbarred galaxies in TNG50, which is consistent with observations (see Diaz-Garcia et al., 2016).
_Over- vs. Undermassive Black Holes (Figure 6, bottom middle panel)_: We could expect that AGN feedback has an influence on the stellar mass growth in the center of galaxies. We therefore split our TNG50 sample according to whether the galaxies have an over- or undermassive black hole at z = 0. Identical relations are found when the galaxy population is split according to the cumulative energy injection of each feedback mode or both.
Figure 6: **Differences in the central (500 pc) in-situ, migrated and ex-situ populations for different galaxy properties of TNG50 galaxies at z = 0.** Each panel shows median trends of central stellar mass divided into the three origins: in-situ (_pink_), migrated (_orange_) and ex-situ (_blue_) as a function of the galaxies’ total stellar mass. The dashed and solid lines split the TNG50 galaxy population according to different properties, which are (_from left to right_): central vs. satellite, quenched vs. star forming, bulgey vs. disky, ‘barred’ vs. ‘non barred’, overmassive vs. undermassive black holes (BH) and extended vs. compact galaxies. The bracketed numbers show the total number of galaxies in each category.
On average, galaxies between \(10^{10}\,\mathrm{M}_{\odot}\) and \(10^{11}\,\mathrm{M}_{\odot}\) in stellar mass with an undermassive black hole have a higher central ex-situ mass by about one dex than galaxies with an overmassive black hole at the same stellar masses. For galaxies with total stellar masses in the range of \(5\times 10^{9}-5\times 10^{10}\,\mathrm{M}_{\odot}\), the ones with an overmassive black hole have in-situ and migrated masses in the center that are about half a dex higher than for galaxies with an undermassive black hole. Consequently, galaxies with overmassive black holes in this mass regime have higher central densities.
We find that nearly all of these differences in the in-situ, migrated and ex-situ mass for galaxies with over- and undermassive black holes emerge because galaxies at fixed stellar mass with overmassive black holes tend to be more compact in TNG50, and vice versa. Therefore, a similar behaviour of the central stellar mass in the three populations with total galaxy stellar mass is found when the galaxy population is split into compact and extended galaxies (see below). This connection between black hole masses, central densities and sizes of galaxies at fixed galaxy stellar mass is also found in observations (Chen et al., 2020).
_Extended vs. Compact (Figure 6, bottom right panel)_: Extended galaxies tend to have on average more ex-situ mass in the center than compact galaxies at the same total stellar mass. The difference is around one dex for galaxies above \(10^{10}\,\mathrm{M}_{\odot}\) in stellar mass.
When we correlate with the _total_ ex-situ mass, we find an opposite behaviour in TNG50. Galaxies \(\lesssim 5\times 10^{10}\,\mathrm{M}_{\odot}\) and with higher total ex-situ fractions are on average more extended (see Figure 2 in Appendix C).
Compact galaxies between \(5\times 10^{9}\,\mathrm{M}_{\odot}\) and \(5\times 10^{10}\,\mathrm{M}_{\odot}\) have more in-situ and migrated mass in the center, and therefore higher central densities (and black hole masses, see above). As a matter of fact, this difference is also seen for quenched vs. star forming and bulgey vs. disky galaxies, even though to a lesser extent. This stems from the fact that generally star forming galaxies tend to be more disky and hence more extended and vice versa.
### Stellar population and dynamical properties
Are there distinguishable features in the stellar population and dynamical properties of the in-situ, migrated and ex-situ stars? The short answer is yes, especially for galaxies \(\lesssim 10^{11}\,\mathrm{M}_{\odot}\), where the majority of ex-situ stars originate from lower mass satellites.
#### 4.2.1 Average age, metallicity and [Mg/Fe] of central stars
Metallicities, ages and magnesium-to-iron abundances [Mg/Fe] of stars encode information about their birth places. Figure 7 illustrates average quantities of stellar populations belonging to the in-situ, migrated and ex-situ origin as a function of their galaxy's stellar mass. We also show separate relations for migrated stars that have birth radii larger than \(1\,\mathrm{kpc}\) to exclude the majority of migrated stars that were born close to the center, which dominate the average stellar population properties (see smoothly migrated stars in Figure 3).
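The quantities shown in Figure 7 are, per galaxy, mass-weighted averages over the particles of a given origin; a minimal sketch of this averaging (with hypothetical per-particle arrays and mock values), including the cut on birth radius used for the dashed line, is:

```python
import numpy as np

def mass_weighted_average(quantity, particle_mass):
    """Mass-weighted mean of a stellar population property."""
    return np.average(quantity, weights=particle_mass)

# Hypothetical per-particle values for the migrated stars of one galaxy.
rng = np.random.default_rng(3)
metallicity  = rng.normal(0.1, 0.3, 300)       # [Fe/H]-like metallicity in dex
age          = rng.uniform(1.0, 12.0, 300)     # Gyr
mass         = np.full(300, 8e4)               # Msun
birth_radius = rng.lognormal(0.0, 1.0, 300)    # kpc

avg_met_all = mass_weighted_average(metallicity, mass)
# Restriction to migrated stars born beyond 1 kpc (dashed line in Figure 7):
far = birth_radius > 1.0
avg_met_far = mass_weighted_average(metallicity[far], mass[far])
avg_age_far = mass_weighted_average(age[far], mass[far])
```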
_Metallicity (Figure 7, left panel)_: Stars in the central \(500\,\mathrm{pc}\) follow a mass-metallicity relation, where galaxies at the lowest mass end (\(5\times 10^{8}\,\mathrm{M}_{\odot}\)) have on average solar metallicity and galaxies at the highest mass end (\(\sim 10^{12}\,\mathrm{M}_{\odot}\)) have metallicities of around \(0.5\,\mathrm{dex}\). The total mass-metallicity relation of all the stars in the center is very close to the one for the in-situ population only, as they dominate the mass in the center of galaxies on average (see Figure 4).
Furthermore, the average metallicity for central stars is consistently offset by about \(0.3\,\mathrm{dex}\) towards higher metallicities across the whole galaxy mass range compared to the mass-metallicity relation which takes into account all the stars belonging to a given galaxy. This emphasizes the self-similarity of galactic chemical enrichment.
On top of that, the total central mass-metallicity relation is tight having a scatter of around \(0.1\,\mathrm{dex}\), which also holds when only in-situ or only migrated stars are considered. Hence, there is little galaxy-to-galaxy variation at fixed stellar mass regarding in-situ star formation.
The average metallicity of the in-situ population is the highest, followed by the migrated stars, which is less than a quarter dex lower across the whole galaxy mass range. This small difference is expected as most of the migrated stars are born very close to the center (\(0.5-1\,\mathrm{kpc}\)). When including only migrated stars with large birth radii (\(>1\,\mathrm{kpc}\)), the difference becomes larger to about half a dex due to internal metallicity gradients present in galaxies, which is in turn caused by less efficient star formation in the galactic outskirts. Above galaxy stellar masses of \(2\times 10^{11}\,\mathrm{M}_{\odot}\) the average metallicity of all migrated stars and only those with birth radii larger than \(1\,\mathrm{kpc}\) becomes similar again. This is because migrated stars from clumps are dominating at these galaxy masses, which originate from larger distances (median distance is \(30\,\mathrm{kpc}\)) and have high metallicities (median metallicity is \(0.2\,\mathrm{dex}\)).
The mass-metallicity relation for the central ex-situ stars follows a steeper slope than the one for the in-situ and migrated stars, because we are showing the mass of the \(\mathrm{z}=0\) host galaxy and not of the galaxy they were born in. The average metallicity of ex-situ stars is around \(0.5\,\mathrm{dex}\) lower at the lowest galaxy masses compared to the metallicity for the in-situ stars. At the highest mass end the average metallicity of the ex-situ stars becomes close to the one for the migrated stars, which is around \(0.25\,\mathrm{dex}\). This steeper slope emphasizes that ex-situ stars in the center of low mass galaxies originate from galaxies of even lower mass, while most of the central ex-situ stars in high mass galaxies originate from galaxies of more similar stellar mass.
Lastly, the galaxy-to-galaxy variation at fixed galaxy stellar mass for the average metallicity of ex-situ stars is much larger compared to the in-situ and migrated population. The scatter varies from around one dex at the low mass galaxy end to close to a quarter dex for the highest galaxy masses. This emphasizes that at lower host galaxy stellar mass, a larger variety of satellite galaxies (i.e. with different stellar masses) can deposit stars in the center of their respective z = 0 hosts.
_Age (Figure 7, middle panel)_: The ex-situ stars have a rather constant, old age of around \(10\,\mathrm{Gyr}\) across the whole galaxy mass range, albeit with a large scatter of around \(2\,\mathrm{Gyr}\). This is not surprising as most mergers happen before the redshift of one, which corresponds to a lookback time of around \(8\,\mathrm{Gyr}\). The flat relation for the average age of the ex-situ stars is not in conflict with their corresponding mass-metallicity relation. Because high mass galaxies are more efficient in chemical enrichment than low mass galaxies, they will consequently have higher metallicities at fixed stellar age.
The median relations for the average age for the in-situ and migrated stars are again similar to each other with the in-situ stars being slightly older by around \(1\,\mathrm{Gyr}\) or less at fixed galaxy stellar mass. Overall, in-situ and migrated stars are younger, with average ages between \(3\) and \(6\,\mathrm{Gyr}\), in the lowest mass galaxies (\(\sim 10^{9}\,\mathrm{M}_{\odot}\)), and become increasingly older with average ages of around \(8-10\,\mathrm{Gyr}\) at the highest mass end (\(\sim 10^{12}\,\mathrm{M}_{\odot}\)).
The scatter of the average ages for the in-situ and migrated stars is much larger than their corresponding variations in metallicity. This could have multiple reasons, for example: different pathways in star
formation histories (i.e. star formation rate as a function of time) can result in the same metallicity but different average ages, or the metallicity enrichment starts to saturate once a metallicity above solar is reached, and it therefore does not matter if star formation continues for another few Gyr.
Galaxies above \(10^{11}\,\mathrm{M}_{\odot}\) in stellar mass exhibit a larger scatter of the average age of their migrated population compared to their in-situ stars. This arises because migration to the center in this regime is dominated by clumps, which have a rather flat formation time distribution with the majority forming between 4 and 10 Gyr ago (see Figure 11).
Below galaxy stellar masses of \(10^{11}\,\mathrm{M}_{\odot}\), the migrated stars that were born at distances larger than 1 kpc have a running median of average ages that is around \(1-2\) Gyr older than that of the total migrated population. As these stars need significantly more time to arrive in the center, their ages are consequently older.
_[Mg/Fe] (Figure 7, right panel)_: In extragalactic studies magnesium is the predominant \(\alpha\)-element present in optical spectra (see e.g. Martin-Navarro et al., 2018, 2019, 2021; Gallazzi et al., 2021). We therefore show the running median of the mass-weighted average magnesium-to-iron abundance as a function of galaxy stellar mass as a proxy for the total \(\alpha\)-to-iron abundance. This abundance ratio provides, to first order, information about the star formation time scale before type Ia supernovae significantly enrich the interstellar medium with iron-peak elements8.
Footnote 8: Influences on [\(\alpha\)/Fe] due to IMF (initial mass function) changes are not captured in the simulation, as a Chabrier IMF (Chabrier, 2003) is assumed for every stellar particle
The average central [Mg/Fe] is almost constant for the in-situ population across the whole galaxy mass range with a value of about 0.3 dex. For galaxies between \(2\times 10^{9}\,\mathrm{M}_{\odot}\) and \(6\times 10^{10}\,\mathrm{M}_{\odot}\), the migrated stars have slightly lower values. The lowest average [Mg/Fe] of around 0.25 dex is reached for galaxies around \(2\times 10^{10}\,\mathrm{M}_{\odot}\). This directly maps to the increased difference of the average age between in-situ and migrated stars of around 1 Gyr in the same mass regime. Hence, in-situ stars of these galaxies form on average earlier and more rapidly as opposed to their migrated stars.
Above \(6\times 10^{10}\,\mathrm{M}_{\odot}\), the average [Mg/Fe] for the migrated populations rises above the one for the in-situ stars to around 0.35 dex at the highest mass end. This cross-over is not seen in the average ages. An explanation for this could be that in the high galaxy mass regime, an increasing number of migrated stars can originate from larger distances and possibly formed from stripped gas of merging lower mass systems (see Section 3.2), which have larger [Mg/Fe] values due to lesser efficiency in chemical enrichment.
When only including migrated stars originating from distances farther than 1 kpc away from the center, the average [Mg/Fe] becomes larger by around 0.1 dex across all galaxy stellar masses. For galaxies below \(\sim 10^{11}\,\mathrm{M}_{\odot}\) the corresponding ages become older, which is thus consistent with these stars having formed from true in-situ gas of their respective host galaxies.
The age of migrated stars with birth radii \(>1\) kpc in galaxies above \(10^{10}\,\mathrm{M}_{\odot}\) does not increase, even though their [Mg/Fe] does. This could indeed provide evidence for some migrated stars having formed from stripped gas in galaxies of this mass regime.
The ex-situ stars have overall higher average [Mg/Fe] values of around 0.45 dex, which decreases to around 0.35 dex for galaxies above \(10^{11}\,\mathrm{M}_{\odot}\) in stellar mass. This is consistent with their old ages and being formed in lower mass satellite galaxies that produced stars less efficiently than their respective z = 0 hosts.
The scatter in average [Mg/Fe] for the ex-situ population is significantly larger than for the in-situ and migrated population across all galaxy masses, but especially for galaxies \(\lesssim 10^{11}\,\mathrm{M}_{\odot}\). The onset of type Ia supernovae probably creates more stochasticity in lower mass galaxies, as single supernova events can significantly enrich the interstellar medium of the entire host galaxy.
Figure 7: **Average stellar population properties of in-situ, migrated and ex-situ stars in the central 500 pc of TNG50 galaxies at z = 0**.**_From left to right:_ The central mass-weighted average metallicity, age and magnesium-to-iron abundance [Mg/Fe] as a function of total galaxy stellar mass for the in-situ (_pink_), migrated (_orange_) and ex-situ (_blue_) stars. The shaded bands depict the 16th and 84th percentile. The dashed orange line only shows the quantities for migrated stars that have birth radii larger than 1 kpc. Overall, the stellar population properties of in-situ and migrated stars are similar, whereas the ex-situ stars are more metal-poor, older and have higher [Mg/Fe] across the galaxy mass range.
#### 4.2.2 Stacked circularity distributions
Different birth origins also leave imprints on the stars' dynamics, which can still be visible at the present day. We investigate such imprints by quantifying the instantaneous circularity \(\epsilon\) of stars (see Zhu et al., 2021). Circularities close to one indicate circular orbits, values around zero indicate random motion dominated orbits and negative ones show counter-rotating orbits. We then compute the normalized circularity distribution for each galaxy with a bin size of 0.1 for \(\epsilon\). Circularity distributions are then stacked together according to the galaxy's total stellar mass in bins of approximately 0.5 dex and are re-normalized.
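A minimal sketch of this per-galaxy normalization, stacking and re-normalization is shown below; the inputs are hypothetical mock arrays, and the computation of \(\epsilon\) itself (following Zhu et al. 2021) is not reproduced here:

```python
import numpy as np

eps_bins = np.arange(-1.5, 1.6, 0.1)   # circularity bins of width 0.1

def stacked_circularity(eps_per_galaxy, galaxy_mass, m_lo, m_hi):
    """Normalize each galaxy's circularity histogram, stack galaxies in the
    given mass bin, and re-normalize the stack (as done for Figure 8)."""
    stack = np.zeros(len(eps_bins) - 1)
    for eps, m_gal in zip(eps_per_galaxy, galaxy_mass):
        if not (m_lo <= m_gal < m_hi):
            continue
        hist, _ = np.histogram(eps, bins=eps_bins)
        if hist.sum() > 0:
            stack += hist / hist.sum()
    return stack / stack.sum() if stack.sum() > 0 else stack

# Hypothetical data: per-galaxy arrays of central-star circularities.
rng = np.random.default_rng(4)
eps_per_galaxy = [np.clip(rng.normal(0.2, 0.4, 500), -1.5, 1.5) for _ in range(100)]
galaxy_mass    = 10**rng.uniform(8.7, 12.0, 100)

dist = stacked_circularity(eps_per_galaxy, galaxy_mass, 1e9, 3e9)
```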
The results for the different in-situ, migrated and ex-situ populations are displayed in Figure 8. The lines in Figure 8 trace the peak of the circularity distributions across galaxy stellar mass, separately for disky and bulgey galaxies.
The circularity distribution of the in-situ population is centered on random motion dominated orbits for galaxies with stellar masses smaller than \(3\times 10^{9}\,\mathrm{M}_{\odot}\) and larger than \(10^{11}\,\mathrm{M}_{\odot}\). Galaxies with stellar masses in between have a circularity distribution with a peak shifted towards slightly higher circularities of around 0.25. We see that this shift is caused by galaxies that are overall disky, as the bulge dominated galaxies have a circularity peak that stays around zero. Nevertheless, the in-situ stars are in summary on warm to hot orbits even for disk dominated galaxies, which is not surprising as the velocity dispersion generally rises towards the center of galaxies.
For galaxies below \(10^{10}\,\mathrm{M}_{\odot}\) the stacked circularity distributions for in-situ stars have a sharper peak, whereas galaxies of higher masses have an overall broader distribution in \(\epsilon\). This could be an indication of a smaller galaxy-to-galaxy variation of the circularity distribution in the center of the smallest galaxies, regardless of whether they are disky or bulgey, as in this mass regime the absolute numbers of those two galaxy types are approximately the same in TNG50. At the high mass end on the other hand, a broader circularity distribution could indicate that in-situ stars become redistributed in their orbits due to the increased influence of mergers and contribution of ex-situ stars.
For the migrated population the circularity distribution is again centered on random motion orbits for galaxies \(<3\times 10^{9}\,\mathrm{M}_{\odot}\) and \(>10^{11}\,\mathrm{M}_{\odot}\), although now the distribution is also overall broader for the low mass galaxies. Migrated stars in intermediate mass galaxies are on even higher circularity orbits than their corresponding in-situ stars, reaching a peak of around 0.5 for galaxy stellar masses of \(\sim 10^{10}\,\mathrm{M}_{\odot}\). This peak is seen in disk and bulge dominated galaxies alike. Hence, migrated stars tend to have the most rotational support for galaxies in the intermediate mass regime, which could be an indication of migration being caused by different mechanisms across the galaxy mass range in TNG50. However, migrated stars are also on average younger than the in-situ stars in these galaxies, which might be the reason why they are still on more circular orbits (see Figure 7). We also point out that some galaxies above \(\sim 3\times 10^{10}\,\mathrm{M}_{\odot}\) have a distinctly double-peaked circularity distribution (one peak around zero and one around 0.5 or higher), which is washed out in Figure 8 due to the stacking. These stars originate from (recently) migrated clumps.
Ex-situ stars have circularities centered around zero across the entire galaxy mass range in TNG50, for both disk and bulge dominated galaxies. Because they originate from stochastic merger events, these stars are on average put on hot, random motion dominated orbits. Nevertheless, we see a large scatter throughout the circularity distributions for the ex-situ stars in the different galaxy stellar mass bins, indicating a lot of individual galaxy-to-galaxy variation. Depending on the exact time the merger occurred and how the orbits between the host and merging satellite were configured, ex-situ stars can very well retain some rotational support and can often be on counter-rotating orbits.
#### 4.2.3 2D distributions of ages, metallicity and circularities
In Figures 9 and 10 we show the 2D distributions of age and metallicity, and of age and circularity, of the central in-situ, migrated and ex-situ stars respectively, in stacks of galaxy stellar mass. For each galaxy we first compute the mass-weighted and normalized 2D histogram of the respective quantities with bin sizes of 0.5 Gyr for age, 0.25 dex for metallicity and 0.1 for circularity. We then stack those according to the total stellar mass bin of the galaxies, normalize again
Figure 8: **Average differences in the dynamical properties of in-situ, migrated and ex-situ stars in the central 500 pc of TNG50 galaxies at \(\mathbf{z=0}\).**_From left to right_: The central circularity distributions in stacks of total galaxy stellar mass for in-situ (_pink_), migrated (_orange_) and ex-situ (_blue_) stars. Per galaxy, stellar particles in the center belonging to either the in-situ, migrated or ex-situ population are binned according to their circularity \(\epsilon\) and normalized to unity respectively. They are then stacked together according to the displayed galaxy stellar mass bins and then normalized again. The lines trace the circularity bin with the maximum fractional mass across galaxy stellar masses divided by bulgey (_solid_) and disky (_dashed-dotted_) galaxies respectively. Clearly the migrated population has the most rotational support for galaxies around \(10^{10}\,\mathrm{M}_{\odot}\) in total stellar mass, regardless of the host being disk or bulge dominated.
and then compute the Gaussian kernel density estimate. The galaxy stellar mass bins are 0.5 dex wide for galaxies between \(10^{8.75}\,\mathrm{M}_{\odot}\) and \(10^{10.75}\,\mathrm{M}_{\odot}\). We stack all galaxies with stellar masses between \(10^{10.75}\,\mathrm{M}_{\odot}\) and \(10^{12}\,\mathrm{M}_{\odot}\) together as a finer binning did not reveal any mass-dependent trends and also became stochastic due to low number statistics in this mass regime. The five galaxies with stellar masses above \(10^{12}\,\mathrm{M}_{\odot}\) are not included. Additionally, we show the stacked age-metallicity distributions for quenched and star forming galaxies separately to avoid averaging over too many dissimilar galaxies in this parameter space. Similarly, we divide between bulge and disk dominated galaxies for the age-circularity distributions.
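The stacking of these 2D distributions can be sketched as follows; note that a simple Gaussian smoothing of the stacked histogram stands in for the kernel density estimate here, and all variable names and mock inputs are hypothetical:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

age_bins = np.arange(0.0, 14.5, 0.5)      # Gyr, 0.5 Gyr wide
met_bins = np.arange(-2.5, 1.25, 0.25)    # dex, 0.25 dex wide

def stacked_age_metallicity(ages, mets, weights, galaxy_mass, m_lo, m_hi):
    """Mass-weighted, per-galaxy-normalized 2D histograms, stacked over the
    mass bin and re-normalized; Gaussian smoothing stands in for the KDE."""
    stack = np.zeros((len(age_bins) - 1, len(met_bins) - 1))
    for a, z, w, m in zip(ages, mets, weights, galaxy_mass):
        if not (m_lo <= m < m_hi):
            continue
        h, _, _ = np.histogram2d(a, z, bins=[age_bins, met_bins], weights=w)
        if h.sum() > 0:
            stack += h / h.sum()
    if stack.sum() > 0:
        stack /= stack.sum()
    return gaussian_filter(stack, sigma=1.0)

# Hypothetical per-galaxy particle arrays for one origin (e.g. migrated stars).
rng = np.random.default_rng(5)
ages        = [rng.uniform(0.0, 13.8, 400) for _ in range(50)]
mets        = [rng.normal(0.0, 0.3, 400) for _ in range(50)]
weights     = [np.full(400, 8e4) for _ in range(50)]
galaxy_mass = 10**rng.uniform(9.75, 10.25, 50)

dist = stacked_age_metallicity(ages, mets, weights, galaxy_mass, 10**9.75, 10**10.25)
```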
With the 2D distributions we can observe a couple of new trends that are not necessarily apparent from the average stellar population properties in Figure 7 and the 1D circularity distributions of Figure 8.
_Age-metallicity (Figure 9)_: For star forming galaxies (bottom row of Figure 9), the average distribution of the migrated stars changes very little in shape and position, apart from shifting towards higher metallicities, from the lowest mass galaxies until the \(10^{10.5}\,\mathrm{M}_{\odot}\) galaxy stellar mass bin. They are centered between \(2-4\) Gyr. The ex-situ stars behave similarly and are centered around 12 Gyr. However, the average distribution of the in-situ stars shows an entirely different mass trend. While the in-situ stars are almost entirely coinciding with the migrated stars in age-metallicity space for the lowest mass bin, the peak of the in-situ distribution gradually shifts towards older ages from 2 Gyr to 8 Gyr for increasing galaxy mass. In the process the in-situ average age-metallicity distribution becomes more elongated around \(10^{9.5}\,\mathrm{M}_{\odot}\) in the age direction when focusing on the 20% contour.
For \(10^{10}\,\mathrm{M}_{\odot}\) galaxies the in-situ distribution becomes more centrally concentrated again. Furthermore, the average age-metallicity distributions of the three origins are maximally separated in this mass regime, with the migrated stars being the youngest (\(1-6\) Gyr for the 20% contour), followed by the in-situ stars (\(6-10\) Gyr for the 20% contour) at similar metallicity, and the ex-situ stars populating the oldest (\(10-13\) Gyr for the 20% contour) and most metal-poor tail. Thus, there must be a mechanism in these galaxies that halts in-situ star formation in their centers, while it continues outside of them in order to produce young migrated stars. It is likely that this is connected to the (kinetic) AGN feedback implemented in TNG, which quenches galaxies from inside-out (Nelson et al., 2021; see also Section 5.2 and Figure 11).
Starting at \(10^{10.5}\,\mathrm{M}_{\odot}\) the average age-metallicity distribution of the migrated stars also becomes more elongated towards older ages and above \(10^{10.5}\,\mathrm{M}_{\odot}\) coincides again with the one of the in-situ stars. The peak of the ex-situ distribution increases towards metallicities similar to those of the in-situ and migrated stars. For galaxies between \(10^{11}\,\mathrm{M}_{\odot}\) and \(10^{12}\,\mathrm{M}_{\odot}\) in stellar mass the average distributions for the in-situ, migrated and ex-situ stars become almost indistinguishable in age-metallicity space.
For quenched galaxies (top row of Figure 9) the behaviour of the age-metallicity distributions of the in-situ and migrated stars across galaxy stellar mass is different. They are not clearly separated in any galaxy stellar mass bin, as was the case for the star forming galaxies. Both the in-situ and migrated average age-metallicity distributions are more centrally concentrated than for star forming galaxies, their shapes are very similar to each other and their peaks are both at old ages (around 8 Gyr), exhibiting little galaxy mass dependence. The peak of the age-metallicity distribution for the migrated stars seems to be slightly younger for galaxies in mass bins between \(10^{9.5}\,\mathrm{M}_{\odot}\) and \(10^{10}\,\mathrm{M}_{\odot}\) and slightly older otherwise. Interestingly, at both the low and high mass end, the separation in metallicity between the in-situ and migrated stars is larger for the quenched galaxies than for the star forming ones. For galaxies between \(10^{11}\,\mathrm{M}_{\odot}\) and \(10^{12}\,\mathrm{M}_{\odot}\) the three distributions are again indistinguishable.
_Age-circularity (Figure 10)_: Stars with higher circularities are
Figure 9: **Age-metallicity distributions of central (500 pc) stars of TNG50 galaxies in stacks of stellar mass at \(z=0\).** Gaussian kernel density estimates for in-situ (_pink_), migrated (_orange_) and ex-situ (_blue_) stars encompassing 1%, 20%, 50% and 90% of all central stellar mass (_thickest to thinnest_) are shown in the respective galaxy stellar mass bin, increasing from left to right as depicted by the colorbar. The galaxy mass bins are centered on the indicated stellar mass and are 0.5 dex wide, except for the last panel, which is approximately one dex wide. Prior to stacking, the age-metallicity distribution of each galaxy is normalized. The top row shows quenched and the bottom row shows star forming galaxies respectively. In each panel the number of galaxies in the corresponding stellar mass bin is indicated. The galaxy-averaged age-metallicity distribution of the three origins becomes best separated around galaxies with stellar masses of \(10^{10}\,\mathrm{M}_{\odot}\).
usually younger. The distribution for migrated stars of disky galaxies (bottom row of Figure 10) in the lowest galaxy stellar mass bin peaks at around 2 Gyr with high circularity values of around 0.5, whereas the distribution for the in-situ stars peaks at older ages (6 Gyr), centered on circularity values of zero. Nevertheless, the 20% contour for the in-situ stars still has a tail towards younger ages and slightly above-zero circularities. In the next higher galaxy stellar mass bin the 20% contour of the distribution for the migrated stars loses its tail of older ages (4 - 8 Gyr) and zero circularities. The 20% contour for the in-situ stars is now centered on even older ages (8 Gyr). Beginning around galaxy stellar masses of \(10^{10}\,\mathrm{M}_{\odot}\), the 20% contour for the migrated stars elongates towards older ages, now spanning \(1-8\,\mathrm{Gyr}\), while roughly maintaining the high circularity. The distribution for the in-situ stars becomes broader and shifts slightly towards above-zero circularities. In the \(10^{10.5}\,\mathrm{M}_{\odot}\) galaxy stellar mass bin the peak of the migrated stars shifts from young (2 Gyr) to old (8 Gyr) ages with just a slight decrease in circularity. Above galaxies with \(10^{10.5}\,\mathrm{M}_{\odot}\) in stellar mass, the migrated stars switch from a rotationally supported distribution to a random motion dominated one, until they coincide with the age-circularity distributions of the in-situ and ex-situ stars at the highest galaxy masses.
The peak of the age-circularity distribution for the in-situ stars, albeit having the same young age as for the migrated stars in the lowest stellar mass bin, is near zero circularity. With increasing galaxy mass the age-circularity distribution for in-situ stars shifts towards old ages and becomes broader in the circularity direction, but stays mostly centered around zero circularity with perhaps a slight shift towards higher circularities around the \(10^{10}\,\mathrm{M}_{\odot}\) galaxy stellar mass bin as already observed in Figure 8. The age-circularity distributions for the ex-situ stars show practically no galaxy mass dependence; they are centered on random motion dominated orbits and the oldest ages.
The centers of bulge dominated galaxies (top row of Figure 10) above \(10^{9}\,\mathrm{M}_{\odot}\) have overall similar age-circularity distributions as disk dominated galaxies. However, the distribution for the migrated stars does not reach the same high circularities as for the disky galaxies, and its peak transitions more quickly to old ages (8 Gyr) between the mass bins of 9.5 and 10 dex. Below \(10^{9}\,\mathrm{M}_{\odot}\) both the migrated and in-situ distributions are centered on zero circularities and old ages, distinct from the disky galaxies.
For both bulge and disk dominated galaxies the average age-circularity distributions of the in-situ, migrated and ex-situ stars are well separated in the mass range between \(10^{9.5}\,\mathrm{M}_{\odot}\) and \(10^{10}\,\mathrm{M}_{\odot}\). This dependence of increasing circularity for younger ages, especially prominent for the migrated stars, gives an indication that recently (i.e. young) migrated stars travel to the center of their host galaxies by losing their angular momentum ("churning"; see e.g. Frankel et al., 2020, for the Milky Way disk) and then, once they have arrived in the center, become dynamically heated over time.
## 5 Discussion, Implications and Outlooks
In this section we discuss the implications of the studied mass assembly of the central 500 pc in TNG50 galaxies on the formation scenarios of central galaxy components. We also discuss the clumps found in TNG50 as well as the robustness of our results within the TNG modelling framework. In addition, we assess how our results on the stellar population and dynamical properties can be compared to observations and used to understand the mass build-up of galaxies in general.
Figure 10: **Age-circularity distributions of central (500 pc) stars of TNG50 galaxies in stacks of stellar mass at z = 0**. Gaussian kernel density estimates for in-situ (_pink_), migrated (_orange_) and ex-situ (_blue_) stars encompassing 1%, 20%, 50% and 90% of all stellar mass (_thickest to thinnest_) are shown in the respective galaxy stellar mass bin increasing from left to right as depicted by the colorbar. The galaxy mass bins are centered on the indicated stellar mass and are 0.5 dex wide, except for the last panel, which is approximately one dex wide. Prior to stacking the age-circularity distribution of each galaxy is normalized. The top row shows bulge dominated and bottom row shows disk dominated galaxies respectively. In each panel the number of galaxies in the corresponding stellar mass bin are indicated. The galaxy-averaged age-circularity distribution of the three origins becomes best separated around galaxies with stellar masses of \(10^{10}\,\mathrm{M}_{\odot}\).
### The build-up of galaxy centers in a \(\Lambda\)CDM cosmology
Throughout this paper, we have unravelled a set of relations between the properties of the stellar centers of galaxies at \(\mathrm{z}=0\). Galaxy centers are dominated by in-situ stars (see Figure 4) and follow well established relations (e.g. Gallazzi et al., 2005) that correlate their increasing stellar masses with increasing average ages and metallicities (see Figure 7). Stars that migrated to the center are second most abundant. They follow the trends for the in-situ stars, but are often distinctly younger and on more rotationally supported orbits. Ex-situ stars in the center only become significant (in mass) at high galaxy stellar masses (\(>10^{11}\,\mathrm{M}_{\odot}\)) (see Figure 4). The majority of ex-situ stars originate from the centers of the accreted galaxies (see also Gao et al., 2004). Moreover, they are amongst the oldest, most metal-poor and most random motion dominated stars (see Figures 7 and 8), which is in agreement with e.g. El-Badry et al. (2018), who studied three MW-like galaxies from the Latte project (Wetzel et al., 2016).
While these trends are consistent with our general understanding of galaxy formation in a \(\Lambda\)CDM cosmology, we find others that may be more surprising. For example, there seems to be no average difference between the central mass assembly of central and satellite galaxies (see Figure 6). Generally, central galaxies are thought to have accreted more satellite galaxies. We have checked this relation also for the total accreted mass within TNG50 and also found no significant difference between centrals and satellites on average. Thus, perhaps TNG50 is not probing enough very high mass central galaxies around stellar masses of \(10^{12}\,\mathrm{M}_{\odot}\), where this trend might become apparent.
Another, rather unexpected result compared to usual assumptions is that star forming galaxies above \(10^{10}\,\mathrm{M}_{\odot}\) possess on average more ex-situ mass in their centers than quenched ones (see Figure 6). Again, this difference, even though much less significant, remains when considering the total amount of ex-situ mass (see Figure 11). This trend also exists in the larger box of TNG100, thus ruling out low number statistics at the higher mass end as the cause, and is in contrast to the original Illustris simulation (see Rodriguez-Gomez et al., 2016, Figure 5). We have checked the median mass growth of the central ex-situ stars for star forming and quenched galaxies alike between stellar masses of \(10^{10}-10^{11}\,\mathrm{M}_{\odot}\) and found that quenched galaxies stop acquiring ex-situ mass in their centers after \(\mathrm{z}\sim 1.7\) (lookback time \(\sim 10\,\mathrm{Gyr}\)). Only if we split quenched galaxies further into bulgey and disky as well as barred and unbarred do we see that quenched, bulgey and unbarred galaxies have a similarly high ex-situ mass in their centers as their star forming counterparts. Together, this is an indication that the time of accretion, and consequently the absolute amount of stellar and gas mass of the secondary galaxy (the former will be higher at later cosmic times and the latter will influence the amount of newly formed stars during the merger process), matters for the build-up of ex-situ mass in the center of the primary and ultimately dictates what properties it has today.
On top of that, the fraction of in-situ, migrated and ex-situ stars in the center of galaxies has a significant scatter at fixed galaxy stellar mass, regardless of the galaxy's bulk properties at \(\mathrm{z}=0\) (see Figure 5). Hence, median trends for different galaxy populations only reveal half of the picture, as the stochasticity of galaxy mergers and interactions in a \(\Lambda\)CDM cosmology leads to diverse pathways in the build-up of stellar mass in the centers of galaxies. Thus, characteristic properties of galaxies at \(\mathrm{z}=0\) are only a limited indicator of the exact formation history of an individual galaxy. For example, there are perfectly regular MW-like spiral galaxies in TNG50 at \(\mathrm{z}=0\), of which some have experienced (multiple) major mergers and of which some had a more quiet assembly (see also Sotillo-Ramos et al., 2022).
This diversity in the central 500 pc of TNG50 galaxies potentially reflects the variety of central galaxy components seen in observations (see Section 1). Even though nuclear rings, disks and star clusters are at or below the resolution limit of TNG50, the stellar population and dynamical properties that we find for central stars of different origins might be a first indication that this would also manifest in structurally distinct components. For example, the distinctly high circularities of migrated stars in \(10^{9}-10^{10}\,\mathrm{M}_{\odot}\) galaxies (see Figure 10) reflect that nuclear disk-like configurations are able to arise. Even more intriguing are their predominantly younger ages of \(1-2\,\mathrm{Gyr}\) compared to the underlying old (\(\sim 8\,\mathrm{Gyr}\)) in-situ population, which is in line with observational findings of nuclear disks/rings (Bittner et al., 2020). Typically, the formation of nuclear rings in disk galaxies is associated with bars funnelling gas towards the center (see e.g. Seo et al., 2019; Tress et al., 2020; Sormani et al., 2020, for dedicated simulations). Even though we did not explicitly investigate the inflow of gas in this study, we see that the migration of stars to the center is likely connected to temporarily induced non-axisymmetries during galaxy interactions (see Section 5.2). Hence, this shows that mechanisms that are associated with producing distinct nuclear galaxy components are captured in TNG50. Follow-up zoom-in simulations of TNG50 galaxies would show if indeed nuclear components such as disks and rings form from these mechanisms (see Section 5.4).
### Mechanisms for the formation and deposit of stars in the center of galaxies
The cosmological framework of TNG50 produces diverse properties of galaxies and their centers. Consequently, the mechanisms that are responsible for the formation and deposit of stars in the centers of galaxies also have to be diverse.
To visualize possible mechanisms for the formation and deposit of stars in the center of galaxies we walk through the central assembly history of an individual galaxy as seen in Figure 11. We picked this particular galaxy, which has a stellar mass of \(10^{10.8}\,\mathrm{M}_{\odot}\) at \(\mathrm{z}=0\), as it shows many of the possible mechanisms that can be present in the formation of galaxy centers. This however does not mean that all galaxies show the same amount of complexity. Most galaxies will only exhibit one or two of these mechanisms with varying impact depending on their individual formation pathway. The galaxy's center at \(\mathrm{z}=0\) consists of around 50% in-situ, 30% migrated and 20% ex-situ stars.
The main summary of the subsequent sections and Figure 11 is the following: galaxy mergers and other interactions are probably the most important driver in central stellar mass assembly, as they also strongly influence the formation of in-situ stars. Due to the diverse statistics of galaxy interactions, many of the proposed formation scenarios of central galaxy components arise naturally and in conjunction to each other, when hierarchical galaxy formation is considered. Thus, TNG50 highlights the necessity to study the central mass assembly of galaxies in a cosmological context.
#### 5.2.1 In-situ stars
Galaxy mergers can trigger bursts of star formation as the tidal forces compress and shock gas efficiently (e.g Mihos & Hernquist, 1996; Di Matteo et al., 2007; Cox et al., 2008; Di Matteo et al., 2008), even in its nuclear region (Powell et al., 2013). While the relative enhancement of star formation rates depend on the specific configuration of the merging galaxies, e.g. merger mass ratio, gas content, orbital infall parameters, the times of intense star formation coincide with
Figure 11: **Central (500 pc) assembly history of an individual galaxy (SubfindID 184937) in TNG50 with a total stellar mass of 10\({}^{10.8}\) M\({}_{\odot}\). This galaxy encompasses many mechanisms that can shape the stellar mass build-up in the center of galaxies in a \(\Lambda\)CDM cosmology.**_Top panel_: Points show all individual stellar particles that belong to that galaxy at z = 0. Their distance at the time of birth is shown with respect to their current host in the case of in-situ formed stars (_light gray_: all in-situ particles, _pink_: central in-situ stars only, _orange_: central migrated stars only). In the case of the ex-situ formed stars the distance is shown with respect to their future host galaxy at the time of birth (_color-coded according to the colorbar_: all ex-situ stars, _blue_: central ex-situ stars only). The distance to individual satellite galaxies (only with maximum stellar masses above 10\({}^{8}\) M\({}_{\odot}\)) that will merge with the primary at some point are shown with thinner solid lines. Their coloring also follows the colorbar, which visualizes the merger mass ratio taken at the time t\({}_{\rm max}\), when the secondary galaxy reaches maximum stellar mass. The thick black solid line shows the radius of the FoF Group the galaxy belongs to at a given lookback time (represented as R\({}_{200}\), where the group's density is 200 times the critical density of the Universe). The thick gray dashed line shows the distance between the individual galaxy and the central galaxy of the FoF group it belongs. Approximately 7 Gyr ago the galaxy fell into another group and became a satellite galaxy. Before that it was the central of its own FoF group. The vertical black dotted line represents the time the kinetic AGN feedback starts to take effect, which quenches the center. This galaxy has 50% in-situ, 30% migrated (of which only 9% are'smoothly' migrated and the rest comes from migrated clumps) and 20% ex-situ stars in its center. _Bottom panel_: Histograms of formation times of in-situ (_top_), formation (_solid_) and arrival (_dashed-dotted_) times at the center for'smoothly' migrated (_middle_) as well as ex-situ and 'clumpy' migrated (_bottom_) stars. Additionally, in the panel for the in-situ stars, we mark the time of coalescence for the six most massive mergers of this galaxy with thin blue colored solid lines. According to the colorbar of the top panel, a darker blue means a higher merger mass ratio. Pericenter passages for two mergers are shown by thin dashed lines following the same colorcode. The approximate time of the galaxy falling into its z = 0 FoF group is shown by the thick black solid line and the onset of the kinetic AGN feedback is shown as the black dotted line. In the panel for the ‘smoothly’ migrated stars we also show the A\({}_{2}\) mode of the stars for a given lookback time (see Appendix A.1 for a definition). In the panel for the ex-situ and ‘clumpy’ migrated stars, we show the time of coalescence of the two mergers that deposited ex-situ stars in the galaxy’s center (_blue solid lines_) as well as the three pericenter passages of the galaxy around its central galaxy after it became a satellite (_gray dashed line_).**
pericenter passages and coalescence (see also Sotillo-Ramos et al., 2022).
Peaks in the formation time of central in-situ stars in Figure 11 coincide with times of pericenters and coalescence of mergers that this galaxy has experienced. _Thus the formation history of the central in-situ stars is directly connected with the merger history of a galaxy._ However, it has to be further quantified whether also the _bulk_ of in-situ stars is formed during such events or if that actually happens in-between galaxy interactions. Nevertheless, it is clear that a variety of different mergers are able to produce peaks in the formation of central in-situ stars.
For example, the peak between 10 Gyr and 12 Gyr ago was induced by a very minor merger9 with a stellar merger mass ratio of around 0.02. At these high redshifts the primary still had a high gas fraction (\(\sim 80\%\)) and thus the minor merger was enough to trigger a peak of star formation in the center. Evidently, the formation of in-situ stars in the center decreased between 10 Gyr and 8 Gyr ago, as the amount of available gas decreased. Thus, in order to trigger another significant peak in in-situ star formation later on, the merger between 8 Gyr and 7 Gyr had to bring in a large amount of gas. While the ratio of the stellar mass between the secondary and primary was around one (and therefore a major merger) at the time when the secondary reached its maximum stellar mass, the secondary still had around 20 times more gas than the primary.
Footnote 9: We adopt the definition of Rodríguez-Gomez et al. (2016) for the calculation of merger ratios.
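For orientation, a minimal sketch of how such a stellar merger mass ratio can be evaluated is given below, assuming mass histories sampled at common snapshot times; the function name, array layout and toy histories are placeholders, not the actual TNG50 merger-tree interface.

```python
import numpy as np

def merger_mass_ratio(t, m_star_primary, m_star_secondary):
    """Stellar merger mass ratio following the convention quoted above
    (Rodriguez-Gomez et al. 2016): both masses are evaluated at t_max,
    the time when the secondary reaches its maximum stellar mass.
    All inputs are 1-D arrays sampled at the same snapshot times t."""
    i_max = np.argmax(m_star_secondary)          # index of t_max
    mu = m_star_secondary[i_max] / m_star_primary[i_max]
    return mu, t[i_max]

# toy usage with made-up mass histories (Gyr, Msun)
t = np.linspace(0.0, 8.0, 30)
m_primary = 1e10 * (1.0 + t / 4.0)
m_secondary = 2e9 * np.exp(-0.5 * ((t - 5.0) / 1.5) ** 2)  # peaks, then gets stripped
mu, t_max = merger_mass_ratio(t, m_primary, m_secondary)
print(f"merger mass ratio ~ 1:{1.0 / mu:.0f} at t_max = {t_max:.1f} Gyr")
```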
Furthermore, this major merger, as well as two smaller ones that coalesced around 6 Gyr ago, happened while the primary galaxy was in the process of falling into another FoF Group, i.e. transitioning from being a central galaxy of its own FoF group to being a satellite galaxy of another FoF group. This is seen by the two sharp jumps in R\({}_{200}\) between 9 Gyr and 7 Gyr ago. Evidently, this process produced another peak of in-situ star formation at around 7.5 Gyr ago, which could stem from the new, higher density environment. We have also seen in other galaxies, that were able to retain enough gas in their centers after infalling into a group, that in-situ star formation was triggered during the pericenter passages around the central until the galaxy became quenched. In such occasions, again tidal forces are able to compress the gas efficiently.
Within TNG50, there are two main processes that can quench the in-situ star formation in the center of galaxies. The first one is the onset of the _kinetic_ AGN feedback mode implemented in TNG. Often this feedback mode switches on after a merger has been completed, as is the case for our galaxy in Figure 11 at around 7 Gyr, shortly after the major merger coalesces. We see that only the central 1 kpc becomes quenched, while the outskirts of the galaxy continue to form stars. This is because in TNG, AGN driven quenching proceeds from inside out (see Weinberger et al., 2017; Nelson et al., 2019, 2021, for details). After this mode is switched on, only occasional gas-rich mergers or migrated clumps are able to bring in new gas to the center and cause new in-situ star formation, as seen at 5.5 Gyr in Figure 11. Lastly, the thermal feedback mode, which is often active before the kinetic mode switches on and also injects relatively more energy, is not responsible for quenching the centers of galaxies in TNG50 (Zinger et al., 2020).
The second process that will shut down star formation in the center of TNG50 galaxies is when the galaxy as a whole becomes quenched, either through environmental processes, e.g. after a few pericenters after infall into a group (as is the case for the galaxy in Figure 11 around 3 Gyr ago) or through AGN feedback, which is primarily important for the highest mass galaxies (see also Donnari et al., 2021).
#### 5.2.2 Migrated stars
The formation times of 'smoothly' migrated stars in Figure 11 are closely related to the formation times of the in-situ stars, which is not the case for the 'clumpy' migrated stars. This is reasonable, because the majority of 'smoothly' migrated stars are born already close to the center (\(\lesssim 2\) kpc), while the 'clumpy' migrated stars formed predominantly in the outer disk. The 'clumpy' migrated stars make up 91% of the total mass of migrated stars in the center of this galaxy.
However, star formation in the central 2 kpc is _not_ a guarantee of producing a significant amount of 'smoothly' migrated stars, as seen between lookback times of 9 - 11 Gyr and also between 7 - 7.5 Gyr in Figure 11. Thus, specific conditions must be met that transport stars from around 1 - 2 kpc to the center.
Non-axisymmetric features, such as spiral arms and bars, are well known to be able to diffuse the angular momentum of stars and cause radial migration (e.g. Sellwood and Binney, 2002; Minchev and Famaey, 2010). While this effect is mainly studied in the (outer) disk of galaxies, we show here in Figure 11 that similar non-axisymmetries are likely responsible for the inward migration of stars to the center of galaxies. We see that peaks in the A\(_{2}\) mode (see Appendix A.1 for a definition) of the stellar mass distribution occur _before_ peaks of migrated stars arriving in the galaxy center. We detect similar enhancements for the Fourier modes of the _gas_ mass distribution (see also Di Matteo et al., 2007).
These temporary enhancements of non-axisymmetric features are clearly induced during galaxy interactions and the exerted torques on the gas and stars can produce these 'smoothly' migrated stars. This also indicates that it is possible for a galaxy to have experienced migration events of stars, even if the galaxy itself does not exhibit any signs of bar- or spiral-like features today.
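A minimal sketch of how such a bar-strength indicator can be computed from particle data is given below; it assumes the standard global \(m=2\) Fourier amplitude, \(A_{2}=|\sum_{j}m_{j}e^{2i\phi_{j}}|/\sum_{j}m_{j}\), which may differ in detail (e.g. radial binning) from the definition used in Appendix A.1, and all particle arrays are placeholders.

```python
import numpy as np

def a2_mode(x, y, mass):
    """Global m = 2 Fourier amplitude of the (stellar or gas) mass
    distribution in the disk plane: A2 = |sum_j m_j exp(2i phi_j)| / sum_j m_j."""
    phi = np.arctan2(y, x)
    return np.abs(np.sum(mass * np.exp(2j * phi))) / np.sum(mass)

# toy example: an axisymmetric disk gives A2 ~ 0, a bar-like perturbation does not
rng = np.random.default_rng(1)
r = rng.exponential(3.0, 50_000)                     # kpc
phi = rng.uniform(0.0, 2.0 * np.pi, r.size)
print("axisymmetric disk:", a2_mode(r * np.cos(phi), r * np.sin(phi), np.ones_like(r)))

phi_bar = rng.normal(0.0, 0.4, 10_000)               # particles clustered along a bar
phi_bar[rng.random(phi_bar.size) < 0.5] += np.pi
x = np.concatenate([r * np.cos(phi), 2.0 * np.cos(phi_bar)])
y = np.concatenate([r * np.sin(phi), 2.0 * np.sin(phi_bar)])
print("disk + bar:", a2_mode(x, y, np.ones(x.size)))
```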
The 'clumpy' migrated stars form after the first pericenter passage of the galaxy around its central around 5.5 Gyr ago and arrive at the center shortly before the second pericenter passage around 2 Gyr later. Similarly, we have seen qualitatively for other galaxies that clumps formed rather recently (z \(<\) 1) are mainly induced by fly-bys, as these are still able to destabilize the disk significantly after the predominant merger phase of the Universe is over. However, clumps are also able to form without any significant galaxy interactions and a follow-up study is needed to characterize this further as well as establish overall the credibility of the formation of the clumps (see Section 5.3 for a further discussion).
#### 5.2.3 Ex-situ stars
The two mergers that are responsible for the majority of the ex-situ stars in the center of the galaxy in Figure 11, are the 1:1 and 1:10 merger that coalesced around 7 Gyr and 5.5 Gyr ago respectively. Both mergers brought in a comparable _total_ amount of stellar mass of around \(1.2\times 10^{10}\) M\({}_{\odot}\) and \(8.1\times 10^{9}\) M\({}_{\odot}\) respectively. However, the major merger deposited around 10 times less stars in the central 500 pc compared to the minor merger, i.e. 0.6% and 5% of their respective total stellar mass arrived in the center. This highlights that the merger mass ratio cannot be the only parameter determining the amount of ex-situ stellar mass that is deposited in the center of galaxies. We expect that the spin-orbit coupling of the primary and secondary galaxy as well as other orbital parameters play a role in this, as the exerted tidal forces and the influence of dynamical friction
differ for different configurations (see e.g. Renaud et al., 2009, for a study).
Around 67% and 93% of the ex-situ stars that arrived from the major and minor merger respectively were formed _after_ both satellite galaxies entered \(\mathrm{R_{200}}\) of the primary's FoF halo around 8.75 Gyr ago. As also all central ex-situ stars were born in the center (\(\sim 500\) pc) of their respective birth galaxies, this confirms that significant nuclear star formation is also triggered in the secondary galaxy after infall.
Most of the ex-situ stars in the center arrive there immediately after the merger coalesces. This is the case if their distance to the center of the primary was less than 500 pc at the time of stripping. Otherwise it can take up to 2 Gyr. Interestingly, the arrival of the 'clumpy' migrated stars at the center around 3.5 Gyr ago induced a second peak in the arrival of ex-situ stars from the minor merger into the center, albeit being ten times lower and hence not visible in Figure 11.
### The case of stellar clumps
Disk fragmentation can occur due to gravitational instabilities in a galaxy's gas-rich and turbulent disk (Toomre, 1964; Springel et al., 2005; Hopkins, 2013). This fragmentation can form highly star forming clumps, which have been reproduced in several studies using hydrodynamical galaxy simulations, either isolated or fully cosmological ones (e.g. Bournaud et al., 2007; Genel et al., 2012; Bournaud et al., 2014; Mandelker et al., 2014, 2017; Buck et al., 2017). The execution of these simulations was motivated by the discovery of the clumpy morphology in the rest-frame UV light of high redshift, star forming galaxies (e.g. Elmegreen et al., 2007; Guo et al., 2015). Therefore, these simulations are tailored to focus on clump formation in massive disk galaxies \(10^{10-11}\,\mathrm{M_{\odot}}\) at \(\mathrm{z}\geq 1\).
In observations, clumps have masses between \(10^{7}\,\mathrm{M_{\odot}}\) and \(10^{9}\,\mathrm{M_{\odot}}\), as well as sizes of 1 kpc or less. Clumps in the simulations are usually identified via regions of enhanced gaseous surface mass density or from mock stellar light images. In TNG50, the identification of clumps is (so far) a passive byproduct of the Subfind algorithm; nevertheless, the extracted baryonic mass distribution of the clumps peaks at \(10^{8}\,\mathrm{M_{\odot}}\) and the clumps exhibit overall high gas fractions (see Figure 11), in agreement with the other studies. However, all clumps in TNG50 have 3D baryonic half mass radii below 300 pc and therefore seem to be much more compact compared to observations and some simulation studies (see Figure 9 in Buck et al., 2017). The latter could be a result of the different treatments of star formation and feedback in the simulations or the clump identification, as the numerical resolution of TNG50 is largely comparable to those of the previous studies. Additionally, in TNG50 clumps seem to form continuously throughout cosmic time (see Figure 11), which has not been investigated in other studies of clump formation.
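As a point of reference, the 3D baryonic half-mass radius quoted above can be computed from particle data as in the following sketch; the particle arrays and the toy clump are placeholders rather than actual TNG50 catalogue fields.

```python
import numpy as np

def half_mass_radius(pos, mass, center):
    """3D half-mass radius: smallest radius around `center` that encloses
    half of the total (baryonic) mass carried by the particles/cells."""
    r = np.linalg.norm(pos - center, axis=1)
    order = np.argsort(r)
    cum = np.cumsum(mass[order])
    return r[order][np.searchsorted(cum, 0.5 * cum[-1])]

# toy clump: ~10^8 Msun of particles with a compact, roughly Gaussian profile
rng = np.random.default_rng(0)
pos = rng.normal(scale=0.15, size=(20_000, 3))   # kpc
mass = np.full(len(pos), 5e3)                    # Msun per particle
print(f"half-mass radius = {half_mass_radius(pos, mass, np.zeros(3)) * 1e3:.0f} pc")
```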
These clumps can migrate to the center of their host galaxies due to dynamical friction, as well as merge with each other while doing so (Bournaud et al., 2007; Dekel & Krumholz, 2013; Bournaud et al., 2014; Mandelker et al., 2014; Dekel et al., 2021). The migration time to the center is found to be of the order of a few hundred Myr, which is similar for clumps in TNG50, where most of the clumps arrive at their respective galaxy's center after \(1-2\) snapshots (\(\sim 200\,\mathrm{Myr}\)) (see Figure 11). This mechanism contributes to the formation of the bulge (e.g. Bournaud et al., 2007; Elmegreen et al., 2007; Dekel et al., 2009). Consequently, the properties of the clumps need to be specific, such that they survive their own internal stellar feedback and the tidal forces on their way to the center with enough stellar mass to significantly contribute to the formation of the bulge. In TNG50, the fraction of clumpy migrated stars relative to the total stellar mass in the center is greater than 40% for about 12% of galaxies with any clumpy migrated stars in their centers (all of those galaxies have stellar masses above \(10^{10.5}\,\mathrm{M_{\odot}}\); see Figure 11). Thus, according to TNG50, the stellar mass transported by the migration of clumps is likely not very important for bulge formation for the majority of high mass galaxies. Nevertheless, the clumps often retain a large amount of gas until they reach the center, or drag gas along (see the clump closest to the galaxy's center in the top right of Figure 11), from which stars might form. We have not checked explicitly if this increases the contribution of stellar mass in the center significantly, or if such gas is lost by directly funneling into the SMBH.
In contrast to that, some simulations report clump formation, but then no migration due to almost immediate dissolution or disruption (Hopkins et al., 2012, 2013; Mayer et al., 2016; Oklopcic et al., 2017; Buck et al., 2017). This is likely due to the different simulation set ups, as well as the exact implementation of stellar feedback, or simply because not enough galaxy diversity is probed with isolated or zoom-in galaxy simulations. For example, in Figure 11, we see that galaxies above \(10^{10}\,\mathrm{M_{\odot}}\) in stellar mass start to exhibit more than one clump on average, however significant amounts of clumps that are able to migrate to the center reside in galaxies with stellar masses around \(10^{11}\,\mathrm{M_{\odot}}\) and above. Hence, an investigation of clump formation and migration in a fully cosmological box is necessary to not only capture galaxies of different masses but also different galaxy assembly histories. In TNG50 mergers and other galaxy interactions, such as flybys, can trigger a significant amount of clump formation (although not exclusively; see also Di Matteo et al., 2008; Hopkins et al., 2013; Calabro et al., 2019, for similar reports).
Still, within TNG50 we want to exercise caution when it comes to the trustworthiness of clump formation and their exact properties. Only follow-up zoom-in simulations with higher resolution for different galaxies, as well as different treatment of star forming gas and stellar feedback (see e.g. Hopkins et al., 2013; Smith et al., 2018; Smith, 2021; Smith et al., 2021, for the influence of highly resolved star formation and different stellar feedback schemes in galaxy simulations), will allow for a more robust quantification of clumps in TNG50. Nevertheless, the clump formation in TNG50 is unlikely to be a numerical artifact in its entirety, as the adaptive mesh refinement naturally allows for smaller cell sizes in areas of high gas density.
### The predictive power of TNG50 at small(er) scales
While the TNG modelling framework is extremely successful in reproducing key observational results of galaxy populations, numerical resolution and the implementation of sub-grid physics are insuperable limitations of the physical model of galaxy formation. Regarding the former, we demonstrate in Figure 11 in Appendix D that the total stellar mass within the central 500 pc of galaxies in TNG50 is converging (see also Pillepich et al., 2019). When splitting the central mass into the contribution of in-situ, migrated and ex-situ stars, the start of convergence is more difficult to assess due to the fixed size of \(\mathrm{r_{cut}}\) (see Appendix D for a more detailed discussion) as well as the overall influence of resolution on the amount of accreted stellar mass (which should overall increase with resolution, see also Grand et al., 2021). Thus higher resolution runs are needed to fully determine the amount of convergence.
Higher resolution zoom-in simulations of some TNG50 galaxies - additionally with model variations of stellar feedback and a better resolved cold gas phase of the star forming gas - are certainly interesting and needed to properly evaluate the convergence of the central stellar mass, the formation of stellar clumps and observed nuclear galaxy components. Nevertheless, our study of TNG50 shows that
the cosmological context plays a major role in the assembly of galaxy centers, which is unlikely to become less significant with numerical resolution and other modelling aspects. Already at the resolution of TNG50 it is _rare_ to find a galaxy with _no_ ex-situ stars in its central 500 pc, which is only the case for around 9% of all galaxies in our sample spanning a range between \(5\times 10^{8}-5\times 10^{12}\,\mathrm{M}_{\odot}\). We note that this percentage remains the same, even if we do not impose a constraint on the physical size of galaxies included in our analysis. In fact, our selection of galaxies with half-mass radii \(>2\,\mathrm{kpc}\) does not impose any strong differential effects on our results. We have explicitly checked Figures 4 and 6, which show the same trends when galaxies with \(\mathrm{R}_{1/2}<2\,\mathrm{kpc}\) are included. Thus, we do not expect that any other results of our study are significantly affected by our choice.
Therefore, our findings highlight that the high density, nuclear regions of galaxies can survive tidal forces and contribute to the build-up of the centers of other galaxies, _including_ low-mass galaxies. Additionally, if the two merging galaxies are massive enough to both host a black hole, the merger of the central regions of galaxies will contribute to the growth of SMBHs (see e.g. Schweizer et al., 2018; Voggel et al., 2021, for a recent observational confirmation of such a system).
Currently there are no large-box, cosmological, hydrodynamical simulations (including TNG50) that can resolve smaller central galaxy components, such as nuclear star clusters (NSCs), until z = 0 (see Brown et al., 2018, that investigate NSC formation in a cosmological set-up until z = 1.5). However, by extending the trends in our study to even smaller scales, it is not impossible to think that the formation and evolution of NSCs are also governed by galaxy interactions. Even though the relative fraction of ex-situ stars on tens of pc scales is likely very small for the majority of galaxies, it is clear that the in-situ star formation and the migration of stars to the center is closely connected to the formation pathway of the entire galaxy (see Figure 11), because galaxy interactions are able to create the conditions needed to funnel gas and stars to the center. Therefore, it is important to treat NSC formation in the context of the hierarchical build-up of galaxies in a \(\Lambda\)CDM cosmology (Brown et al., 2018) and consider the influence of galaxy interactions in (semi-)analytical models (Leaman and van de Ven, 2021).
We have explicitly checked within TNG50 if we can make predictions at smaller scales than 500 pc, by repeating our entire analysis for \(\mathrm{r_{cut}}\) of 250 pc (approximately the softening length of TNG50) and 100 pc, as well as 1 kpc for a consistency check. In addition to TNG50-1 (the highest resolution), we repeated this for the two lower resolution realizations, which are TNG50-2 and TNG50-3 respectively. The results are shown in Figure 12 for galaxies between \(10^{10.5}\) and \(10^{11}\,\mathrm{M}_{\odot}\).
With decreasing size of the center (\(\mathrm{r_{cut}}\)) the absolute mass decreases for all three origins. However, the _fraction_ of the in-situ population decreases with decreasing central size, while the migrated fraction increases; a consequence of the smaller volume that is probed. At the same time, this behaviour is also affected by the resolution, which not only sets the absolute normalization of the mass fraction, but also the spatial size at which the relative contribution of the in-situ and migrated fraction swaps. We therefore conclude that a hypothetical higher resolution (TNG50-0) would increase (decrease) the in-situ (migrated) fraction below 250 pc due to the convergence behaviour of the absolute stellar mass at fixed aperture size. Similarly, the contribution of ex-situ stars will increase at a given aperture and also likely reach scales smaller than 500 pc (see Appendix D for more details).
This behaviour emphasizes that the contribution of all three origins will likely remain relevant on scales of 100 pc.
Figure 12: **Effects of numerical resolution and aperture size on the central stellar mass for in-situ, migrated and ex-situ stars in TNG50 galaxies with \(10^{10.5-11}\,\mathrm{M}_{\odot}\) in stellar mass at z = 0.** Lines show the median central stellar mass for in-situ (_pink_), migrated (_orange_) and ex-situ (_blue_) stars for four choices of \(\mathrm{r_{cut}}=0.1,\ 0.25,\ 0.5\) and 1 kpc and three resolution realizations of TNG50 (thicker lines indicate better resolution). TNG50-1 is the highest resolution (flagship), followed by TNG50-2 and -3, which have 2 and 4 times lower spatial resolution. The mass resolution is 8 and 64 times lower respectively and indicated by the dotted horizontal lines. The numbers indicate the respective central stellar mass _fraction_ in percent. Decreasing size of the center means decreasing stellar mass. However, the fraction of migrated mass increases, while the in-situ fraction consequently decreases. At a hypothetical higher resolution (TNG50-0) the latter effect would be lessened as more stellar mass is formed within a given aperture size. Similar trends are recovered for other galaxy masses.
### Galaxy centers as tracers of overall galaxy assembly
Unveiling the merger history of galaxies proves difficult to tackle outside our own Galaxy due to many reasons. Perhaps the most severe one is the fact that accreted material is not necessarily visually apparent in the forms of streams and shells (or any other form of irregularity), especially when the merger coalesced many Gyr ago.
Since deep photometry of galaxies (initially stacked for many galaxies) revealed the need for an additional Sersic component to accurately fit their surface brightness profiles beyond tens of kpc (e.g. Zibetti et al., 2004; Tal & van Dokkum, 2011; D'Souza et al., 2014), the focus of quantifying accreted material has primarily been on the outskirts of galaxies, i.e. their stellar halos (e.g. Monachesi et al., 2016; Merritt et al., 2016; Spavone et al., 2017; Huang et al., 2018; Spavone et al., 2020). It is understood that the excess of light at large galactic radii should mark the transition from the in-situ to ex-situ dominated areas of a galaxy, as (significant) stellar mass can be built up through minor merging there.
However, the new era of cosmological hydrodynamical simulations suggests that such a transition does not necessarily exist for every galaxy, as especially high mass galaxies can be dominated by ex-situ stars at all radii (Tacchella et al., 2019; Pulsoni et al., 2021). Furthermore, Remus & Forbes (2021) showed that the transition in surface brightness profiles traced by two Sersic fits does not correspond to the true in-situ and ex-situ dominated regions. Similarly, changes in kinematic profiles at large radii, which can, for example, be obtained with globular clusters (e.g. Arnold et al., 2014) or planetary nebulae (e.g. Pulsoni et al., 2018), do not, in general, correspond to transitions between in-situ and ex-situ dominated regions (Schulze et al., 2020; Pulsoni et al., 2021).
While detailed studies of stellar halos are certainly important and necessary, our study suggests that there lies potential in using the centers of galaxies to study their accretion history (see Figure 5). Not only are the centers the brightest region of a galaxy and hence deliver the highest quality data, but they are also increasingly covered in numbers by IFU surveys, which provide detailed kinematic and stellar population information (e.g. SAMI: Bryant et al., 2015, 2015). In particular, our results in Figures 9 and 10 show that in-situ and ex-situ stars in the center are (on average) well separated in age-metallicity-circularity space for galaxies with stellar masses \(\leq 10^{10.5}\) M\({}_{\odot}\). Newly developed techniques are able to extract such _distributions_ in ages and metallicity as well as circularities from IFU measurements, and already have been proven to be able to estimate the true underlying accreted stellar material much more realistically (Boecker et al., 2020, 2020; Zhu et al., 2020; Davison et al., 2021; Zhu et al., 2021).
Even though in-situ and ex-situ stars separate better in their stellar population and dynamical properties for these lower mass galaxies, the ex-situ stellar mass fraction in the central 500 pc is _on average_ a tenth of a percent; thus, picking up accreted signatures in the very centers will still be challenging even with these new techniques. However, low redshift IFU observations easily cover \(1-2\) half-light radii, which extend beyond 500 pc and hence should encompass more accreted material. More follow-up work will be needed to quantify the optimal extent needed from a galaxy's center to reliably pick up ex-situ fractions in observations.
On top of that, the large spread seen in the central ex-situ mass _at fixed total ex-situ mass_ (see Figure 5) points towards significant spatial variation of ex-situ stars in the host galaxy, regardless of whether the galaxy has accreted a lot of stellar material or not. It is likely that measuring these spatial variations, will inform us about the types of mergers that have happened. Typical characteristics could be the merger ratio, for example major mergers will have the ability to deposit more of their stars in the center of galaxies, but also the gas content or orbital infall parameters. We plan to exploit this in future work.
### Hints for (SDSS-like) observations
TNG50 predicts a diverse mass build-up of galaxy centers. What are the prospects to learn about a galaxy's central in-situ, migrated and ex-situ fraction from more "traditional" observations? For example, from SDSS DR7 (Abazajian et al., 2009), which provides single 3'' fiber spectra for the centers of hundreds of thousands of galaxies, _average_ ages, metallicities and [\(\alpha\)/Fe] abundances can be determined (Gallazzi et al., 2021). How much information do such measurements contain about the contribution of stellar populations of different origins to a galaxy's center?
In Figure 13 we show the mass-weighted average central age and metallicity for our sample of TNG50 galaxies color-coded by the mass fraction attributed to each origin (LOESS smoothed; Cappellari et al., 2013).
If the measured average central age and metallicity of a galaxy lie on the respective mass-age and mass-metallicity relation, the galaxy likely has a high fraction of in-situ stars in its center, except if it is larger than \(10^{11}\) M\({}_{\odot}\) in stellar mass. If the measured average metallicity is _below_ the 16th percentile at fixed stellar mass, it is more likely that the galaxy's center is dominated by ex-situ stars. Similarly, if the galaxy has an average age below the 16th percentile, it likely has a high amount of migrated stars in its center. High mass galaxies above \(10^{11}\) M\({}_{\odot}\) with significant amounts of ex-situ stars in their centers also have a slightly younger age (between the 16th percentile and the median) than the typical average galaxy in that mass regime.
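The reading rule described above can be condensed into a small sketch; the percentile relations and the example galaxy values below are placeholders and would in practice be measured from the TNG50 sample itself.

```python
import numpy as np

def flag_origin(logmass, age, feh, mass_grid, age_p16, feh_p16):
    """Qualitative reading rule from the text: compare a galaxy's mass-weighted
    average central age and metallicity to the 16th percentile of the
    population at its stellar mass."""
    a16 = np.interp(logmass, mass_grid, age_p16)
    z16 = np.interp(logmass, mass_grid, feh_p16)
    if feh < z16:
        return "likely ex-situ dominated center"
    if age < a16:
        return "likely high migrated fraction"
    return "likely in-situ dominated center" if logmass < 11.0 else "ambiguous (high mass)"

# placeholder percentile relations (illustrative numbers only)
mass_grid = np.linspace(8.7, 12.7, 20)
age_p16 = np.interp(mass_grid, [8.7, 12.7], [4.0, 10.0])   # Gyr
feh_p16 = np.interp(mass_grid, [8.7, 12.7], [-0.8, 0.0])   # dex
print(flag_origin(10.3, age=3.0, feh=-0.1, mass_grid=mass_grid,
                  age_p16=age_p16, feh_p16=feh_p16))
```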
Naturally, a proper mocking of observed average stellar population properties from TNG50 is needed to compare accurately to measurements from Gallazzi et al. (2021). However, Figure 13 suggests that such measurements provide some leverage in determining the fraction of stars with different origins in the centers of galaxies.
With respect to comparisons to the whole SDSS galaxy sample, it would be necessary to repeat the analysis of this paper for different spatial apertures. The fixed 3'' diameter of the SDSS fibers will already encompass larger physical sizes than 1 kpc for galaxies with z \(>0.02\). It would be interesting to understand how the relative contribution of stars from the different origins changes with greater spatial extent, especially for the ex-situ stars.
## 6 Summary and conclusions
Galaxies grow hierarchically in a \(\Lambda\)CDM universe. Their centers are the regions where usually the highest quality observations are available. What information about the hierarchical growth of galaxy formation is encoded in this observationally favourable region? To answer this, we investigated the central 500 pc mass assembly of galaxies ranging from \(5\times 10^{8}\) M\({}_{\odot}\) to \(5\times 10^{12}\) M\({}_{\odot}\) with half-mass radii \(>2\) kpc in the TNG50 simulation.
Stars that are found at the center of TNG50 galaxies at z = 0 originate from one of three possibilities: in-situ (formed inside the center), migrated (formed inside the host galaxy, but outside the center), ex-situ (brought in by mergers). Stars can migrate to the center either as a continuous distribution of individuals (smooth) or in clumps.
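A minimal sketch of this bookkeeping is given below; the field names (present-day and birth distances, a flag for stars born in the main progenitor) are placeholders for the corresponding merger-tree information, and the further split of migrated stars into 'smooth' and 'clumpy' requires additional subhalo membership data that is omitted here.

```python
import numpy as np

R_CUT = 0.5  # kpc, the central aperture used throughout (500 pc)

def classify_origins(r_now, r_birth, born_in_host):
    """Assign each star particle that sits inside the central aperture today
    to one of the three origin classes used in the text.
      r_now       : present-day distance to the host center [kpc]
      r_birth     : distance to the (future) host center at birth [kpc]
      born_in_host: True if the star formed in the main progenitor, False if accreted."""
    central = r_now < R_CUT
    in_situ = central & born_in_host & (r_birth < R_CUT)
    migrated = central & born_in_host & (r_birth >= R_CUT)
    ex_situ = central & ~born_in_host
    return in_situ, migrated, ex_situ

# toy catalogue with made-up distributions
rng = np.random.default_rng(2)
n = 100_000
r_now = rng.exponential(2.0, n)
r_birth = rng.exponential(4.0, n)
born_in_host = rng.random(n) < 0.9
masses = np.full(n, 8.5e4)  # Msun, roughly the TNG50 baryonic mass resolution

ins, mig, exs = classify_origins(r_now, r_birth, born_in_host)
m_cen = masses[r_now < R_CUT].sum()
for name, sel in [("in-situ", ins), ("migrated", mig), ("ex-situ", exs)]:
    print(f"{name:9s}: {100 * masses[sel].sum() / m_cen:5.1f} % of the central mass")
```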
For each origin we characterized their radius with respect to their host galaxy at birth to understand the travelling distances for the migrated stars as well as the spatial environment of ex-situ stars at the time of birth and deposit into the z = 0 host.
We then investigated the amount of the central stellar mass contained in each of the three origins and their relative contribution as well as their correlation to each other across the entire TNG50 galaxy mass range. Additionally, we studied differences in the central in-situ, migrated and ex-situ masses for different galaxy types at z = 0.
To address whether the different origins of central stars leave a discernible imprint on their (observable) features, we characterized and correlated their ages, metallicities, [Mg/Fe] abundances as well as dynamical properties with their distributions in circularity as a function of the galaxy's total stellar mass. We summarize our most important findings below:
* In-situ stars are on average the dominant component in stellar mass in the central 500 pc of TNG50 galaxies across the entire mass range of \(5\times 10^{8-12}\) M\({}_{\odot}\). Migrated stars contribute on average 20% to the total stellar mass in the center, where below (above) \(5\times 10^{10}\) M\({}_{\odot}\) in galaxy stellar mass smoothly (clumpy) migrated stars encompass their majority. The central stellar mass fraction of ex-situ stars becomes _on average_ non negligible above galaxy masses of \(5\times 10^{10}\) M\({}_{\odot}\) with a large scatter of up to 80%. However, it is the _exception_ to find a galaxy without _any_ ex-situ stellar mass in its central 500 pc, which is only the case for about 9% of galaxies in our total sample. (Figure 4)
* The majority of smoothly migrated stars originate close to the center (radii between 500 pc and 1 kpc), whereas \(\sim 15\%\) come from larger distances up until 10 kpc. Compared to that, clumpy migrated stars possess a distinctively different distribution of birth radii which peaks around \(20-30\) kpc for galaxies with stellar masses greater than \(5\times 10^{10}\) M\({}_{\odot}\). Most of the ex-situ stars originate in the central 1 kpc of their birth galaxies, where they remain until they are deposited inside their z = 0 host galaxy. (Figure 3)
* _At fixed galaxy stellar mass_ the amount of central ex-situ stellar mass exhibits a significant scatter between \(4-6\) dex, reflecting the stochasticity of the merger history of individual galaxies. In some cases, close to the _entire_ total amount of ex-situ stellar material ever deposited inside the host galaxy resides within the central 500 pc. (Figure 5)
* In TNG at z = 0, star forming galaxies with stellar masses above \(10^{10}\) M\({}_{\odot}\) have on average _larger_ ex-situ central stellar masses than their quenched counterparts. Only quenched galaxies that are additionally bulgey and have no bar signature show a rise of central ex-situ stellar mass above \(10^{10}\) M\({}_{\odot}\) similar to the star forming galaxies. Galaxies between \(5\times 10^{9}-5\times 10^{10}\) M\({}_{\odot}\) with an overmassive (undermassive) SMBH in the center are more compact (extended) and show on average a higher (lower) in-situ and migrated central stellar mass. There is _no_ difference in neither in-situ, migrated nor ex-situ central stellar masses for central or satellite galaxies. (Figure 6)
* Central ex-situ stars have on average the lowest metallicities, the oldest ages and the highest [Mg/Fe] abundances. The slope of their mass-metallicity relation is slightly steeper than that of the in-situ and migrated stars, and their mass-age relation is flat compared to the positive correlation between central age and galaxy stellar mass for the in-situ and migrated stars. Overall, the average stellar populations
Figure 13: **Information about the central (500 pc) fractional mass associated with in-situ, migrated and ex-situ stars contained in average age and metallicity measurements from TNG50 galaxies at z = 0.**_Top row_: Mass-weighted average metallicities for _all_ central stars as a function of the galaxy’s total stellar mass, but color-coded in each panel (_from left to right_) according to their fraction of in-situ, migrated and ex-situ stars. The colors are LOESS (Cappellari et al., 2013) smoothed to show the average around neighbouring galaxies. The thick dashed line shows the median relation in each panel, and the thin dotted lines show the 16th and 84th percentile respectively. _Bottom row_: The same but for the mass-weighted average age. Galaxies that are more metal-poor than the 16th percentile for their corresponding stellar mass are more likely to have high ex-situ fractions in their centers, while galaxies with high central migrated fractions are younger than the 16th percentile.
properties of in-situ and migrated stars are very similar, with in-situ stars being slightly more metal-rich and older. (Figure 7)
* The majority of central stars for galaxies with stellar masses below \(10^{9}\,\mathrm{M}_{\odot}\) and above \(10^{11}\,\mathrm{M}_{\odot}\) regardless of their origin are on random motion dominated orbits. For galaxies in between those stellar masses, the peak of the circularity distribution shifts by 0.5 (0.25) towards rotationally supported orbits for migrated (in-situ) stars _for both_ disk and bulge dominated galaxies, whereas ex-situ stars remain random motion dominated at all galaxy masses. (Figure 8)
* For star forming galaxies around \(10^{10}\,\mathrm{M}_{\odot}\) in stellar mass, in-situ, migrated and ex-situ stars clearly separate in age-metallicity space, while the distinction becomes less clear for star forming galaxies outside that mass range and quenched galaxies in general. (Figure 9)
* For both disk and bulge dominated galaxies between \(10^{9.5-10}\,\mathrm{M}_{\odot}\) in stellar mass, in-situ, migrated and ex-situ stars clearly separate in age-circularity space. The migrated stars are the youngest with the highest amount of rotational support and the ex-situ stars are the oldest and purely random motion supported, whereas the in-situ stars are situated in between. (Figure 10)
Furthermore, we have demonstrated the diversity of the central 500 pc of galaxies as governed by the hierarchical mass build-up in a \(\Lambda\)CDM universe. Galaxy interactions are an important driver in not only contributing ex-situ stars to the center of galaxies, but also in dictating the formation of in-situ stars and the migration of stars to the center. This leads to an entanglement of different mechanisms that influence the formation history of stars in the center of galaxies. In Figure 11 we have qualitatively identified these mechanisms that are present in TNG50, which includes episodes of in-situ star formation and stellar migration to the center during times of pericenter passages and/or coalescence of mergers or flybys, infall into galaxy groups/clusters as well as depletion of the central gas reservoir through kinetic AGN feedback and environmental effects.
In the future, higher resolution simulations (not only spatially but also concerning star formation and stellar feedback prescriptions) will be needed to fully address the formation and migration of stellar clumps and to study the formation of nuclear galaxy structures, such as nuclear disks and star clusters, in a fully cosmological context.
Bright galaxy centers have the potential to be used in observations as tracers of the overall galaxy assembly history. TNG50 predicts distinct stellar populations and dynamical properties for the stars of different origins in the center of galaxies, which can be observed with today's IFU capabilities. Figure 13 demonstrates that there is even promise to deduce the fractional contribution of central in-situ, migrated and ex-situ stars from SDSS-like observations in a galaxy population averaged sense.
In summary, TNG50 is a tremendous advancement in predicting the stellar build-up of sub-kpc scales in a fully cosmological context. Its predictive power is valuable to consider new pathways in modelling formation scenarios of central stellar components as well as to push forward novel observational techniques to unveil the formation history of galaxies.
## Data Availability
The IllustrisTNG simulations, including TNG50, are publicly available and accessible at www.tng-project.org/data (Nelson et al., 2019). Data directly related to this publication and its figures are available upon reasonable request from the corresponding author.
## Acknowledgements
AB is grateful to Ignacio Martin-Navarro and Jesus Falcon-Barroso for their support and scientific discussion throughout this project. AB also likes to thank Glenn van de Ven and Francisco Aros for useful discussions. We thank the anonymous referee for helpful comments on the manuscript. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 138713538 - SFB 881 ("The Milky Way System", subproject B08). NF was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number CITA 490888-16] through the CITA postdoctoral fellowship and acknowledges partial support from a Arts & Sciences Postdoctoral Fellowship at the University of Toronto. RR acknowledges funding from the Deutsche Forschungsgemeinschaft (DFG) through an Emmy Noether Research Group (grant number NE 2441/1-1). The IllustrisTNG simulations were undertaken with compute time awarded by the Gauss Centre for Supercomputing (GCS) under GCS Large-Scale Projects GCS-ILLU and GCS-DWAR on the GCS share of the supercomputer Hazel Hen at the High Performance Computing Center Stuttgart (HLRS), as well as on the machines of the Max Planck Computing and Data Facility (MPCDF) in Garching, Germany. The computations for this work were performed on the ISAAC cluster of the Max Planck Institute for Astronomy at the Rechenzentrum in Garching.
We investigate the origin of the stars in the centers of galaxies using the cosmological magnetohydrodynamical TNG50 simulation at $\mathrm{z}=0$, focusing on the inner $500\,\mathrm{pc}$ of galaxies with stellar masses of $5\times10^{8-12}\,\mathrm{M}_{\odot}$. Stars found in galaxy centers have three possible origins: 1) in-situ stars (born inside the center), 2) migrated stars (born inside the host galaxy but outside the center, arriving there later), and 3) ex-situ stars (accreted from other galaxies). In-situ and migrated stars make up on average 73% and 23% of the central stellar mass, respectively. Ex-situ stars contribute 1% in galaxies with stellar masses above $10^{11}\,\mathrm{M}_{\odot}$
2305.12764 | Global Symmetries and Effective Potential of 2HDM in Orbit Space | We extend the framework of analyzing the 2HDM in its orbit space to study the
one-loop effective potential before and after electroweak symmetry breaking. In
this framework, we present a comprehensive analysis of global symmetries of the
one-loop thermal effective potential in the 2HDM, demonstrating when the global
symmetries of the tree-level 2HDM potential are broken by loop contributions.
By introducing light-cone coordinates and generalizing the bilinear notation
around the vacuum, we present a geometric view of the scalar mass matrix and
on-shell renormalization conditions. | Qing-Hong Cao, Kun Cheng, Changlong Xu | 2023-05-22T06:35:20 | http://arxiv.org/abs/2305.12764v2 | # Global Symmetries and Effective Potential of 2HDM
###### Abstract
We extend the framework of analyzing the 2HDM in its orbit space to study the one-loop effective potential before and after electroweak symmetry breaking. In this framework, we present a comprehensive analysis of global symmetries of the one-loop thermal effective potential in the 2HDM, demonstrating when the global symmetries of the tree-level 2HDM potential are broken by loop contributions. By introducing light-cone coordinates and generalizing the bilinear notation around the vacuum, we present a geometric view of the scalar mass matrix and on-shell renormalization conditions.
## I Introduction
The Two-Higgs-Doublet Model (2HDM) is a simple extension of the SM [1]. It has received much attention for its potential to provide new sources of CP violation and strong first-order phase transition [2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. The most general tree-level 2HDM scalar potential
\[\begin{split} V=\;&m_{11}^{2}\Phi_{1}^{\dagger}\Phi_{1}+m_{22}^{2}\Phi_{2}^{\dagger}\Phi_{2}-\left(m_{12}^{2}\Phi_{1}^{\dagger}\Phi_{2}+h.c.\right)\\ &+\frac{1}{2}\lambda_{1}\left(\Phi_{1}^{\dagger}\Phi_{1}\right)^{2}+\frac{1}{2}\lambda_{2}\left(\Phi_{2}^{\dagger}\Phi_{2}\right)^{2}+\lambda_{3}(\Phi_{2}^{\dagger}\Phi_{2})(\Phi_{1}^{\dagger}\Phi_{1})+\lambda_{4}(\Phi_{1}^{\dagger}\Phi_{2})(\Phi_{2}^{\dagger}\Phi_{1})\\ &+\left(\frac{1}{2}\lambda_{5}\left(\Phi_{1}^{\dagger}\Phi_{2}\right)^{2}+\lambda_{6}(\Phi_{1}^{\dagger}\Phi_{1})(\Phi_{1}^{\dagger}\Phi_{2})+\lambda_{7}(\Phi_{2}^{\dagger}\Phi_{2})(\Phi_{1}^{\dagger}\Phi_{2})+h.c.\right)\end{split} \tag{1}\]
is parameterized by 14 real parameters. Here, \((m_{12}^{2},\lambda_{5},\lambda_{6},\lambda_{7})\) are in general complex while the others are real.
The CP conserving 2HDM, also called the real 2HDM, requires that all the parameters in Eq. (1) can be made real by a \(U(2)_{\Phi}\) basis transformation \(\Phi_{i}^{\prime}=U_{ij}\Phi_{j}\). Due to this freedom of field redefinition, the CP symmetry and other global symmetries of the potential are hard to determine directly from the parameters in Eq. (1), and one of the most efficient ways to analyze these symmetries is to use the bilinear notation [12; 13; 14; 15] of the 2HDM. This method involves expressing the tree-level 2HDM potential in terms of orbits of \(SU(2)_{L}\) gauge transformations, which can be combined to form a four-vector,
\[(K_{0},\vec{K})=K^{\mu}=\Phi_{i}^{\dagger}\sigma_{ij}^{\mu}\Phi_{j},\quad(\mu= 0,1,2,3). \tag{2}\]
In this notation, the \(U(2)_{\Phi}\) basis transformation of the Higgs doublets corresponds to an \(SO(3)_{K}\) rotation of the three space-like components of this four-vector, while CP transformations correspond to improper rotations in these three dimensions [11].
The bilinear notation serves as a convenient tool for examining the symmetries and vacuum conditions of 2HDM. However, its applications are usually restricted to tree-level potential and global structures. In this work, we establish a complete framework for discussing the 2HDM potential, by extending the bilinear notation of the 2HDM to address the properties of physical fields around the vacuum and one-loop effective potential including renormalization. Recently, it is shown in Ref. [16; 17] that the bilinear notation can be extended to Yukawa couplings, making it possible to express the 2HDM effective potential including fermion loop contributions in the bilinear notation [16]. With this approach, we express the effective potential entirely as a function of gauge orbits, and systematically analyze the
possible global symmetries of the effective potential. We generalize the bilinear notation to discuss physical fields after electroweak symmetry breaking (EWSB), and provide a geometrical description of scalar masses based on the light-cone coordinates in the orbit space. We demonstrate that the scalar mass matrix can be viewed as a distance matrix between two hyper-surfaces in the orbit space. Additionally, we translate the renormalization conditions in the field space into a set of geometrical conditions in the orbit space. Then numerous redundant renormalization conditions [18] that depend on the selection of background fields can be avoided. After the on-shell renormalization, we give a comprehensive effective field theory description of one-loop 2HDM effective potential for the first time.
In the rest of this paper, we first review the global symmetries of the tree-level 2HDM potential in the bilinear notation in Section II, and then examine whether these symmetries are preserved by one-loop corrections in Section III. We explore the relationship between the orbit space and the field space around the vacuum after EWSB in Section IV, and we demonstrate how to carry out the on-shell renormalization in the orbit space in Section V. Finally, we conclude in Section VI.
## II Basis invariant description of global symmetry
In this section, we introduce the basis transformations and CP transformations of the Higgs doublets, and the global symmetries of the 2HDM potential. If a 2HDM potential is invariant under some basis or CP transformations, then it possesses the corresponding symmetries. The bilinear notation is convenient for discussing these global transformations, because basis or CP transformations simply correspond to proper or improper rotations in the 3-dimensional space-like part of the orbit space [10; 11; 13; 14], and we refer to this 3-dimensional subspace as the \(\vec{K}\) space in the following.
### Global transformations and symmetries in the bilinear notation
We first consider the \(U(2)_{\Phi}\) basis transformations \(\Phi_{i}\to U_{ij}\Phi_{j}\). It is straightforward to see from Eq. (2) that an \(SU(2)_{\Phi}\) basis transformation corresponds to a rotation in the \(\vec{K}\)-space,
\[K_{0}\to K_{0},\ K_{a}\to R_{ab}(U)K_{b},\quad R_{ab}(U)=\frac{1}{2}{\rm tr} \left[U^{\dagger}\sigma_{a}U\sigma_{b}\right],\ a,b=1,2,3. \tag{3}\]
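The correspondence between basis changes and rotations can be verified numerically; the following sketch builds \(K^{\mu}\) from random doublet components, applies a random \(SU(2)_{\Phi}\) transformation, and checks that \(\vec{K}\) rotates with the matrix \(R_{ab}(U)\) of Eq. (3). It is only an illustrative check, not part of the formal derivation.

```python
import numpy as np

sigma = np.array([[[1, 0], [0, 1]],
                  [[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)     # sigma^0 .. sigma^3

def bilinears(phi):
    """phi[i] is the i-th Higgs doublet (a complex SU(2)_L 2-vector);
    returns K^mu = Phi_i^dagger sigma^mu_{ij} Phi_j as in Eq. (2)."""
    M = np.einsum('ia,ja->ij', phi.conj(), phi)          # M_ij = Phi_i^dag Phi_j
    return np.real(np.einsum('mij,ij->m', sigma, M))

rng = np.random.default_rng(0)
phi = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# random SU(2) basis change and the corresponding R_ab(U) of Eq. (3)
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
U = Q / np.sqrt(np.linalg.det(Q))
R = 0.5 * np.real(np.array([[np.trace(U.conj().T @ sigma[a] @ U @ sigma[b])
                             for b in (1, 2, 3)] for a in (1, 2, 3)]))

K = bilinears(phi)
K_prime = bilinears(np.einsum('ij,ja->ia', U, phi))      # Phi_i -> U_ij Phi_j
assert np.allclose(K_prime[0], K[0])                     # K_0 is invariant
assert np.allclose(K_prime[1:], R @ K[1:])               # K_a -> R_ab K_b
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
print("R(U) is a proper SO(3) rotation acting on K as in Eq. (3)")
```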
Then we consider the CP transformation \(\Phi_{i}(t,\vec{x})\to\Phi_{i}^{*}(t,-\vec{x})\). Because the definition of the standard CP transformation \(\Phi_{i}\to\Phi_{i}^{*}\) will be changed if we choose another set of basis to describe the scalar fields, e.g. \(\Phi_{i}^{\prime}=U_{ij}\Phi_{j}\), the CP transformations in the 2HDM are extended as [2; 4; 19; 20]
\[\text{GCP}:\ \ \Phi_{i}\to X_{ij}\Phi_{j}^{*}. \tag{4}\]
Here, \(X_{ij}\) is an arbitrary unitary matrix, and such CP transformations are called generalized CP (GCP) transformations. By plugging the GCP transformation into Eq. (2), we find that \(\vec{K}\) transforms under an improper \(O(3)_{K}\) rotation \(\bar{R}(X)\),
\[K_{0}\to K_{0},\ K_{a}\to\bar{R}_{ab}(X)K_{b},\quad\bar{R}(X)\equiv R(X)\, \text{diag}(1,-1,1). \tag{5}\]
Here, the \(R(X)\) is defined in Eq. (3). Besides, for any GCP transformation, one can always find a basis \(\Phi_{i}\) so that \(X_{ij}\) is a real rotation matrix [19]. Therefore GCP transformations are often classified into three cases [21; 22]:
\[\text{CP1}:\Phi_{1}\to\Phi_{1}^{*},\ \Phi_{2}\to\Phi_{2}^{*}, \tag{6}\] \[\text{CP2}:\Phi_{1}\to\Phi_{2}^{*},\ \Phi_{2}\to-\Phi_{1}^{*},\] (7) \[\text{CP3}:\ \begin{cases}\Phi_{1}\to\Phi_{1}^{*}\cos\theta+\Phi_{2}^{* }\sin\theta\\ \Phi_{2}\to-\Phi_{1}^{*}\sin\theta+\Phi_{2}^{*}\cos\theta\end{cases},\quad 0< \theta<\pi/2, \tag{8}\]
where CP3 is the most general CP transformation, while CP1 and CP2 are special cases satisfying \(\text{CP1}^{2}=\text{CP2}^{2}=\mathbb{1}\) up to a global sign.
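The geometric character of the three GCP classes can be made explicit by evaluating the improper rotation \(\bar{R}(X)\) of Eq. (5) for the transformation matrices of Eqs. (6)-(8), as in the following short sketch (illustrative only).

```python
import numpy as np

sigma = [np.array(m, dtype=complex) for m in
         ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

def R_bar(X):
    """Improper rotation of Eq. (5): R(X) diag(1,-1,1), with R(X) from Eq. (3)."""
    R = 0.5 * np.real(np.array([[np.trace(X.conj().T @ sigma[a] @ X @ sigma[b])
                                 for b in range(3)] for a in range(3)]))
    return R @ np.diag([1.0, -1.0, 1.0])

theta = 0.3
X_cp1 = np.eye(2, dtype=complex)                                    # Eq. (6)
X_cp2 = np.array([[0, 1], [-1, 0]], dtype=complex)                  # Eq. (7)
X_cp3 = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]], dtype=complex)  # Eq. (8)

for name, X in [("CP1", X_cp1), ("CP2", X_cp2), ("CP3", X_cp3)]:
    Rb = R_bar(X)
    print(name, "det =", round(float(np.linalg.det(Rb)), 3))
    print(np.round(Rb, 3))
# CP1 gives diag(1,-1,1): a mirror reflection through the 1-3 plane.
# CP2 gives minus the identity: a point reflection.
# CP3 gives a point reflection combined with an additional rotation about the K_2 axis.
```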
After showing that the basis and GCP transformations correspond to \(O(3)_{K}\) rotations in the \(\vec{K}\)-space, we examine the symmetry-conserving conditions on the 2HDM potential. The 2HDM potential in Eq. (1) can be written as a function of gauge orbits [12; 13; 14; 15],
\[\begin{split} V&=\xi_{\mu}K^{\mu}+\eta_{\mu\nu}K^{ \mu}K^{\nu}\\ &=\xi_{0}K_{0}+\eta_{00}K_{0}^{2}+\vec{\xi}\cdot\vec{K}+2K_{0} \vec{\eta}\cdot\vec{K}+\vec{K}^{T}E\vec{K}.\end{split} \tag{9}\]
Here, \(\vec{\xi}\) parametrizes the scalar quadratic couplings while \(E\) and \(\vec{\eta}\) parametrize the scalar quartic couplings. As discussed above, a GCP or basis transformation corresponds to some (improper) rotation \(R\) in the \(\vec{K}\)-space. If a tree-level potential is invariant under a rotation \(R\), i.e., \(V(K_{0},\vec{K})=V(K_{0},R\vec{K})\), its parameters should be invariant under the rotation \(R\),
\[\vec{\xi}=R\vec{\xi},\quad\vec{\eta}=R\vec{\eta},\quad E=RER^{T}. \tag{10}\]
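As an illustration, the invariance conditions of Eq. (10) can be tested numerically for a given rotation; the parameter values below are toy numbers chosen only to exhibit exact and softly broken symmetries, not a realistic 2HDM benchmark.

```python
import numpy as np

def respects(R, xi, eta, E, tol=1e-12):
    """Eq. (10): the potential is invariant under K -> R K iff
    xi = R xi, eta = R eta and E = R E R^T."""
    return (np.allclose(xi, R @ xi, atol=tol) and
            np.allclose(eta, R @ eta, atol=tol) and
            np.allclose(E, R @ E @ R.T, atol=tol))

R_cp1 = np.diag([1.0, -1.0, 1.0])      # CP1: mirror reflection (Eq. (5) with X = 1)
R_z2 = np.diag([-1.0, -1.0, 1.0])      # Z2: rotation by pi about the third axis

# toy parameters: vectors along e_3 and a diagonal quartic tensor E
xi = np.array([0.0, 0.0, 0.4])
eta = np.array([0.0, 0.0, -0.1])
E = np.diag([0.3, 0.2, 0.5])
print("CP1 conserved:", respects(R_cp1, xi, eta, E))          # True
print("Z2  conserved:", respects(R_z2, xi, eta, E))           # True

xi_soft = np.array([0.2, 0.0, 0.4])                            # quadratic term off-axis
print("softly broken Z2:", respects(R_z2, xi_soft, eta, E))    # False: only xi breaks it
```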
A complete analysis of the symmetries of the 2HDM Lagrangian must include the Yukawa interaction, which reads as
\[-\mathcal{L}_{\rm Yuk}=\bar{Q}_{L}y_{u,i}\tilde{\Phi}_{i}u_{R}+\bar{Q}_{L}y_{d,i }\Phi_{i}d_{R}+h.c.\,\quad i=1,2, \tag{11}\]
for one generation of \(u\)-quark and \(d\)-quark. Note that \(y_{d,i}\) and \(y_{u,i}^{*}\) transform like \(\Phi_{i}\) under the \(SU(2)_{\Phi}\) basis redefinition. Although the Yukawa coupling terms in the Lagrangian cannot be expressed in bilinear notation directly, it is shown that the bilinear notation can still be extended to discuss whether the Yukawa couplings break the global symmetries of the scalar potential [16]. This is done by projecting the Yukawa couplings into orbit space, and defining covariant vectors in the dual space of \(K^{\mu}\) in terms of the Yukawa couplings as
\[Y_{u}^{\mu}=y_{u,i}^{*}\sigma_{ij}^{\mu}y_{u,j},\quad Y_{d}^{\mu}=y_{d,i} \sigma_{ij}^{\mu}y_{d,j}^{*}. \tag{12}\]
In order to make sure that \(\mathcal{L}_{\rm Yuk}\) is invariant under the basis transformation \(\Phi_{i}\to U_{ij}\Phi_{j}\) or the CP transformation in the scalar sector \(\Phi_{i}\to X_{ij}\Phi_{j}^{*}\), the vector \(\vec{Y}\) projected by the Yukawa couplings should satisfy
\[\vec{Y}=R(U)\vec{Y}\quad\text{or}\quad\vec{Y}=\bar{R}(X)\vec{Y}. \tag{13}\]
### Examples of global symmetries
Next we show how to discuss some special symmetries that are widely considered in the orbit space. We start with two characteristic examples, the CP1 symmetry and the \(Z_{2}\) symmetry. The CP symmetry is often imposed on the potential because large CP violation is strongly constrained by experiments. From Eq. (5), the CP1 transformation \(\Phi_{i}\to\Phi_{i}^{*}\) corresponds to a mirror reflection in the \(\vec{K}\)-space. The \(Z_{2}\) symmetry is introduced to prevent flavor changing neutral interactions,1 and a softly broken \(Z_{2}\) symmetry is often considered. From Eq. (3), the \(Z_{2}\) transformation \(\Phi_{1}\to-\Phi_{1}\) corresponds to a rotation by \(\pi\) in a 2-dimensional plane of the \(\vec{K}\) space.
Footnote 1: For different types of \(Z_{2}\) charge assignments in the orbit space, see Table II in Appendix A.2.
Whether a 2HDM is invariant under the CP1 or \(Z_{2}\) transformation can be understood from the geometrical profile of the parameter tensor and vectors, as shown in Eqs. (10) and (13). Without loss of generality, we use an ellipsoid to visualize the \(3\times 3\) tensor \(E\), which possesses at least three \(C_{2}\) axes (principal axes) and three symmetry planes, and we illustrate these two examples in Fig. 1. The CP1 symmetric 2HDM potential satisfies a mirror reflection symmetry in the \(\vec{K}\) space, requiring all the parameter vectors to lie on the same reflection plane of \(E\). The \(Z_{2}\) symmetric potential is invariant under a rotation of \(\pi\) in the \(\vec{K}\) space; hence the parameter vectors should point along the same principal axis of \(E\). As for the softly broken \(Z_{2}\) symmetry, the quadratic term \(\vec{\xi}\) is allowed to break the \(Z_{2}\) symmetry, as in Fig. 1.
Following Refs. [21; 22], we list the other global symmetries in the scalar family space, characterized by the different geometric profiles of the scalar potential, in Table 1. The \(U(1)_{a}\) transformation \(\Phi_{1}\to e^{i\theta}\Phi_{1}\) corresponds to a rotation about a certain axis, the CP2 transformation corresponds to a point reflection, and the CP3 transformation corresponds to a point reflection followed by an additional rotation between \(0\) and \(\pi\). The geometric profiles clearly show the hierarchy chain of these global symmetries,
\[\text{CP1}<Z_{2}<\left\{\begin{aligned} &\text{CP2}\\ & U(1)_{a}\end{aligned}\right\}<\text{CP3}<U(2), \tag{14}\]
i.e., a \(Z_{2}\) symmetric tree-level 2HDM scalar potential must also satisfy the CP1 symmetry, and likewise along the chain. Regarding the GCP properties of the tree-level 2HDM scalar potential, the CP2 and CP3 symmetric conditions are stricter than those for CP1. Besides, neither the CP2 nor the CP3 symmetry can still be preserved after the Higgs field develops a non-vanishing vacuum expectation value. Therefore, we will only discuss CP1 conserving (CPC) conditions and denote CP1 as CP in the following.

Figure 1: Illustration of the parameter vectors and tensor of a 2HDM potential that obeys the CP1 symmetry (left) or a softly broken \(Z_{2}\) symmetry (right). The ellipsoid denotes the tensor \(E\) and black dashed lines denote its three principal axes. Red and blue arrows denote the directions of \(\vec{\eta}\) and \(\vec{\xi}\), respectively.
## III Effective potential and thermal correction
The global symmetries of thermal effective potential are important in the study of the vacuum structure and CP violation in the early universe. The use of bilinear notation simplifies, from a geometrical perspective, the analysis of the global symmetries of the effective potential. In this section, we employ the bilinear notation to evaluate the effective potential and discuss its global symmetries.
The thermal effective potential of the 2HDM is written as
\[V_{\text{eff}}(T)=V_{\text{tree}}+V_{\text{CW}}+V_{T}+V_{\text{ daisy}}, \tag{15}\]
where \(V_{\text{tree}}\) is the tree-level potential, \(V_{\text{CW}}\) is the one-loop Coleman-Weinberg potential at zero temperature [28], and \(V_{T}+V_{\text{daisy}}\) are the thermal corrections at finite temperature. Using the background field method, the one-loop Coleman-Weinberg potential calculated in Landau gauge under the \(\overline{\text{MS}}\) scheme is

\[\begin{split} V_{\text{CW}}(\phi_{c})&=\frac{1}{2}\text{Tr}\int\frac{d^{4}p_{E}}{(2\pi)^{4}}\ln\left[p_{E}^{2}+\textbf{M}^{2}(\phi_{c})\right]\\ &=\frac{1}{64\pi^{2}}\sum_{i}n_{i}m_{i}^{4}(\phi_{c})\left[\ln\frac{m_{i}^{2}(\phi_{c})}{\mu^{2}}-c_{i}\right].\end{split} \tag{16}\]

Here, \(p_{E}=(-ip^{0},\vec{p})\), \(\textbf{M}^{2}\) is the mass matrix of the scalars or fermions in the loop and \(\textbf{Tr}\) traces over the dimension of the mass matrix, \(m_{i}^{2}\) is the eigenvalue of \(\textbf{M}^{2}\) for the field \(i\), and \(n_{i}\) is the number of degrees of freedom of the field \(i\). The constant \(c_{i}\) equals \(5/6\) for gauge bosons and \(3/2\) for all other fields.

\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Symmetry & Transformation & Vector \(\vec{\eta}\), \(\vec{\xi}\) and \(\vec{Y}\) & Tensor \(E\) \\ \hline \(U(2)\) & \(\Phi_{i}\to U_{ij}\Phi_{j}\) & 0 & spherical \\ \hline CP3 & Eq. (8) & 0 & \(e_{1}=e_{2}\) \\ \hline CP2 & Eq. (7) & 0 & - \\ \hline \(U(1)_{a}\) & \(\Phi_{1}\to e^{i\theta}\Phi_{1}\) & collinear with \(\vec{e}_{3}\) & \(e_{1}=e_{2}\) \\ \hline \(Z_{2}\) & \(\Phi_{1,2}\rightarrow\pm\Phi_{1,2}\) & collinear with an axis \(\vec{e}_{i}\) & - \\ \hline CP1 & \(\Phi_{i}\rightarrow\Phi_{i}^{*}\) & orthogonal to an axis \(\vec{e}_{i}\) & - \\ \hline \end{tabular}
\end{table}
Table 1: Global symmetries and the corresponding geometric profiles of the parameter vectors and of the tensor \(E\). The \(\vec{e}_{i}\) are the directions of the three eigenvectors of \(E\), and \(e_{i}\) denotes the corresponding eigenvalues.
The effective potential of the 2HDM has been extensively studied in the literature [6; 7; 8; 9]. Typically, only the neutral or CP-even components of the Higgs boson doublets are treated as background fields, which breaks the \(SU(2)_{L}\) invariance explicitly. Consequently, the bilinear notation cannot be applied to study \(V_{\text{eff}}(\phi_{c})\) directly. In order to analyze the global symmetries of the effective potential using the bilinear notation, a global \(SU(2)_{L}\) invariance must be preserved in the calculation [16], which means that the masses in Eq. (16) need to be evaluated in an \(SU(2)_{L}\) invariant way. To achieve this, we treat all the components of the Higgs boson doublets \(\Phi_{i}\),
\[\Phi_{i}=\begin{pmatrix}\phi_{i\uparrow}\\ \phi_{i\downarrow}\end{pmatrix},\quad i=1,2, \tag{17}\]
as background fields, and \(K^{\mu}\) should be understood as bilinear forms of background fields in this section.
### Symmetries of Coleman-Weinberg potential
We first consider the zero-temperature effective potential by calculating the contributions to the Coleman-Weinberg potential from the gauge boson, fermion, and scalar loops, respectively.
Contributions from gauge boson loop.The masses of gauge bosons arise from the kinetic term \(|D_{\mu}\Phi_{i}|^{2}\) with \(D^{\mu}\Phi_{i}=(\partial^{\mu}+i\frac{g}{2}\sigma_{a}W_{a}^{\mu}+i\frac{g^{ \prime}}{2}B^{\mu})\Phi_{i}\), where \(i=1,2\). Expanding the
covariant derivative term directly yields the gauge boson mass term
\[\begin{split}\frac{1}{4}\Phi_{i}^{\dagger}(g\sigma_{a}W_{a}+g^{ \prime}B)^{2}\Phi_{i}=&\frac{1}{4}\Phi_{i}^{\dagger}(g^{\prime 2}B^{2}+2gg^{ \prime}BW_{a}\sigma_{a}+g^{2}\sigma_{a}\sigma_{b}W_{a}W_{b})\Phi_{i}\\ =&\frac{1}{4}\Phi_{i}^{\dagger}(g^{\prime 2}B^{2}+2gg^{ \prime}BW_{a}\sigma_{a}+g^{2}\sigma_{\{a}\sigma_{b\}}W_{a}W_{b})\Phi_{i}\\ =&\frac{\Phi_{i}^{\dagger}\Phi_{i}}{4}(g^{\prime 2}B^{2}+g^{ 2}W_{a}W_{a})+\frac{\Phi_{i}^{\dagger}\sigma_{a}\Phi_{i}}{2}gg^{\prime}BW_{a}. \end{split} \tag{18}\]
Then the gauge boson mass matrix in basis \(\vec{G}=(W_{1},W_{2},W_{3},B)\) is
\[\mathbf{M}_{G}^{2}(\Phi_{i})=\frac{\partial^{2}L}{\partial\vec{G}\partial\vec {G}}=\frac{g^{2}}{4}\begin{pmatrix}\Phi_{i}^{\dagger}\Phi_{i}&0&0&t_{W}\Phi_{i }^{\dagger}\sigma_{1}\Phi_{i}\\ 0&\Phi_{i}^{\dagger}\Phi_{i}&0&t_{W}\Phi_{i}^{\dagger}\sigma_{2}\Phi_{i}\\ 0&0&\Phi_{i}^{\dagger}\Phi_{i}&t_{W}\Phi_{i}^{\dagger}\sigma_{3}\Phi_{i}\\ t_{W}\Phi_{i}^{\dagger}\sigma_{1}\Phi_{i}&t_{W}\Phi_{i}^{\dagger}\sigma_{2} \Phi_{i}&t_{W}\Phi_{i}^{\dagger}\sigma_{3}\Phi_{i}&t_{W}^{2}\Phi_{i}^{\dagger} \Phi_{i}\end{pmatrix} \tag{19}\]
where \(t_{W}=\tan\theta_{W}=g^{\prime}/g\). For a matrix with the shape of Eq. (19), its eigenvalues are
\[\text{Eigenvalues}\begin{pmatrix}e&&a\\ &e&&b\\ &&e&c\\ a&b&c&d\end{pmatrix}=(e,e,\frac{d+e\pm\sqrt{4(a^{2}+b^{2}+c^{2})+(d-e)^{2}}}{ 2}). \tag{20}\]
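As a quick cross-check of the closed form in Eq. (20), one can compare it against a direct numerical diagonalization for arbitrary (made-up) entries; a minimal NumPy sketch:

```python
import numpy as np

# Numerical check of the closed-form eigenvalues in Eq. (20) for a matrix of the
# bordered shape appearing in the gauge-boson mass matrix (entries are arbitrary).
a, b, c, d, e = 0.3, -0.7, 0.2, 1.5, 1.0
M = np.array([[e, 0, 0, a],
              [0, e, 0, b],
              [0, 0, e, c],
              [a, b, c, d]])
disc = np.sqrt(4*(a**2 + b**2 + c**2) + (d - e)**2)
analytic = sorted([e, e, (d + e + disc)/2, (d + e - disc)/2])
numeric  = sorted(np.linalg.eigvalsh(M))
print(np.allclose(analytic, numeric))   # expected: True
```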
With the help of the Fierz identities,
\[(\Phi_{i}^{\dagger}\sigma_{a}\Phi_{i})(\Phi_{j}^{\dagger}\sigma_{a}\Phi_{j})=( \Phi_{1}^{\dagger}\Phi_{1}-\Phi_{2}^{\dagger}\Phi_{2})^{2}+4(\Phi_{1}^{\dagger }\Phi_{2})(\Phi_{2}^{\dagger}\Phi_{1})=|\vec{K}|^{2}, \tag{21}\]
we obtain the four eigenvalues of the gauge boson mass matrix (the first of which is doubly degenerate),
\[\begin{split} m_{W^{\pm}}^{2}&=\frac{g^{2}}{4}K_{0},\\ m_{Z}^{2}&=\frac{g^{2}}{8}\left((1+t_{W}^{2})K_{0}+ \sqrt{4t_{W}^{2}|\vec{K}|^{2}+(t_{W}^{2}-1)^{2}K_{0}^{2}}\right),\\ m_{\gamma}^{2}&=\frac{g^{2}}{8}\left((1+t_{W}^{2})K_{0}- \sqrt{4t_{W}^{2}|\vec{K}|^{2}+(t_{W}^{2}-1)^{2}K_{0}^{2}}\right).\end{split} \tag{22}\]
Notice that there is a massless photon when the vacuum is neutral, i.e., \(K_{0}=|\vec{K}|\). By plugging Eq. (22) into Eq. (16), we find that the gauge boson loop contribution to the Coleman-Weinberg potential, \(V_{\text{CW}}^{(G)}=V_{\text{CW}}^{(G)}(K_{0},|\vec{K}|)\), is spherically symmetric and preserves any rotational symmetry in the \(\vec{K}\) space, i.e.,
\[V_{\text{CW}}^{(G)}(K_{0},\vec{K})=V_{\text{CW}}^{(G)}(K_{0},R\vec{K}),\qquad R \in O(3).\]
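The same comparison can be made for the full mass matrix of Eq. (19): build it from background doublets, diagonalize numerically, and compare with Eq. (22). A small sketch follows, with hypothetical couplings \(g\), \(g^{\prime}\) and random doublet values; note that the vector \(\Phi_{i}^{\dagger}\sigma_{a}\Phi_{i}\) entering the matrix carries an \(SU(2)_{L}\) adjoint index, but by the Fierz identity of Eq. (21) its length equals \(|\vec{K}|\).

```python
import numpy as np

rng = np.random.default_rng(0)
g, gp = 0.65, 0.36                     # hypothetical gauge couplings
tW = gp / g
phi = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # phi[i] = Phi_i (background values)

sigma = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
K0 = sum(np.vdot(phi[i], phi[i]).real for i in range(2))
S  = np.array([sum(np.vdot(phi[i], s @ phi[i]).real for i in range(2)) for s in sigma])
# |S| equals |K_vec| by the Fierz identity, Eq. (21).

# Gauge-boson mass matrix of Eq. (19), basis (W1, W2, W3, B), overall factor g^2/4 included.
MG2 = (g**2 / 4) * np.block([[K0 * np.eye(3), tW * S[:, None]],
                             [tW * S[None, :], np.array([[tW**2 * K0]])]])

disc = np.sqrt(4 * tW**2 * (S @ S) + (tW**2 - 1)**2 * K0**2)
analytic = sorted([g**2/4*K0, g**2/4*K0,
                   g**2/8*((1 + tW**2)*K0 + disc),
                   g**2/8*((1 + tW**2)*K0 - disc)])
print(np.allclose(sorted(np.linalg.eigvalsh(MG2)), analytic))   # expected: True
```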
Contributions from the quark loop.Typically, only the contribution from the heaviest quark needs to be included in the effective potential. However, we include both the top and bottom quarks in our calculation to ensure an explicit \(SU(2)_{L}\) invariance. The top and bottom quark masses mix due to the presence of charged background fields, and the fermion mass matrix given by \(-\partial^{2}\mathcal{L}/\partial\bar{\psi}_{L}^{i}\partial\psi_{R}^{j}\) is
\[(\bar{t}_{L},\bar{b}_{L})\mathbf{M}_{F}\begin{pmatrix}t_{R}\\ b_{R}\end{pmatrix},\quad\mathbf{M}_{F}=\begin{pmatrix}y_{it}\phi_{i\downarrow}^{*}&y_{ib}\phi_{i\uparrow}\\ -y_{it}\phi_{i\uparrow}^{*}&y_{ib}\phi_{i\downarrow}\end{pmatrix}. \tag{23}\]
We obtain the fermion masses after a singular value decomposition,
\[L^{-1}\mathbf{M}_{F}R=\begin{pmatrix}m_{t}\\ &m_{b}\end{pmatrix},\quad m_{t/b}^{2}=\frac{B\pm\sqrt{B^{2}+C}}{2}, \tag{24}\]
where, with the help of vector \(\vec{Y}\) defined in Eq. (12), \(B\) and \(C\) can be written as \(SO(3)_{K}\) basis invariant forms as follows:
\[\begin{split} B=&\frac{1}{2}(Y_{t0}+Y_{b0})K_{0}+\frac{1}{2}( \vec{Y}_{t}+\vec{Y}_{b})\cdot\vec{K},\\ C=&-\frac{1}{2}(Y_{t}\cdot Y_{b})K_{0}^{2}-K_{0}(Y_{t0}\vec{Y}_{b}+Y_{b 0}\vec{Y}_{t})\cdot\vec{K}\\ &+\frac{1}{2}\vec{K}\cdot(\vec{Y}_{t}\cdot\vec{Y}_{b}-Y_{t0}Y_{b0}- \vec{Y}_{t}\otimes\vec{Y}_{b}-\vec{Y}_{b}\otimes\vec{Y}_{t})\cdot\vec{K}.\end{split} \tag{25}\]
The masses can be simplified in the case that the Yukawa couplings exhibit a large hierarchy; for example, when \(y_{t}\gg y_{b}\), only the top quark mass \(m_{t}^{2}(K)=(Y_{t0}K_{0}+\vec{Y}_{t}\cdot\vec{K})/4\) needs to be considered. Equations (24) and (25) show that the symmetry of \(V_{\text{CW}}^{(F)}\) is completely determined by the direction of vector \(\vec{Y}\). When the vector \(\vec{Y}\) is invariant under the rotation, i.e., \(\vec{Y}_{t/b}=R\vec{Y}_{t/b}\) for \(R\in O(3)\),
\[V_{\text{CW}}^{(F)}(K_{0},\vec{K})=V_{\text{CW}}^{(F)}(K_{0},R\vec{K}).\]
When \(\vec{Y}_{t/b}\neq R\vec{Y}_{t/b}\),
\[V_{\text{CW}}^{(F)}(K_{0},\vec{K})\neq V_{\text{CW}}^{(F)}(K_{0},R\vec{K}).\]
Therefore, whether the fermion loop contribution to \(V_{\text{CW}}\) breaks the global symmetry of the tree-level potential depends on the pattern of Yukawa couplings.
Contributions from the scalar loop.The calculation of \(V_{\text{CW}}^{(S)}(K_{0},\vec{K})\) can be performed straightforwardly from Eq. (16), in which the mass matrix of scalars is given by
\[\mathbf{M}_{S}^{2}(\varphi)_{ab}=\frac{\delta^{2}V}{\delta\varphi_{a}\delta \varphi_{b}}, \tag{26}\]
where \(\varphi_{a}\) are real vectors in the 8-dimensional field space. Though \({\bf M}_{S}^{2}\) cannot be diagonalized analytically, we still find a way to investigate the global symmetries of \(V_{\rm CW}^{(S)}\). We first employ the notations in Ref. [30], where the components of \(\varphi_{a}\) are ordered as
\[\varphi_{a}^{T}=\left({\rm Re}\,\phi_{1\uparrow},{\rm Im}\,\phi_{1\uparrow},{ \rm Re}\,\phi_{2\uparrow},{\rm Im}\,\phi_{2\uparrow},{\rm Re}\,\phi_{1\downarrow },{\rm Im}\,\phi_{1\downarrow},{\rm Re}\,\phi_{2\downarrow},{\rm Im}\,\phi_{2 \downarrow}\right), \tag{27}\]
and \(\varphi_{a}\) is related to the bilinear form by \(K^{\mu}=\varphi_{a}\Sigma_{ab}^{\mu}\varphi_{b}\). The \(8\times 8\) matrices \(\Sigma^{\mu}\) are defined as
\[\Sigma^{\mu}=\Sigma_{4}^{\mu}\oplus\Sigma_{4}^{\mu},\quad\Sigma_{4}^{0}= \mathbb{1}_{4},\quad\Sigma_{4}^{1}=\begin{pmatrix}0&\mathbb{1}_{2}\\ \mathbb{1}_{2}&0\end{pmatrix},\quad\Sigma_{4}^{2}=\begin{pmatrix}0&\mathbb{1}_ {2}\\ -\mathbb{1}_{2}&0\end{pmatrix},\quad\Sigma_{4}^{3}=\begin{pmatrix}\mathbb{1}_{2 }&0\\ 0&-\mathbb{1}_{2}\end{pmatrix}, \tag{28}\]
where \(\mathbb{1}_{d}\) is the \(d\times d\) identity matrix and \(\mathbb{1}_{2}\equiv(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix})\).
The \(V_{\rm CW}^{(S)}\) can be expanded in powers of \({\bf M}_{S}^{2}\) [31],
\[\begin{split} V_{\rm CW}^{(S)}&=\frac{1}{2}{\bf Tr}\int\frac{d^{4}p_{E}}{(2\pi)^{4}}\ln\left[p_{E}^{2}+{\bf M}_{S}^{2}\right]\\ &=\frac{1}{2}\int\frac{d^{4}p_{E}}{(2\pi)^{4}}\left[{\bf Tr}\sum_{n=1}^{\infty}\frac{1}{n}\left(-\frac{{\bf M}_{S}^{2}}{p_{E}^{2}}\right)^{n}+\ln p_{E}^{2}\right],\end{split} \tag{29}\]
where \({\bf Tr}\) stands for taking a trace over the 8-dimensional field space. For example, the leading power is
\[{\bf Tr}({\bf M}_{S}^{2})=\left(20\eta_{00}+4\,{\rm tr}(E)\right)K_{0}+24 \vec{K}\cdot\vec{\eta}+8\xi_{0}, \tag{30}\]
which is consistent with Ref. [30]. We show that all the traces \({\bf Tr}({\bf M}_{S}^{2n})\) in Eq. (29) are functions of gauge orbits \(K^{\mu}\), and the complete calculations are deferred to Appendix B. Here, we present the final calculation result expressed in the bilinear notation as
\[V_{\rm CW}^{(S)}={\cal F}\left(S_{p}^{\mu\nu},\eta^{\mu\nu}\right). \tag{31}\]
The function \({\cal F}\) only depends on the trace of the inner products of \(S_{p}^{\mu\nu}\) and \(\eta^{\mu\nu}\), and \(S_{p}^{\mu\nu}\) is defined as
\[S_{p}^{\mu\nu}=F(p)^{\mu}K^{\nu}+F(p)^{\nu}K^{\mu}-g^{\mu\nu}(F(p)K). \tag{32}\]
Here, \(F(p)_{\mu}\) is a function of \(K^{\mu}\) that depends on the integer \(p\),
\[F(p)_{0}\equiv\sum_{k=0}^{p/2}C_{p}^{2k}(A_{0})^{p-2k}|\vec{A}|^{2k}, \tag{33}\]
\[\vec{F}(p)\equiv\left\{\begin{aligned} -\sum_{k=0}^{(p-1)/2}C_{p}^{2k+1}(A_{0})^{p-2k-1}| \vec{A}|^{2k}\vec{A}&(p\neq 0),\\ 0&(p=0),\end{aligned}\right. \tag{34}\]
where \((A_{0},\vec{A})=A_{\mu}=2\eta_{\mu\nu}K^{\nu}+\xi_{\mu}\) and \(C_{p}^{k}\) is the binomial coefficient. Notice that the global symmetries are determined only by the 3-dimensional vector \(\vec{A}\).
Upon expressing \(V_{\rm CW}^{(S)}\) as a function in the orbit space, we find that the tensor structures in \(V_{\rm CW}^{(S)}\) are constructed entirely from the tree-level parameter tensors \(\eta^{\mu\nu}\) and \(\xi^{\mu}\), i.e., no new tensor structure appears. Therefore, the rotation symmetries of \(V_{\rm CW}^{(S)}\) in the \(\vec{K}\)-space are determined by the tree-level parameter tensors. If the tree-level potential is invariant under a rotation \(R\in O(3)\) in the \(\vec{K}\)-space, i.e., \(V_{\rm tree}(K_{0},\vec{K})=V_{\rm tree}(K_{0},R\vec{K})\), the scalar loop contribution \(V_{\rm CW}^{(S)}\) also preserves the rotation invariance,
\[V_{\rm CW}^{(S)}(K_{0},\vec{K})=V_{\rm CW}^{(S)}(K_{0},R\vec{K}).\]
### Symmetries of thermal potential
As for the finite temperature corrections in Eq. (15), \(V_{T}\) stands for the contribution from one-loop diagrams, and \(V_{\rm daisy}\) denotes the correction from higher loop Daisy diagrams [31]. The one-loop correction \(V_{T}\) is given by
\[V_{T}=\sum_{i}n_{i}\frac{T^{4}}{2\pi^{2}}J_{B/F}\left(m_{i}^{2}/T^{2}\right), \tag{35}\]
where the thermal bosonic function \(J_{B}\) and fermionic function \(J_{F}\) are
\[J_{B}\left(m^{2}/T^{2}\right)= -\frac{\pi^{4}}{45}+\frac{\pi^{2}}{12}\frac{m^{2}}{T^{2}}-\frac{ \pi}{6}\left(\frac{m^{2}}{T^{2}}\right)^{3/2}-\frac{1}{32}\frac{m^{4}}{T^{4}} \log\frac{m^{2}}{a_{b}T^{2}}\] \[-2\pi^{7/2}\sum_{\ell=1}^{\infty}(-1)^{\ell}\frac{\zeta(2\ell+1) }{(\ell+1)!}\Gamma\left(\ell+\frac{1}{2}\right)\left(\frac{m^{2}}{4\pi^{2}T^ {2}}\right)^{\ell+2}, \tag{36}\] \[J_{F}\left(m^{2}/T^{2}\right)= \frac{7\pi^{4}}{360}-\frac{\pi^{2}}{24}\frac{m^{2}}{T^{2}}-\frac{ 1}{32}\frac{m^{4}}{T^{4}}\log\frac{m^{2}}{a_{f}T^{2}}\] \[-\frac{\pi^{7/2}}{4}\sum_{\ell=1}^{\infty}(-1)^{\ell}\frac{\zeta(2 \ell+1)}{(\ell+1)!}\left(1-2^{-2\ell-1}\right)\Gamma\left(\ell+\frac{1}{2} \right)\left(\frac{m^{2}}{\pi^{2}T^{2}}\right)^{\ell+2}. \tag{37}\]
Here, \(a_{b}=16a_{f}=16\pi^{2}e^{3/2-2\gamma_{E}}\), and \(\zeta\) is the Riemann-\(\zeta\) function. The leading \(T\)-dependent terms of \(J_{B/F}\) are given by the mass-square terms,
\[J_{B}=\frac{\pi^{2}}{12}\frac{m^{2}}{T^{2}}+O(T^{-4}),\quad J_{F}=-\frac{\pi^{ 2}}{24}\frac{m^{2}}{T^{2}}+O(T^{-4}), \tag{38}\]
where the background-field-independent terms are dropped. By collecting the results in Eqs. (22), (24) and (30), we obtain the leading contributions from gauge boson loops, fermion loops, and scalar loops to \(V_{T}\) as follows:
\[V_{T}^{(G)} \approx\frac{g^{2}T^{2}}{32}(3+t_{W}^{2})K_{0}, \tag{39}\] \[V_{T}^{(F)} \approx-\frac{T^{2}}{8}\left[(Y_{t0}+Y_{b0})K_{0}+(\vec{Y}_{t}+ \vec{Y}_{b})\cdot\vec{K})\right],\] (40) \[V_{T}^{(S)} \approx\frac{T^{2}}{6}\left[\left(5\eta_{00}+\text{tr}(E)\right) K_{0}+6\vec{K}\cdot\vec{\eta}+2\xi_{0}\right]. \tag{41}\]
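Before turning to the parameter shifts these corrections induce, note that the truncated high-temperature expansions in Eqs. (36)-(38) are easy to validate numerically against the standard integral representations \(J_{B/F}(y)=\int_{0}^{\infty}dx\,x^{2}\ln\left[1\mp e^{-\sqrt{x^{2}+y}}\right]\) (not written out above). A minimal sketch, assuming SciPy is available and keeping only the terms shown explicitly in Eqs. (36)-(37):

```python
import numpy as np
from scipy.integrate import quad

def J_exact(y, fermion=False):
    """Standard integral representation of the thermal functions J_B (bosons) and J_F (fermions)."""
    sign = +1.0 if fermion else -1.0
    integrand = lambda x: x**2 * np.log(1.0 + sign * np.exp(-np.sqrt(x**2 + y)))
    return quad(integrand, 0.0, 50.0)[0]

GAMMA_E = 0.5772156649015329

def J_B_highT(y):
    """Truncated high-T expansion of Eq. (36), dropping the resummed tail."""
    a_b = 16.0 * np.pi**2 * np.exp(1.5 - 2.0 * GAMMA_E)
    return -np.pi**4/45 + np.pi**2/12*y - np.pi/6*y**1.5 - y**2/32*np.log(y/a_b)

def J_F_highT(y):
    """Truncated high-T expansion of Eq. (37)."""
    a_f = np.pi**2 * np.exp(1.5 - 2.0 * GAMMA_E)
    return 7*np.pi**4/360 - np.pi**2/24*y - y**2/32*np.log(y/a_f)

for y in (0.01, 0.1, 1.0):
    print(y, J_exact(y), J_B_highT(y), J_exact(y, fermion=True), J_F_highT(y))
```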
We find that the corrections from Eqs. (39)-(41) to the tree-level potential are equivalent to shifting the quadratic couplings \(\xi_{\mu}\) in the orbit space, i.e.,
\[\xi_{0} \rightarrow\xi_{0}+T^{2}c_{T0},\] \[\vec{\xi} \rightarrow\vec{\xi}+T^{2}\vec{c}_{T}, \tag{42}\]
where
\[c_{T0} =\frac{g^{2}}{32}(3+t_{W}^{2})-\frac{Y_{t0}+Y_{b0}}{8}+\frac{5 \eta_{00}+\text{tr}(E)}{6},\] \[\vec{c}_{T} =\frac{1}{8}\left[8\vec{\eta}-\vec{Y}_{t}-\vec{Y}_{b}\right]. \tag{43}\]
The direction of \(\vec{\xi}\) is shifted by the quartic couplings \(\vec{\eta}\) and the Yukawa interactions \(\vec{Y}\) through the thermal corrections. At sufficiently high temperature, with \(T^{2}\gg|\vec{\xi}|/|\vec{c}_{T}|\), the direction of the shifted \(\vec{\xi}\) aligns with the direction of \(\vec{c}_{T}\). As a result, the symmetries of the thermal effective potential under the basis transformation and the CP transformation are determined by \(\vec{c}_{T}\).
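In practice, the shift coefficients of Eq. (43) are simple algebraic combinations of the couplings; a direct transcription (illustrative only, with all inputs supplied by the user in the conventions of Eq. (43)):

```python
import numpy as np

def thermal_shift(g, gp, Yt0, Yb0, Yt_vec, Yb_vec, eta00, eta_vec, E):
    """Leading thermal shift of the quadratic couplings, Eq. (43):
    xi_0 -> xi_0 + T^2 c_T0,  xi_vec -> xi_vec + T^2 c_T_vec."""
    E = np.asarray(E)
    tW2 = (gp / g)**2
    c_T0 = g**2/32*(3 + tW2) - (Yt0 + Yb0)/8 + (5*eta00 + np.trace(E))/6
    c_T  = (8*np.asarray(eta_vec) - np.asarray(Yt_vec) - np.asarray(Yb_vec)) / 8
    return c_T0, c_T
```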
At high temperatures, the contribution from higher loop Daisy diagrams \(V_{\text{daisy}}\) is comparable with \(V_{T}\), and it is given by [31]
\[V_{\text{daisy}}=-\frac{T}{12\pi}\sum_{i=\text{bosons}}n_{i}\left[\mathcal{M} _{i}^{3}(\phi_{c},T)-m_{i}^{3}(\phi_{c})\right]. \tag{44}\]
Here, the \(\mathcal{M}_{i}(\phi_{c},T)\) are the thermally corrected masses calculated from \(V_{\text{tree}}+V_{T}\), which is obtained from the tree-level potential by the parameter shift \(\xi^{\mu}\rightarrow\xi^{\mu}+T^{2}c_{T}^{\mu}\). Therefore, the \(T\)-dependent terms in \(\mathcal{M}_{i}(\phi_{c},T)\) are of the form \(T^{2}c_{T}^{\mu}\). As \(c_{T}^{0}\) plays no role in global transformations, the behavior of \(V_{\text{daisy}}\) under the \(O(3)_{K}\) transformation depends only on \(\vec{c}_{T}\).
After understanding the behavior of \(V_{\text{CW}}\), \(V_{T}\) and \(V_{\text{daisy}}\) under the \(O(3)_{K}\) transformation, we are ready to discuss whether a global symmetry preserved by the tree-level potential
will be violated by the loop corrections. Consider a tree-level potential that possesses the symmetry of a basis or CP transformation; then the potential is invariant under a rotation \(R\) in the \(\vec{K}\)-space, \(V_{\rm tree}(K_{0},\vec{K})=V_{\rm tree}(K_{0},R\vec{K})\), and its parameters satisfy
\[\vec{\xi}=R\vec{\xi},\quad\vec{\eta}=R\vec{\eta},\quad E=RER^{T}. \tag{45}\]
The _only_ quantum correction that may violate the symmetry is the contribution from fermion loops. The global symmetry is maintained in effective potential if and only if all the Yukawa couplings are invariant under \(R\), i.e., \(\vec{Y}=R\vec{Y}\).
If the symmetry is softly broken at tree level, i.e., only the scalar quadratic coupling \(\vec{\xi}\neq R\vec{\xi}\) violates the symmetry while the other conditions in Eq. (45) are preserved, then the symmetry violation effect from the soft terms tends to be suppressed at high temperature. This is because the leading thermal corrections shift the scalar quadratic coupling \(\vec{\xi}\) by the Yukawa couplings \(\vec{Y}\) and the scalar quartic couplings \(\vec{\eta}\), and both \(\vec{Y}\) and \(\vec{\eta}\) preserve the symmetry.
Another noteworthy example is the custodial symmetry. In the orbit space, the custodial symmetry of the 2HDM does not correspond to a rotation symmetry but a shift symmetry [32]. As the effective potential is not invariant under any shift symmetry in the orbit space, the custodial symmetry of 2HDM is bound to be broken by the effective potential.
## IV Bilinear notation after EWSB
In this section, we extend the bilinear notation to discuss EWSB and physical fields. There are two reasons to discuss the EWSB in the bilinear notation.
Firstly, a global symmetry exhibited by the potential, as shown in Table 1, can be broken spontaneously after the potential develops a vacuum. For example, consider the CP symmetry. Even if the potential is explicitly CP conserving, \(V(\Phi_{1},\Phi_{2})|_{\Phi_{i}\rightarrow\Phi_{i}^{*}}=V(\Phi_{1},\Phi_{2})\), the physical fields, which are fluctuations around the vacuum, may still break CP symmetry after EWSB if the vacuum has an irremovable CP phase as follows:
\[\langle\Phi_{1}\rangle=\begin{pmatrix}0\\ v_{1}\end{pmatrix},\quad\langle\Phi_{2}\rangle=\begin{pmatrix}0\\ v_{2}e^{i\delta}\end{pmatrix}. \tag{46}\]
This is called spontaneous CP violation (SCPV). In the bilinear notation, SCPV happens when the potential, but not the vacuum, is invariant under a mirror reflection. In this case, there are two degenerate vacua related by a CP transformation, as in Fig. 2. After
analyzing the vacuum conditions in the orbit space, we can easily determine whether the CP symmetry or other global symmetry is spontaneously broken.
Secondly, exploring the physical fields after the EWSB is necessary for performing on-shell renormalization. The renormalized effective potential can be expressed fully in the bilinear notation if we can perform the on-shell renormalization in the orbit space. For that, we examine the vacuum structures in the orbit space and investigate the relations between the field space and orbit space. Furthermore, we demonstrate that the mass matrix of the physical neutral scalars corresponds to a geometric structure in the orbit space, making it convenient to handle the mass spectrum and on-shell renormalization.
### Vacuum condition
We start with the vacuum conditions of \(V(K^{\mu})\), where \(V(K^{\mu})\) represents the tree-level or effective potential in the orbit space. Figure 3 displays the light-cone in the orbit space, and the light-cone is a hyper-surface defined by \(K_{0}=|\vec{K}|\). The orbit space inside the forward light-cone \(LC^{+}\) is the physical region, satisfying \(K_{0}\geq|\vec{K}|\)[12; 13; 14; 15]. A neutral vacuum expectation value requires the minimum of the potential, denoted as \(K_{v}^{\mu}\), to lie on the \(LC^{+}\), i.e., \(K_{v,0}=|\vec{K}_{v}|\)[12; 13; 14; 15]. Therefore, \(K_{v}^{\mu}\) is a conditional minimum of \(V(K^{\mu})\) on the \(LC^{+}\).
Figure 2: Illustration of the tree-level parameters for an SCPV potential. Here \(\vec{K}_{v}\) and \(\vec{K}_{v^{\prime}}\) are a pair of degenerate vacuum expectation values related by a mirror reflection, with the reflection plane spanned by \(\vec{\xi}\) and \(\vec{\eta}\).

The vacuum of the potential \(V(K^{\mu})\) is found by minimizing the function \(V_{u}(K^{\mu})=V(K^{\mu})-uL(K^{\mu})\), where \(u\) is a Lagrange multiplier and \(L(K^{\mu})=0\) is the light-cone condition, with \(L(K^{\mu})\) defined as
\[L(K_{0},\vec{K})=K_{0}^{2}-|\vec{K}|^{2}=4K_{+}K_{-}-|\vec{K}_{T}|^{2}. \tag{47}\]
Here, for the convenience, we introduce the light-cone coordinates
\[K_{\pm}\equiv\frac{(K_{0}\pm K_{3})}{2},\qquad\vec{K}_{\rm T}\equiv(K_{1},K_{2}), \tag{48}\]
which are defined after rotating the vacuum along the \(K_{3}\) direction, i.e., \(K_{v}^{\mu}=\frac{v^{2}}{2}(1,0,0,1)^{T}\). The solution of the conditional minimum satisfies
\[\left.\frac{\partial V}{\partial K_{-}}\right|_{K_{v}}=2v^{2}u>0,\quad\left. \frac{\partial V}{\partial K_{+}}\right|_{K_{v}}=0,\quad\left.\frac{\partial V }{\partial\vec{K}_{\rm T}}\right|_{K_{v}}=0. \tag{49}\]
Note that we require \(\frac{\partial V}{\partial K_{-}}>0\) to ensure that there is no global minimum inside the light-cone, which would correspond to a charged vacuum.
In addition to the conditions in Eq. (49), we need to make sure that \(K_{v}\) is a minimum rather than a saddle point. In the 4-dimensional orbit space, \(K_{v}\) is the tangent point of \(LC^{+}\) and an equipotential surface \(\mathcal{M}_{\rm vev}\) defined by \(V(K^{\mu})=V(K_{v}^{\mu})\) [14], and the normal direction of their tangent space is \(K_{-}\), as shown in Fig. 3. Therefore, the requirement that \(K_{v}\) is not a saddle point indicates that \(\mathcal{M}_{\rm vev}\) must lie outside of \(LC^{+}\). Equivalently, the distance between \(LC^{+}\) and \(\mathcal{M}_{\rm vev}\) is non-negative. Expanding the distance between \(LC^{+}\) and \(\mathcal{M}_{\rm vev}\) at their tangent point yields
\[\delta h=(\delta K_{+},\delta\vec{K}_{\rm T})\ {\bf M}_{\rm dist}^{2}\begin{pmatrix} \delta K_{+}\\ \delta\vec{K}_{\rm T}\end{pmatrix},\quad{\bf M}_{\rm dist}^{2}=\frac{1}{ \partial V/\partial K_{-}}\begin{pmatrix}\frac{\partial^{2}V_{u}}{\partial K _{+}^{2}}&\frac{\partial^{2}V_{u}}{\partial K_{+}\partial\vec{K}_{\rm T}}\\ \frac{\partial^{2}V_{u}}{\partial K_{+}\partial\vec{K}_{\rm T}}&\frac{ \partial^{2}V_{u}}{\partial\vec{K}_{\rm T}^{2}}\end{pmatrix}. \tag{50}\]
Figure 3: Vacuum expectation value and light-cone coordinates in the orbit space. The yellow surface denotes \(LC^{+}\) and the green denotes the equipotential surface \(\mathcal{M}_{\rm vev}\). \(K_{v}\) is the tangent point of these 3-dimensional hyper-surfaces.
Therefore, the matrix \(\mathbf{M}_{\rm dist}^{2}\) must be positive definite. Here, the distance \(\delta h\) is measured along the coordinate \(K_{-}\). As will be shown later, the distance matrix \(\mathbf{M}_{\rm dist}^{2}\) between the two hyper-surfaces directly yields the neutral scalar mass matrix.
Now we have introduced the vacuum conditions fully in the orbit space. These conditions apply to both the tree-level and the effective potentials. Specifically, the tree-level potential in Eq. (9) can be written in terms of the light-cone coordinates as follows,
\[V_{\rm tree}= \xi_{+}K_{+}+\xi_{-}K_{-}+\vec{\xi}_{\rm T}\cdot\vec{K}_{\rm T}+ \left(K_{+},K_{-},\vec{K}_{\rm T}\right)\begin{pmatrix}\eta_{++}&\eta_{+-}& \vec{\eta}_{\rm T+}\\ \eta_{+-}&\eta_{--}&\vec{\eta}_{\rm T-}\\ \vec{\eta}_{\rm T+}&\vec{\eta}_{\rm T-}&\eta_{\rm TT}\end{pmatrix}\begin{pmatrix} K_{+}\\ K_{-}\\ \vec{K}_{\rm T}\end{pmatrix}. \tag{51}\]
Then the minimal conditions for the tree-level potential from Eq. (49) are
\[\left.\frac{\partial V_{\rm tree}}{\partial K_{-}}\right|_{K_{v}} =\xi_{-}+v^{2}\eta_{+-}=2v^{2}u>0,\] \[\left.\frac{\partial V_{\rm tree}}{\partial K_{+}}\right|_{K_{v}} =\xi_{+}+v^{2}\eta_{++}=0,\] \[\left.\frac{\partial V_{\rm tree}}{\partial\vec{K}_{\rm T}}\right|_{K_{v}} =\vec{\xi}_{\rm T}+v^{2}\vec{\eta}_{\rm T+}=0, \tag{52}\]
which are equivalent to the minimal conditions given in Ref. [12].
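As a consistency check, the conditions in Eq. (52) follow from differentiating Eq. (51) directly; a short SymPy sketch, writing the symmetric quartic block of Eq. (51) out in components (variable names are ad hoc):

```python
import sympy as sp

# Light-cone coordinates (K+, K-, two transverse components) and tree-level couplings of Eq. (51).
Kp, Km, KT1, KT2, v = sp.symbols('Kp Km KT1 KT2 v', real=True)
xip, xim, xiT1, xiT2 = sp.symbols('xip xim xiT1 xiT2', real=True)
(etapp, etapm, etamm, etaT1p, etaT2p, etaT1m, etaT2m,
 etaT11, etaT12, etaT22) = sp.symbols(
    'etapp etapm etamm etaT1p etaT2p etaT1m etaT2m etaT11 etaT12 etaT22', real=True)

K   = sp.Matrix([Kp, Km, KT1, KT2])
xi  = sp.Matrix([xip, xim, xiT1, xiT2])
eta = sp.Matrix([[etapp,  etapm,  etaT1p, etaT2p],
                 [etapm,  etamm,  etaT1m, etaT2m],
                 [etaT1p, etaT1m, etaT11, etaT12],
                 [etaT2p, etaT2m, etaT12, etaT22]])
V = (xi.T * K)[0] + (K.T * eta * K)[0]

vac = {Kp: v**2/2, Km: 0, KT1: 0, KT2: 0}
print(sp.expand(sp.diff(V, Km).subs(vac)))    # xim + v**2*etapm  (to be identified with 2 v^2 u)
print(sp.expand(sp.diff(V, Kp).subs(vac)))    # xip + v**2*etapp  (vanishes at the vacuum)
print(sp.expand(sp.diff(V, KT1).subs(vac)))   # xiT1 + v**2*etaT1p (first transverse component)
```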
### A geometrical view of the scalar mass matrix
After the potential develops a vacuum expectation value, the scalar fields become massive. The field components after the EWSB are fluctuations around the vacuum. Without loss of generality, we use the Higgs basis, in which the vacuum \(v\) is rotated to the first doublet, and the field components are
\[H_{1}=\begin{pmatrix}G^{+}\\ \frac{v+\phi+iG^{0}}{\sqrt{2}}\end{pmatrix},\hskip 14.226378ptH_{2}=\begin{pmatrix} H^{+}\\ \frac{R+iI}{\sqrt{2}}\end{pmatrix}, \tag{53}\]
where \(\phi,R,I\) and \(H^{\pm}\) are physical fields while \(G_{0}\) and \(G^{\pm}\) are Goldstone fields. By substituting the field components of \(H_{i}\) into Eq. (2), and rewriting them in terms of the light-cone
coordinates, we have
\[\begin{pmatrix}K_{+}\\ K_{1}\\ K_{2}\\ K_{-}\end{pmatrix}=\frac{v^{2}}{2}\begin{pmatrix}1\\ 0\\ 0\\ 0\end{pmatrix}+v\begin{pmatrix}\phi\\ R\\ I\\ 0\end{pmatrix}+\begin{pmatrix}\frac{\phi^{2}}{2}+\frac{G_{0}^{2}}{2}+G^{+}G^{-} \\ \phi R+IG_{0}+G^{+}H^{-}+G^{-}H^{+}\\ \phi I-RG_{0}+i(G^{+}H^{-}-G^{-}H^{+})\\ \frac{I^{2}}{2}+\frac{R^{2}}{2}+H^{-}H^{+}\end{pmatrix}. \tag{54}\]
The charged Higgs boson mass is given by
\[m_{H^{\pm}}^{2}=\left.\frac{\partial V}{\partial H^{-}H^{+}}\right|_{\rm vev}= \left.\frac{\partial V}{\partial K_{-}}\right|_{K_{v}}\left.\frac{\partial K_{ -}}{\partial H^{-}H^{+}}\right|_{\rm vev}=\left.\frac{\partial V}{\partial K_{ -}}\right|_{K_{v}}. \tag{55}\]
As for the neutral physical scalars \(\phi,R\) and \(I\), their mass matrix is calculated by expanding the potential in the field space as follows,
\[\delta V=\left(\delta\phi,\delta R,\delta I\right)\,\mathbf{M}_{\rm neutral}^ {2}\,\,\begin{pmatrix}\delta\phi\\ \delta R\\ \delta I\end{pmatrix}, \tag{56}\]
where \(\delta\phi,\delta R\) and \(\delta I\) are small expansions of the fields around the vacuum. Equation (54) shows that the three directions \((K_{+},\vec{K}_{\rm T})\), which span the tangent space of \(LC^{+}\) and \(\mathcal{M}_{\rm vev}\), are linearly related to the three neutral scalar fields \((\phi,R,I)\) around the vacuum. The linear relationship between field space and orbit space directly links the scalar mass matrix and the distance matrix between \(LC^{+}\) and \(\mathcal{M}_{\rm vev}\). By combining Eq. (56) with Eqs. (50) and (54), we obtain
\[\mathbf{M}_{\rm neutral}^{2}=v^{2}\begin{pmatrix}\frac{\partial^{2}V_{\rm u}} {\partial K_{+}^{2}}&\frac{\partial^{2}V_{\rm u}}{\partial K_{+}\partial\vec {K}_{\rm T}}\\ \frac{\partial^{2}V_{\rm u}}{\partial K_{+}\partial\vec{K}_{\rm T}}&\frac{ \partial^{2}V_{\rm u}}{\partial\vec{K}_{\rm T}^{2}}\end{pmatrix}=v^{2}\frac{ \partial V}{\partial K_{-}}\mathbf{M}_{\rm dist}^{2}. \tag{57}\]
Therefore, the neutral mass matrix is simply proportional to the distance matrix between the two hyper-surfaces \(LC^{+}\) and \(\mathcal{M}_{\rm vev}\).
The experimentally preferred Higgs alignment limit can be read off from Eq. (57) directly. In the alignment limit, the neutral scalar \(\phi\) in Eq. (53) corresponds to the SM-like Higgs boson, and all of its properties are very close to those of the SM Higgs boson, including mass, gauge couplings, Yukawa couplings, and CP property. Technically, the alignment limit is reached when the neutral scalar \(\phi\) in Eq. (53) is approximately the 125 GeV mass eigenstate and does not mix with the other neutral scalars; therefore, we obtain the following relations from Eq. (57),
\[\left.\frac{\partial^{2}V_{u}}{\partial K_{+}\partial\vec{K}_{\rm T}}\right|_{ K_{v}}=\left.\frac{\partial^{2}V}{\partial K_{+}\partial\vec{K}_{\rm T}} \right|_{K_{v}}\approx 0, \tag{58}\]
where \(K_{+}\) and \(\vec{K}_{\rm T}\) are the light-cone coordinates in the orbit space. At tree level, this condition straightforwardly yields \(\vec{\eta}_{\rm T+}\approx 0\).
Another demonstration is the ultra-light CP-odd particle, also known as an axion-like particle (ALP). The ALP is of widespread interest for its rich phenomenology, and the 2HDM is a simple model that can provide an ALP. From the geometric relations in the orbit space, a massless scalar appears when the two hyper-surfaces \(LC^{+}\) and \({\cal M}_{\rm vev}\) osculate at \(K_{v}\) along a certain direction. There are two possibilities in the 2HDM to produce an ALP naturally, due to symmetries rather than accidental parameter choices. One possibility is a 2HDM potential with an approximate \(U(1)_{a}\) symmetry. An exact \(U(1)_{a}\) symmetry in the 2HDM potential results in an additional Goldstone boson, and the Goldstone boson develops a small mass if the \(U(1)_{a}\) symmetry is slightly broken, as shown in Fig. 4(a). In this case, the ALP is a pseudo-Goldstone boson, as in the Dine-Fischler-Srednicki-Zhitnitsky axion model [33; 34]. Another possibility is a 2HDM potential with a CP symmetry that is spontaneously broken. When the SCPV phase \(\delta\) is very small, the two degenerate vacua tend to merge, and the two hyper-surfaces \(LC^{+}\) and \({\cal M}_{\rm vev}\) tend to osculate with each other at \(K_{v}\), as shown in Fig. 4(b); therefore, a massless boson appears when the SCPV phase \(\delta\) goes to zero [35]. In this case, the ALP is not a pseudo-Goldstone boson.
Figure 4: A two dimensional slice of Fig. 3 with \(K_{0}=K_{v,0}\), viewed from the \(K_{0}\) direction. The symbol \(\odot\) denotes the \(K_{0}\) axis. The yellow line denotes \(LC^{+}\) and the green denotes \({\cal M}_{\rm vev}\). There are two scenarios with an ultra-light scalar: (a) potential with a slightly broken \(U(1)_{a}\) symmetry; (b) the SCPV potential with a small CP phase \(\delta\).
## V On-shell renormalization in the orbit space
The masses and mixing angles of physical states derived from the one-loop CW potential in the \(\overline{\text{MS}}\) renormalization scheme differ from their tree-level values. To directly use the loop-corrected masses and mixing angles as inputs, the on-shell renormalization scheme is often preferred. This is achieved by adding the counterterm potential \(V_{\text{CT}}\) to the zero temperature effective potential
\[V_{\text{eff}}=V_{\text{tree}}+V_{\text{CW}}+V_{\text{CT}}, \tag{59}\]
and then enforcing the loop-corrected vacuum and masses to be the same as the tree-level values. Consequently, the renormalization conditions in the field space are given by
\[\partial_{\varphi_{a}}(V_{\text{CT}}+V_{\text{CW}})\big{|}_{ \varphi_{a}=(\varphi_{a})_{\text{tree}}}=0, \tag{60}\] \[\partial_{\varphi_{a}}\partial_{\varphi_{b}}(V_{\text{CT}}+V_{ \text{CW}})\big{|}_{\varphi_{a}=(\varphi_{a})_{\text{tree}}}=0, \tag{61}\]
where \(\varphi_{a}\) (\(a=1\cdots 8\)) denote the eight scalar field components in the two Higgs doublets.
However, most of the renormalization conditions are redundant due to unphysical fields and quite a few identities, and it is convenient to deal with the renormalization conditions in the orbit space.3 To achieve this, we express the counterterm potential in the bilinear notation as \(V_{\text{CT}}=\delta\xi_{\mu}\,K^{\mu}+\delta\eta_{\mu\nu}\,K^{\mu}K^{\nu}\). Based on the vacuum conditions in Eq. (49) and the scalar masses given in Eqs. (55) and (57), we obtain ten independent renormalization conditions that are related to the physical fields as follows:
Footnote 3: A detailed analysis of the number of renormalization conditions in the field space and their equivalence with the conditions in the orbit space is presented in Appendix C.
\[0 =\partial_{K_{+}}(V_{\text{CT}}+V_{\text{CW}})\big{|}_{K_{v}}, \tag{62}\] \[0 =\partial_{\widetilde{K}_{T}}(V_{\text{CT}}+V_{\text{CW}})\big{|} _{K_{v}},\] (63) \[0 =\partial_{K_{-}}(V_{\text{CT}}+V_{\text{CW}})\big{|}_{K_{v}},\] (64) \[0 =\partial_{K_{+}}^{2}(V_{\text{CT}}+V_{\text{CW}})\big{|}_{K_{v}},\] (65) \[0 =\partial_{\widetilde{K}_{T}}^{2}(V_{\text{CT}}+V_{\text{CW}}) \big{|}_{K_{v}},\] (66) \[0 =\partial_{K_{+}}\partial_{\widetilde{K}_{T}}(V_{\text{CT}}+V_{ \text{CW}})\big{|}_{K_{v}}. \tag{67}\]
Here the light-cone coordinates are defined still by the tree-level vacuum \(K_{v}\), and the derivatives are evaluated around \(K_{v}\). Note that only part of the first and second derivatives
\(\left.\partial_{K^{\mu}}(V_{\rm CT}+V_{\rm CW})\right|_{K_{v}}\) and \(\left.\partial_{K^{\mu}K^{\nu}}(V_{\rm CT}+V_{\rm CW})\right|_{K_{v}}\) are related to the vacuum conditions and scalar masses and should be included in the renormalization conditions, while the others are irrelevant to physical quantities. Specifically, four conditions from the first derivative in Eqs. (62)-(64) ensure that the loop-corrected vacuum expectation value is the same as the tree-level case, and Eq. (64) also ensures that the charged scalar mass is the same as the tree-level value. The other six conditions involving the second derivatives in Eqs. (65)-(67) ensure that the neutral scalar masses and mixing angles are the same as those of the tree-level potential.
The counterterms \(\delta\xi_{\mu}\) and \(\delta\eta_{\mu\nu}\) can be determined from the renormalization conditions in Eqs. (62)-(67). For a general 2HDM without any constraints on the parameters, there are fourteen free parameters, four in \(\delta\xi_{\mu}\) and ten in \(\delta\eta_{\mu\nu}\), to be determined by the renormalization conditions. After expressing \(\delta\xi_{\mu}\) and \(\delta\eta_{\mu\nu}\) in terms of the light-cone coordinates, the renormalization conditions are
\[\delta\eta_{++} =-\partial_{K_{+}}^{2}V_{\rm CW}\big{|}_{K_{v}}, \tag{68}\] \[\delta\vec{\eta}_{T+} =-\partial_{\vec{K}_{T}}\partial_{\vec{K}_{+}}V_{\rm CW}\big{|}_ {K_{v}},\] (69) \[\delta\eta_{TT} =-\partial_{\vec{K}_{T}}^{2}V_{\rm CW}\big{|}_{K_{v}},\] (70) \[\delta\xi_{+} =-\partial_{K_{+}}V_{\rm CW}\big{|}_{K_{v}}-v^{2}\delta\eta_{++},\] (71) \[\delta\vec{\xi}_{T} =-\partial_{\vec{K}_{T}}V_{\rm CW}\big{|}_{K_{v}}-v^{2}\delta \vec{\eta}_{T+},\] (72) \[\delta\xi_{-} =-\partial_{K_{-}}V_{\rm CW}\big{|}_{K_{v}}-v^{2}\delta\eta_{+-}. \tag{73}\]
Note that neither the vacuum condition nor the scalar mass matrix depends on the counterterms \(\delta\eta_{--}\), \(\delta\eta_{+-}\) and \(\delta\vec{\eta}_{T-}\), therefore, these four parameters are up to free choices.
In addition, our convention is to set the tadpole terms to zero whenever possible. Generally, one can allow the development of vacuum in the field space and introduce the tadpole terms in \(V_{\rm CT}\) as done in Refs. [18; 36]. However, for the most general 2HDM potential, there will be more parameters than renormalization conditions and we can always set the tadpole terms to zero. Tadpole terms may be necessary if we require the counterterms to satisfy some specific constraints such that the remaining parameters cannot satisfy the renormalization conditions.
For the 2HDM with some specific parameter constraints required by symmetries or alignment, it is a common practice to demand that the counterterms \(\delta\xi_{\mu}\) and \(\delta\eta_{\mu\nu}\) satisfy the same
constraints as the tree-level parameters \(\xi_{\mu}\) and \(\eta_{\mu\nu}\). Then the number of parameters in \(\delta\xi_{\mu}\) and \(\delta\eta_{\mu\nu}\) is less than fourteen as in the general 2HDM, and the renormalization conditions need to be dealt with case-by-case. For illustration, we discuss the renormalization conditions used in three 2HDMs below.
Softly broken \(Z_{2}\) symmetric potential.Imposing a softly broken \(Z_{2}\) symmetry on the 2HDM Lagrangian is the most popular way to prevent flavor-changing neutral interactions. For a complex 2HDM with softly broken \(Z_{2}\) symmetry, the \(Z_{2}\) symmetry gives four additional constraints on \(\delta\eta_{\mu\nu}\), and the remaining six counterterms can be fixed by the six conditions in Eqs. (68)-(70). The soft quadratic couplings are not constrained, and four parameters in \(\delta\xi^{\mu}\) can be fixed by the four conditions in Eqs. (71)-(73).
Real 2HDM with softly broken \(Z_{2}\) symmetry.In addition to the softly broken \(Z_{2}\) symmetry, a CP symmetry is often imposed on the potential. The tree-level potential is invariant under a mirror reflection \(\bar{R}\) in the orbit space, \(V_{\rm tree}(K_{0},\vec{K})=V_{\rm tree}(K_{0},\bar{R}\vec{K})\). Note that the CP symmetry does not impose any additional constraint on the quartic counterterms \(\delta\eta^{\mu\nu}\), as the \(Z_{2}\) symmetry provides stronger constraints than the CP symmetry. On the other hand, the softly broken terms are constrained by the CP symmetry. Say that the mirror reflection is along the second direction \(\bar{R}:K_{2}\rightarrow-K_{2}\), then \(\delta\xi_{2}\) should be set to zero, leaving three free parameters in \(\delta\xi^{\mu}\).
Usually, the three parameters in \(\delta\xi^{\mu}\) are not enough to satisfy the four equations in Eqs. (71)-(73). But when the vacuum is invariant under the CP transformation, e.g., \(K_{v}^{\mu}=\frac{v^{2}}{2}(1,0,0,1)\) and \(\vec{K}_{v}=\bar{R}\vec{K}_{v}\), there are only three independent conditions in Eqs. (71)-(73), because the CW potential satisfies the CP symmetry, \(V_{\rm CW}(K_{0},\vec{K})=V_{\rm CW}(K_{0},\bar{R}\vec{K})\), and we have
\[\partial_{K_{2}}V_{\rm CW}\big{|}_{K_{v}}=0,\quad\partial_{K_{2}} \partial_{K^{\mu}}V_{\rm CW}\big{|}_{K_{v}}=0. \tag{74}\]
Then one renormalization condition \(\delta\xi_{2}=-\partial_{K_{2}}V_{\rm CW}-v^{2}\delta\vec{\eta}_{2+}=0\) automatically holds from Eq. (72), and we end up with three parameters and three conditions.
However, if the vacuum develops an SCPV phase \(\delta\), the CP symmetry is broken spontaneously. The vacuum \(\vec{K}_{v}\) is no longer invariant under the CP transformation, e.g., \(K_{v}^{\mu}=\frac{v^{2}}{2}(1,0,\sin\delta,\cos\delta)\) and \(\vec{K}_{v}\neq\bar{R}\vec{K}_{v}\). As a result, Eqs. (74) no longer hold. The remaining three parameters in \(\delta\xi^{\mu}\) are not enough to satisfy the renormalization conditions if we still require the counterterm \(\delta\xi_{2}=0\). The remaining renormalization condition, which is
equivalent to \(\partial_{\delta}(V_{\rm CW}+V_{\rm CT})=0\), cannot be fulfilled, and this corresponds to a change of the SCPV phase \(\delta\). It could be fixed with a tadpole counterterm of the CP-violating vacuum.
Exactly aligned 2HDM.In the 2HDM, the exact alignment condition requires that the neutral scalar \(\phi\) in Eq. (53) is the 125 GeV mass eigenstate; the tree-level parameters then satisfy \(\vec{\eta}_{T+}=0\), as shown in Eq. (58). However, the alignment condition is not protected by any symmetry, and there is no guarantee that the counterterms \(\delta\vec{\eta}_{T+}=-\partial_{\vec{K}_{T}}\partial_{K_{+}}V_{\rm CW}\) vanish. Therefore, the alignment condition is usually broken by quantum corrections.
## VI Conclusion and discussion
We performed a complete analysis of the CP and basis transformation symmetries of the 2HDM in the orbit space. We extended the study of the global symmetries in the orbit space to the one-loop thermal effective potential. We demonstrated that the global symmetries of the tree-level potential are preserved by the quantum corrections from boson loops, but may be broken by fermion loop contributions, depending on the Yukawa interactions.
In order to study the vacuum conditions and physical masses in the orbit space, we introduced the light-cone coordinates and generalized the bilinear notation to study the physical scalar fields around the vacuum. It provides a geometric view of the scalar mass matrix and on-shell renormalization conditions. By translating the on-shell renormalization conditions of the vacuum and scalar mass into geometric conditions in the orbit space, we calculated the renormalized one-loop effective potential completely.
We also extended our study to the case after the EWSB. The geometrical view of the scalar masses can provide insight into special limits of the 2HDM mass spectrum, such as the alignment limit and ultra-light scalars, thereby simplifying the analysis. The renormalization conditions are much simpler to deal with in the orbit space, and there are at most ten independent on-shell renormalization conditions for a general 2HDM potential. Our work provides a foundation for future studies of the 2HDM effective potential and its implications in the orbit space.
###### Acknowledgements.
The work is supported in part by the National Science Foundation of China under Grants No. 11725520, No. 11675002 and No. 12235001.
## Appendix A Basis invariant notations of 2HDM potential
### Explicit expression of bilinear notation
The explicit expression for each component of \(K^{\mu}\) is
\[K^{\mu}=\Phi_{i}^{\dagger}\sigma_{ij}^{\mu}\Phi_{j}=\left(\begin{array}{c} \Phi_{1}^{\dagger}\Phi_{1}+\Phi_{2}^{\dagger}\Phi_{2}\\ \Phi_{1}^{\dagger}\Phi_{2}+\Phi_{2}^{\dagger}\Phi_{1}\\ i(\Phi_{2}^{\dagger}\Phi_{1}-\Phi_{1}^{\dagger}\Phi_{2})\\ \Phi_{1}^{\dagger}\Phi_{1}-\Phi_{2}^{\dagger}\Phi_{2}\end{array}\right). \tag{A1}\]
By comparing the potential in the bilinear notation (Eq. 9) with the traditional notation (Eq. 1), we can explicitly relate these two sets of parameters,
\[\xi_{0} \equiv \frac{1}{2}(m_{11}^{2}+m_{22}^{2}),\qquad\eta_{00}=(\lambda_{1}+\lambda_{2}+2\lambda_{3})/8,\] \[\vec{\xi} = \left(-\Re\left(m_{12}^{2}\right),\Im\left(m_{12}^{2}\right),\frac{1}{2}(m_{11}^{2}-m_{22}^{2})\right)^{T},\] \[\vec{\eta} = \left(\Re(\lambda_{6}+\lambda_{7})/4,-\Im(\lambda_{6}+\lambda_{7})/4,(\lambda_{1}-\lambda_{2})/8\right)^{T},\] \[E = \frac{1}{4}\left(\begin{array}{ccc}\lambda_{4}+\Re\left(\lambda_{5}\right)&-\Im\left(\lambda_{5}\right)&\Re\left(\lambda_{6}-\lambda_{7}\right)\\ -\Im\left(\lambda_{5}\right)&\lambda_{4}-\Re\left(\lambda_{5}\right)&\Im\left(\lambda_{7}-\lambda_{6}\right)\\ \Re\left(\lambda_{6}-\lambda_{7}\right)&\Im\left(\lambda_{7}-\lambda_{6}\right)&\left(\lambda_{1}+\lambda_{2}-2\lambda_{3}\right)/2\end{array}\right). \tag{A2}\]
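For reference, the dictionary in Eq. (A2) is easy to automate; a minimal helper (NumPy, with parameter names chosen to mirror the conventional couplings of Eq. (1)):

```python
import numpy as np

def bilinear_parameters(m11sq, m22sq, m12sq, l1, l2, l3, l4, l5, l6, l7):
    """Map the conventional 2HDM couplings to the orbit-space parameters
    (xi_0, xi_vec, eta_00, eta_vec, E) following Eq. (A2).
    m12sq, l5, l6, l7 may be complex; the remaining couplings are real."""
    xi0   = 0.5 * (m11sq + m22sq)
    xivec = np.array([-np.real(m12sq), np.imag(m12sq), 0.5 * (m11sq - m22sq)])
    eta00 = (l1 + l2 + 2.0 * l3) / 8.0
    etavec = np.array([np.real(l6 + l7) / 4.0,
                       -np.imag(l6 + l7) / 4.0,
                       (l1 - l2) / 8.0])
    E = 0.25 * np.array([
        [l4 + np.real(l5), -np.imag(l5),       np.real(l6 - l7)],
        [-np.imag(l5),      l4 - np.real(l5),  np.imag(l7 - l6)],
        [np.real(l6 - l7),  np.imag(l7 - l6), (l1 + l2 - 2.0*l3)/2.0]])
    return xi0, xivec, eta00, etavec, E
```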
In the 4-dimensional orbit space, the physical region is confined to the interior of the forward light-cone, i.e., \(K_{0}\geqslant|\vec{K}|\). This is because \(K^{\mu}\) can be assembled from the matrix \(\underline{K}_{ij}=\Phi_{i}^{\dagger}\Phi_{j}\), defined by
\[\underline{K}\equiv\begin{pmatrix}\Phi_{1}^{\dagger}\Phi_{1}&\Phi_{2}^{\dagger}\Phi_{1}\\ \Phi_{1}^{\dagger}\Phi_{2}&\Phi_{2}^{\dagger}\Phi_{2}\end{pmatrix}\equiv\frac{1}{2}\begin{pmatrix}K_{0}+K_{3}&K_{1}-iK_{2}\\ K_{1}+iK_{2}&K_{0}-K_{3}\end{pmatrix}, \tag{A3}\]
and the matrix \(\underline{K}\) is positive semi-definite when \(\Phi_{i}=(\phi_{i\uparrow},\phi_{i\downarrow})^{T}\) are \(SU(2)_{L}\) doublets,
\[\underline{K}=\underline{\phi}\,\underline{\phi}^{\dagger},\quad\underline{\phi}=\begin{pmatrix}\phi_{1\uparrow}&\phi_{1\downarrow}\\ \phi_{2\uparrow}&\phi_{2\downarrow}\end{pmatrix}, \tag{A4}\]
which directly leads to
\[\left\{\begin{aligned} \operatorname{tr}\underline{K}&=K_{0}\geqslant 0,\\ \det\underline{K}&=(K_{0}^{2}-|\vec{K}|^{2})/4\geqslant 0.\end{aligned}\right. \tag{A5}\]
Therefore in the bilinear notation, the tree-level 2HDM scalar potential is a real quadratic function of \((K_{0},\vec{K})\), and the physical region is defined inside the forward light-cone.
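Both the explicit components of Eq. (A1) and the light-cone property of Eq. (A5) can be verified numerically for random doublet values; a small sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
phi1 = rng.normal(size=2) + 1j * rng.normal(size=2)   # random SU(2)_L doublet values
phi2 = rng.normal(size=2) + 1j * rng.normal(size=2)

z  = np.vdot(phi1, phi2)                               # Phi_1^dagger Phi_2
K0 = np.vdot(phi1, phi1).real + np.vdot(phi2, phi2).real
K  = np.array([2.0 * z.real,                           # K_1
               2.0 * z.imag,                           # K_2 = i(Phi_2^dag Phi_1 - Phi_1^dag Phi_2)
               np.vdot(phi1, phi1).real - np.vdot(phi2, phi2).real])   # K_3

print(K0 >= np.linalg.norm(K))                         # expected: True (forward light-cone)
print(np.isclose(K0**2 - K @ K,                        # 4 det K_underline = K_0^2 - |K|^2 >= 0
                 4.0 * (np.vdot(phi1, phi1).real * np.vdot(phi2, phi2).real - abs(z)**2)))
```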
### \(Z_{2}\) symmetry in the bilinear notation
The \(Z_{2}\) symmetry is imposed on the 2HDM by assigning \(Z_{2}\) charges to scalar and fermion fields. In Eq. (1), the two Higgs doublets \(\Phi_{1}\) and \(\Phi_{2}\) carry the \(Z_{2}\) charges of \(-1\) and \(+1\) respectively, forbidding the \((\Phi_{1}^{\dagger}\Phi_{1})(\Phi_{1}^{\dagger}\Phi_{2})\) and \((\Phi_{2}^{\dagger}\Phi_{2})(\Phi_{1}^{\dagger}\Phi_{2})\) terms in the potential.
As for the Yukawa interactions, fermions are also assigned negative or positive \(Z_{2}\) charges and are then forced to interact with only \(\Phi_{1}\) or only \(\Phi_{2}\). Usually, the patterns of \(Z_{2}\) charge assignments are divided into four types [37; 38; 39; 40; 41; 42]: Type I, Type II, Type X and Type Y, as listed in Table 2. For fermions with different \(Z_{2}\) charges, the vectors \(\vec{Y}\) projected from their Yukawa couplings are opposite to each other. For example, in the orbit space of the \(Z_{2}\) eigenbasis \((\Phi_{1},\Phi_{2})\), the Yukawa coupling of a fermion with positive \(Z_{2}\) charge yields \(Y^{\mu}\propto(1,0,0,-1)\), and the Yukawa coupling of a fermion with negative \(Z_{2}\) charge yields \(Y^{\mu}\propto(1,0,0,1)\).
### Tensor notation
For the completeness of this paper, here we review another basis-invariant notation used to analyze the 2HDM potential, the tensor notation [2; 3; 4]. It is straightforward to express the 2HDM scalar potential in a \(U(2)_{\Phi}\) basis-invariant form,
\[V=\mu_{ij}\Phi_{i}^{\dagger}\Phi_{j}+\lambda_{ij,kl}(\Phi_{i}^{\dagger}\Phi_{j})( \Phi_{k}^{\dagger}\Phi_{l}). \tag{10}\]
As a result \(\mu_{ij}\) and \(\lambda_{ij,kl}\) transform covariantly with \(\Phi_{i}\) under the \(U(2)_{\Phi}\) basis transformation,
\[\mu_{ij}^{\prime}=U_{ik}\mu_{kl}U_{jl}^{*},\quad\lambda_{ij,kl}^{\prime}=U_{ip} U_{kr}\lambda_{pq,rs}U_{jq}^{*}U_{ls}^{*}. \tag{11}\]
By definition, \(\lambda_{ij,kl}=\lambda_{kl,ij}\), and hermiticity requires that \(\mu_{ij}=\mu_{ji}^{*},\ \lambda_{kl,ij}=\lambda_{lk,ji}^{*}\). Under the basis of Eq. (1), we have the following relations explicitly,
\[\begin{split}\mu_{11}=m_{11}^{2},&\quad\mu_{22}=m_{ 22}^{2},\\ \mu_{12}=-m_{12}^{2},&\quad\mu_{21}=-{m_{12}^{2}}^{* }\\ \lambda_{11,11}=\lambda_{1},&\quad\lambda_{22,22}= \lambda_{2},\\ \lambda_{11,22}=\lambda_{22,11}=\lambda_{3},&\quad \lambda_{12,21}=\lambda_{21,12}=\lambda_{4},\\ \lambda_{12,12}=\lambda_{5},&\quad\lambda_{21,21}= \lambda_{5}^{*},\\ \lambda_{11,12}=\lambda_{12,11}=\lambda_{6},&\quad \lambda_{11,21}=\lambda_{21,11}=\lambda_{6}^{*},\\ \lambda_{22,12}=\lambda_{12,22}=\lambda_{7},&\quad \lambda_{22,21}=\lambda_{21,22}=\lambda_{7}^{*}.\end{split} \tag{12}\]
The potential is invariant under the GCP symmetry Eq. (4) when \(\mu_{ij}\) and \(\lambda_{ij,kl}\) satisfy
\[\mu_{ij}=X_{ik}\mu_{kl}^{*}X_{lj}^{*},\quad\lambda_{ij,kl}=X_{im}X_{kn}\lambda _{mp,nq}^{*}X_{jp}^{*}X_{lq}^{*}. \tag{13}\]
One can construct several CP invariants to determine whether a potential is GCP invariant [2]. Similar to the Jarlskog invariant [43], an \(SU(3)_{L/R}\) invariant in quark family space, the CP invariants of the 2HDM scalar potential are constructed from tensor products of \(\mu_{ij}\) and \(\lambda_{ij,kl}\) as \(U(2)_{\Phi}\) invariants in the scalar family space. The tensor notation can also be used to construct CP invariants for scalar-fermion interactions after extending the tensor structures to fermion family space [2]. In addition, a recent development in the tensor notation is the use of the Hilbert series to systematically construct all possible CP invariants [5], and similar procedures can also be used to construct CP invariants in the lepton sector with Majorana terms [44].
## Appendix B Effective potential from Scalar Loop Contribution
Here we show the calculation of the effective potential from scalar loop contribution in detail. We employ the notations in Ref. [30] to link the eight scalar fields \(\varphi_{i}\) with the bilinear
forms \(K^{\mu}\),
\[\mathcal{L}=\Omega_{\mu}\left(\partial_{\alpha}\Phi_{\imath}\right)^ {\dagger}\sigma^{\mu}_{ij}\left(\partial^{\alpha}\Phi_{j}\right)-V,\quad\Omega^{2 }=1,\] \[V_{\rm tree}=\xi_{\mu}K^{\mu}+\eta_{\mu\nu}K^{\mu}K^{\nu},\] \[\varphi_{a}=\left(\mathop{\rm Re}\phi_{1,\uparrow},\mathop{\rm Im }\phi_{1,\uparrow},\mathop{\rm Re}\phi_{2,\uparrow},\mathop{\rm Im}\phi_{2, \uparrow},\mathop{\rm Re}\phi_{1,\downarrow},\mathop{\rm Im}\phi_{1, \downarrow},\mathop{\rm Re}\phi_{2,\downarrow},\mathop{\rm Im}\phi_{2, \downarrow}\right),\] (B1) \[K^{\mu}=\varphi_{a}\Sigma^{\mu}_{ab}\varphi_{b},\] \[(\Omega_{\rho}\Sigma^{\rho})^{-1}=\Omega_{\rho}\bar{\Sigma}^{\rho}.\]
Note that \(\Omega_{\mu}=(1,0,0,0)\) for the canonical kinetic term. The matrix \(\bar{\Sigma}^{\mu}=(\Sigma^{0},-\Sigma^{i})\) and the \(8\times 8\) symmetric matrices \(\Sigma^{\mu}\) defined in Eq. (28) are
\[\Sigma^{\mu}=\Sigma^{\mu}_{4}\oplus\Sigma^{\mu}_{4},\quad\Sigma^{0}_{4}= \mathbb{1}_{4},\quad\Sigma^{1}_{4}=\begin{pmatrix}0&\mathbb{1}_{2}\\ \mathbb{1}_{2}&0\end{pmatrix},\quad\Sigma^{2}_{4}=\begin{pmatrix}0&\mathbb{1}_ {2}\\ -\mathbb{1}_{2}&0\end{pmatrix},\quad\Sigma^{3}_{4}=\begin{pmatrix}\mathbb{1}_{2 }&0\\ 0&-\mathbb{1}_{2}\end{pmatrix},\] (B2)
where \(\mathbb{1}_{d}\) is the \(d\times d\) identity matrix and \(\mathbb{1}_{2}\equiv\left(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right)\). Because \((\mathbb{1}_{2})^{2}=-\mathbb{1}_{2}\), the matrix \(\Sigma^{\mu}\) share the same algebra with the pauli matrix \(\sigma^{\mu}\), e.g.,
\[[\Sigma^{i},\Sigma^{j}]=2\mathbb{1}_{8}\epsilon^{ijk}\Sigma^{k}, \quad(\vec{w}\cdot\vec{\Sigma})^{2}=|\vec{w}|^{2}\mathbb{1}_{8},\] (B3) \[\frac{1}{2}(\bar{\Sigma}^{\mu}\Sigma^{\nu}+\bar{\Sigma}^{\nu} \Sigma^{\mu})=g^{\mu\nu}\mathbb{1}_{8},\] (B4) \[\Sigma^{\mu}\bar{\Sigma}^{\rho}\Sigma^{\nu}=g^{\mu\rho}\Sigma^{ \nu}+g^{\rho\nu}\Sigma^{\mu}-g^{\mu\nu}\Sigma^{\rho}+\mathbb{1}_{8}\epsilon^ {\mu\rho\nu}_{\ \ \ \lambda}\Sigma^{\lambda},\] (B5) \[\bar{\Sigma}^{\mu}\Sigma^{\rho}\bar{\Sigma}^{\nu}=g^{\mu\rho}\bar {\Sigma}^{\nu}+g^{\rho\nu}\bar{\Sigma}^{\mu}-g^{\mu\nu}\bar{\Sigma}^{\rho}- \mathbb{1}_{8}\epsilon^{\mu\rho\nu}_{\ \ \ \lambda}\bar{\Sigma}^{\lambda}.\] (B6)
Here \(\mathbb{1}_{8}\equiv\mathbb{1}_{4}\otimes\mathbb{1}_{2}\) is an anti-symmetric matrix which commutes with \(\Sigma^{\mu}\) and satisfies \((\mathbb{1}_{8})^{2}=-\mathbb{1}_{8}\), and \(\vec{w}\) is an arbitrary vector. These identities help to translate expressions in \(\varphi_{a}\) into bilinear forms. For example,
\[\varphi\mathbb{1}_{8}\Sigma^{\mu}\varphi=0,\quad\varphi\Sigma^{\mu}\bar{ \Sigma}^{\rho}\Sigma^{\nu}\varphi=g^{\mu\rho}K^{\nu}+g^{\rho\nu}K^{\mu}-g^{\mu \nu}K^{\rho}.\] (B7)
Then we evaluate the second derivative of \(\mathcal{L}\)
\[-\frac{\delta^{2}\mathcal{L}}{\delta\varphi_{a}\delta\varphi_{b}}=\Omega_{\rho }\Sigma^{\rho}_{ab}\partial^{2}+\xi_{\mu}\Sigma^{\mu}_{ab}+2\eta_{\mu\nu}( \varphi_{c}\Sigma^{\mu}_{cd}\varphi_{d})\Sigma^{\nu}_{ab}+4\eta_{\mu\nu} \Sigma^{\mu}_{ac}(\varphi_{c}\varphi_{d})\Sigma^{\nu}_{db}.\] (B8)
In the following, we work in the frame with the canonical kinetic term with \(\Omega_{\mu}=(1,0,0,0)\), and the scalar mass matrix is
\[\mathbf{M}^{2}_{S}(\varphi)_{ab}=A_{ab}+B_{ab},\] \[A_{ab}=A_{\mu}\Sigma^{\mu}_{ab},\quad\,A_{\mu}=2\eta_{\mu\nu}K^{ \nu}+\xi_{\mu},\] (B9) \[B_{ab}=4\eta_{\mu\nu}\Sigma^{\mu}_{ac}\varphi_{c}\varphi_{d} \Sigma^{\nu}_{db}.\]
To deal with \(\mathbf{Tr}(\mathbf{M}_{S}^{2n})\) in Eq. (29), we expand the binomial
\[\mathbf{Tr}[(A_{ab}+B_{ab})^{n}]=\sum_{l=0}^{n}\sum_{\{p_{i}\}}^{\sum p_{i}=n-l}N_{s}(\{p_{i}\})\mathbf{Tr}(A^{p_{1}}BA^{p_{2}}B\cdots A^{p_{l}}B). \tag{B10}\]
We then need to evaluate \((A_{\mu}\Sigma^{\mu})^{p}\). Using the identities in Eq. (B3),
\[(A_{\mu}\Sigma^{\mu})^{p} =(A_{0}\mathbb{1}_{8}+\vec{A}\cdot\vec{\Sigma})^{p},\] \[=\sum_{k=0}^{p}C_{p}^{k}(A_{0})^{p-k}(\vec{A}\cdot\vec{\Sigma})^{k},\] \[=\sum_{k=0}^{p/2}C_{p}^{2k}(A_{0})^{p-2k}|\vec{A}|^{2k}\mathbb{1}_{8}+\sum_{k=0}^{(p-1)/2}C_{p}^{2k+1}(A_{0})^{p-2k-1}|\vec{A}|^{2k}(\vec{A}\cdot\vec{\Sigma}), \tag{B11}\]
where \(C_{p}^{k}\) is the binomial coefficient and
\[A_{0} =2\eta_{00}K_{0}+2\vec{\eta}\cdot\vec{K}+\xi_{0}, \tag{B12}\] \[\vec{A} =2K_{0}\vec{\eta}+2E\vec{K}+\vec{\xi}. \tag{B13}\]
For simplicity, we define a new four-vector \(F(p)_{\mu}\) from \((A_{0},\vec{A})\)
\[F(p)_{0} \equiv\sum_{k=0}^{p/2}C_{p}^{2k}(A_{0})^{p-2k}|\vec{A}|^{2k}, \tag{B14}\] \[\vec{F}(p) \equiv\begin{cases}-\sum_{k=0}^{(p-1)/2}C_{p}^{2k+1}(A_{0})^{p-2k-1}|\vec{A}|^{2k}\vec{A}&(p\neq 0),\\ 0&(p=0),\end{cases} \tag{B15}\]
and we have
\[(A_{\mu}\Sigma^{\mu})^{p}=F(p)_{\mu}\bar{\Sigma}^{\mu}. \tag{B16}\]
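The closed form just obtained can be checked numerically by constructing the \(8\times 8\) matrices of Eq. (B2) explicitly and comparing the matrix power with \(F(p)_{\mu}\bar{\Sigma}^{\mu}\); a short NumPy sketch with an arbitrary (made-up) test vector \(A_{\mu}\):

```python
import numpy as np
from math import comb

# Build the 8x8 matrices Sigma^mu of Eq. (B2).
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])   # the antisymmetric 2x2 matrix denoted 1_2 in Eq. (B2)
I2  = np.eye(2)
Z2  = np.zeros((2, 2))
S4  = [np.eye(4),
       np.block([[Z2, I2], [I2, Z2]]),
       np.block([[Z2, eps], [-eps, Z2]]),
       np.block([[I2, Z2], [Z2, -I2]])]
Sigma    = [np.kron(np.eye(2), s) for s in S4]          # Sigma^mu = S4^mu (+) S4^mu (direct sum)
Sigmabar = [Sigma[0], -Sigma[1], -Sigma[2], -Sigma[3]]

A = np.array([1.3, 0.4, -0.2, 0.7])                     # arbitrary test vector (A_0, A_vec)
A0, Avec = A[0], A[1:]
p = 5
lhs = np.linalg.matrix_power(sum(A[m] * Sigma[m] for m in range(4)), p)

# F(p)_mu from Eqs. (B14)-(B15).
F0 = sum(comb(p, 2*k) * A0**(p - 2*k) * (Avec @ Avec)**k for k in range(p//2 + 1))
Fv = -sum(comb(p, 2*k + 1) * A0**(p - 2*k - 1) * (Avec @ Avec)**k
          for k in range((p - 1)//2 + 1)) * Avec
rhs = F0 * Sigmabar[0] + sum(Fv[a] * Sigmabar[a + 1] for a in range(3))
print(np.allclose(lhs, rhs))   # expected: True
```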
The series in Eq. (10) are then calculated as
\[\mathbf{Tr}(A^{p_{1}}BA^{p_{2}}B\cdots A^{p_{l}}B) =4^{l}\eta_{\mu_{1}\nu_{1}}\cdots\eta_{\mu_{l}\nu_{l}}\prod_{i=1}^{l}F(p_{i})_{\rho_{i}}\varphi\Sigma^{\nu_{i}}\bar{\Sigma}^{\rho_{i}}\Sigma^{\mu_{i+1}}\varphi\] \[=4^{l}\eta_{\mu_{1}\nu_{1}}\cdots\eta_{\mu_{l}\nu_{l}}\prod_{i=1}^{l}S_{p_{i}}^{\nu_{i}\mu_{i+1}}\] \[=4^{l}\mathbf{tr}\left(\eta\cdot S_{p_{1}}\cdots\eta\cdot S_{p_{l}}\right),\]
where \(\mu_{l+1}\equiv\mu_{1}\) and the trace \(\mathbf{tr}\) is taken in the orbit space. The symmetric tensor \(S_{p}^{\mu\nu}=F(p)_{\rho}\varphi\Sigma^{\mu}\bar{\Sigma}^{\rho}\Sigma^{\nu}\varphi =F(p)^{\mu}K^{\nu}+F(p)^{\nu}K^{\mu}-g^{\mu\nu}(F(p)K)\). And the effective potential can be
expressed as 4

\[V_{\rm CW}^{(S)} = \frac{1}{2}\int\frac{d^{4}p_{E}}{(2\pi)^{4}}\left[{\bf Tr}\sum_{n=1}^{\infty}\frac{1}{n}\left(-\frac{{\bf M}_{S}^{2}}{p_{E}^{2}}\right)^{n}\right] \tag{B17}\] \[= \frac{1}{2}\int\frac{d^{4}p_{E}}{(2\pi)^{4}}\sum_{n}(-)^{n}\frac{1}{n(p_{E}^{2})^{n}}\sum_{l=0}^{n}\sum_{\{p_{i}\}}^{\sum p_{i}=n-l}N_{s}(\{p_{i}\}){\bf Tr}(A^{p_{1}}BA^{p_{2}}B\cdots A^{p_{l}}B)\] \[= \frac{1}{2}\int\frac{d^{4}p_{E}}{(2\pi)^{4}}\sum_{n}(-)^{n}\frac{1}{n(p_{E}^{2})^{n}}\sum_{l=0}^{n}\sum_{\{p_{i}\}}^{\sum p_{i}=n-l}N_{s}(\{p_{i}\}){\bf tr}\left(\eta\cdot S_{p_{1}}\cdots\eta\cdot S_{p_{l}}\right)\]

Footnote 4: For simplicity, the \(\ln p_{E}^{2}\) term is dropped here.
In the end, \(V_{\rm CW}^{(S)}\) is expressed as a series defined in the orbit space. It is worth mentioning that the discussion of the CP property is independent of the regularization. When the potential is CP-even, as discussed above, we can apply the CP transformation before and after the regularization and nothing changes. We therefore conclude that the CP property of a CP-conserving tree-level potential is not violated by the scalar loop contribution to the Coleman-Weinberg potential.
## Appendix C Renormalization Conditions
To compare with the renormalization conditions in Ref. [18], we follow their notation, and the fields expanded around the vacuum \(v_{1},v_{2}\) are
\[\Phi_{1}=\frac{1}{\sqrt{2}}\left(\begin{array}{c}\rho_{1}+{\rm i}\eta_{1}\\ v_{1}+\zeta_{1}+{\rm i}\psi_{1}\end{array}\right),\quad\Phi_{2}=\frac{1}{ \sqrt{2}}\left(\begin{array}{c}\rho_{2}+{\rm i}\eta_{2}\\ v_{2}+\zeta_{2}+{\rm i}\psi_{2}\end{array}\right). \tag{18}\]
The renormalization conditions are
\[\left.\partial_{\varphi_{a}}(V_{\rm CT}+V_{\rm CW})\right|_{ \varphi_{a}=\langle\varphi_{a}\rangle_{\rm tree}}=0, \tag{19}\] \[\left.\partial_{\varphi_{a}}\partial_{\varphi_{b}}(V_{\rm CT}+V _{\rm CW})\right|_{\varphi_{a}=\langle\varphi_{a}\rangle_{\rm tree}}=0.\] (20) \[\varphi_{a}\equiv\left\{\rho_{1},\eta_{1},\zeta_{1},\psi_{1}, \rho_{2},\eta_{2},\zeta_{2},\psi_{2}\right\},\ \langle\varphi_{a}\rangle_{\rm tree}=\left\{0,0,v_{1},0,0,0,v_{2},0\right\}.\]
Naively, there are 8+36 renormalization conditions from Eqs. (19) and (20). However, for any function of the form \(f(\Phi_{i}^{\dagger}\Phi_{j})\), its first and second derivatives satisfy certain identities, so that most of the renormalization conditions are redundant.
We have the following 5 identities for the first derivatives,
\[\partial_{\rho_{1}} =0, \tag{104}\] \[\partial_{\rho_{2}} =0,\] (105) \[\partial_{\eta_{1}} =0,\] (106) \[\partial_{\eta_{2}} =0,\] (107) \[c_{\beta}\partial_{\psi_{1}}+s_{\beta}\partial_{\psi_{2}}=0, \tag{108}\]
where \(\partial_{\phi_{i}}=0\) denotes \(\partial_{\phi_{i}}f|_{\phi=\langle\phi\rangle_{\rm{tree}}}=0\) for any function \(f(\Phi_{i}^{\dagger}\Phi_{j})\) and \(\tan\beta=v_{2}/v_{1}\). Therefore, we are left with 3 independent renormalization conditions from the first-derivative conditions,
\[\partial_{\zeta_{1}}(V_{\rm{CT}}+V_{\rm{CW}})=0, \tag{109}\] \[\partial_{\zeta_{2}}(V_{\rm{CT}}+V_{\rm{CW}})=0,\] (110) \[(c_{\beta}\partial_{\psi_{2}}-s_{\beta}\partial_{\psi_{1}})(V_{ \rm{CT}}+V_{\rm{CW}})=0. \tag{111}\]
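To illustrate why the first-derivative identities above hold (a short explanatory example added here), note that every term of the potential is a function of the bilinears \(\Phi_{i}^{\dagger}\Phi_{j}\), and from the parametrization (18) one finds, for instance,

\[\partial_{\rho_{1}}(\Phi_{1}^{\dagger}\Phi_{1})=\rho_{1},\qquad\partial_{\rho_{1}}(\Phi_{1}^{\dagger}\Phi_{2})=\frac{1}{2}\left(\rho_{2}+{\rm i}\eta_{2}\right),\]

both of which vanish at \(\langle\varphi_{a}\rangle_{\rm tree}\). By the chain rule, \(\partial_{\rho_{1}}f|_{\varphi=\langle\varphi\rangle_{\rm tree}}=0\) for any \(f(\Phi_{i}^{\dagger}\Phi_{j})\); the remaining identities follow from similar computations.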
We have the following 26 identities for the second derivatives,
\[(c_{\beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})(c_{ \beta}\partial_{\eta_{1}}+s_{\beta}\partial_{\eta_{2}})=0, \tag{112}\] \[(c_{\beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})(c_{ \beta}\partial_{\zeta_{1}}+s_{\beta}\partial_{\zeta_{2}})=0,\] (113) \[(c_{\beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})(c_{ \beta}\partial_{\psi_{1}}+s_{\beta}\partial_{\psi_{2}})=0,\] (114) \[(c_{\beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})(c_{ \beta}\partial_{\zeta_{2}}-s_{\beta}\partial_{\zeta_{1}})=0,\] (115) \[(c_{\beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})(c_{ \beta}\partial_{\psi_{2}}-s_{\beta}\partial_{\psi_{1}})=0,\] (116) \[(c_{\beta}\partial_{\eta_{1}}+s_{\beta}\partial_{\eta_{2}})(c_{ \beta}\partial_{\zeta_{1}}+s_{\beta}\partial_{\zeta_{2}})=0,\] (117) \[(c_{\beta}\partial_{\eta_{1}}+s_{\beta}\partial_{\eta_{2}})(c_{ \beta}\partial_{\psi_{1}}+s_{\beta}\partial_{\psi_{2}})=0,\] (118) \[(c_{\beta}\partial_{\eta_{1}}+s_{\beta}\partial_{\eta_{2}})(c_{ \beta}\partial_{\zeta_{2}}-s_{\beta}\partial_{\zeta_{1}})=0,\] (119) \[(c_{\beta}\partial_{\eta_{1}}+s_{\beta}\partial_{\eta_{2}})(c_{ \beta}\partial_{\psi_{2}}-s_{\beta}\partial_{\psi_{1}})=0,\] (120) \[(c_{\beta}\partial_{\zeta_{1}}+s_{\beta}\partial_{\zeta_{2}})(c_{ \beta}\partial_{\psi_{1}}+s_{\beta}\partial_{\psi_{2}})=0,\] (121) \[(c_{\beta}\partial_{\zeta_{1}}+s_{\beta}\partial_{\zeta_{2}})(c_{ \beta}\partial_{\rho_{2}}-s_{\beta}\partial_{\rho_{1}})=0,\] (122) \[(c_{\beta}\partial_{\zeta_{1}}+s_{\beta}\partial_{\zeta_{2}})(c_{ \beta}\partial_{\eta_{2}}-s_{\beta}\partial_{\eta_{1}})=0,\] (123) \[(c_{\beta}\partial_{\psi_{1}}+s_{\beta}\partial_{\psi_{2}})(c_{ \beta}\partial_{\rho_{2}}-s_{\beta}\partial_{\rho_{1}})=0,\] (124) \[(c_{\beta}\partial_{\psi_{1}}+s_{\beta}\partial_{\psi_{2}})(c_{ \beta}\partial_{\eta_{2}}-s_{\beta}\partial_{\eta_{1}})=0,\] (125) \[(c_{\beta}\partial_{\rho_{2}}-s_{\beta}\partial_{\rho_{1}})(c_{ \beta}\partial_{\eta_{2}}-s_{\beta}\partial_{\eta_{1}})=0, \tag{126}\]
\[(c_{\beta}\partial_{\rho_{2}}-s_{\beta}\partial_{\rho_{1}})(c_{ \beta}\partial_{\zeta_{2}}-s_{\beta}\partial_{\zeta_{1}})=0, \tag{104}\] \[(c_{\beta}\partial_{\rho_{2}}-s_{\beta}\partial_{\rho_{1}})(c_{ \beta}\partial_{\psi_{2}}-s_{\beta}\partial_{\psi_{1}})=0,\] (105) \[(c_{\beta}\partial_{\eta_{2}}-s_{\beta}\partial_{\eta_{1}})(c_{ \beta}\partial_{\zeta_{2}}-s_{\beta}\partial_{\zeta_{1}})=0,\] (106) \[(c_{\beta}\partial_{\eta_{2}}-s_{\beta}\partial_{\eta_{1}})(c_{ \beta}\partial_{\psi_{2}}-s_{\beta}\partial_{\psi_{1}})=0,\] (107) \[(c_{\beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})(c_{ \beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})=(c_{\beta}\partial_{ \eta_{1}}+s_{\beta}\partial_{\eta_{2}})(c_{\beta}\partial_{\eta_{1}}+s_{\beta }\partial_{\eta_{2}}),\] (108) \[(c_{\beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})(c_{ \beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})=(c_{\beta}\partial_{ \psi_{1}}+s_{\beta}\partial_{\psi_{2}})(c_{\beta}\partial_{\psi_{1}}+s_{\beta }\partial_{\psi_{2}}),\] (109) \[(c_{\beta}\partial_{\rho_{2}}-s_{\beta}\partial_{\rho_{1}})(c_{ \beta}\partial_{\rho_{2}}-s_{\beta}\partial_{\rho_{1}})=(c_{\beta}\partial_{ \eta_{2}}-s_{\beta}\partial_{\eta_{1}})(c_{\beta}\partial_{\eta_{2}}-s_{\beta }\partial_{\eta_{1}}),\] (110) \[(c_{\beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})(c_{ \beta}\partial_{\rho_{2}}-s_{\beta}\partial_{\rho_{1}})=(c_{\beta}\partial_{ \eta_{1}}+s_{\beta}\partial_{\eta_{2}})(c_{\beta}\partial_{\eta_{2}}-s_{\beta }\partial_{\eta_{1}}),\] (111) \[(c_{\beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})(c_{ \beta}\partial_{\rho_{2}}-s_{\beta}\partial_{\rho_{1}})=(c_{\beta}\partial_{ \psi_{1}}+s_{\beta}\partial_{\psi_{2}})(c_{\beta}\partial_{\psi_{2}}-s_{\beta }\partial_{\psi_{1}}),\] (112) \[(c_{\beta}\partial_{\eta_{1}}+s_{\beta}\partial_{\eta_{2}})(c_{ \beta}\partial_{\rho_{2}}-s_{\beta}\partial_{\rho_{1}})=-(c_{\beta}\partial_{ \rho_{1}}+s_{\beta}\partial_{\rho_{2}})(c_{\beta}\partial_{\eta_{2}}-s_{\beta }\partial_{\eta_{1}}),\] (113) \[(c_{\beta}\partial_{\eta_{1}}+s_{\beta}\partial_{\eta_{2}})(c_{ \beta}\partial_{\rho_{2}}-s_{\beta}\partial_{\rho_{1}})=(c_{\beta}\partial_{ \psi_{1}}+s_{\beta}\partial_{\psi_{2}})(c_{\beta}\partial_{\zeta_{2}}-s_{\beta }\partial_{\zeta_{1}}). \tag{114}\]
Then, there are 10 independent renormalization conditions from the second derivatives. However, three of them are satisfied automatically when the renormalization conditions from the first derivatives are satisfied, because of the following identities,
\[(c_{\beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})^{2}= \frac{1}{2v}(c_{\beta}\partial_{\zeta_{1}}+s_{\beta}\partial_{\zeta_{2}}), \tag{115}\] \[(c_{\beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})(c_{ \beta}\partial_{\rho_{2}}-s_{\beta}\partial_{\rho_{1}})=\frac{1}{2v}(c_{\beta} \partial_{\zeta_{2}}-s_{\beta}\partial_{\zeta_{1}}),\] (116) \[(c_{\beta}\partial_{\rho_{1}}+s_{\beta}\partial_{\rho_{2}})(c_{ \beta}\partial_{\eta_{2}}-s_{\beta}\partial_{\eta_{1}})=\frac{1}{2v}(c_{\beta} \partial_{\psi_{2}}-s_{\beta}\partial_{\psi_{1}}). \tag{117}\]
Finally, we are left with only 7 independent renormalization conditions from the second-derivative conditions,
\[(c_{\beta}\partial_{\rho_{2}}-s_{\beta}\partial_{\rho_{1}})^{2}(V _{\rm CT}+V_{\rm CW})=0, \tag{118}\] \[(c_{\beta}\partial_{\zeta_{1}}+s_{\beta}\partial_{\zeta_{2}})^{2}(V _{\rm CT}+V_{\rm CW})=0,\] (119) \[(c_{\beta}\partial_{\zeta_{2}}-s_{\beta}\partial_{\zeta_{1}})^{2}(V _{\rm CT}+V_{\rm CW})=0,\] (120) \[(c_{\beta}\partial_{\psi_{2}}-s_{\beta}\partial_{\psi_{1}})^{2}(V _{\rm CT}+V_{\rm CW})=0,\] (121) \[(c_{\beta}\partial_{\zeta_{1}}+s_{\beta}\partial_{\zeta_{2}})(c_{ \beta}\partial_{\zeta_{2}}-s_{\beta}\partial_{\zeta_{1}})(V_{\rm CT}+V_{\rm CW})=0,\] (122) \[(c_{\beta}\partial_{\zeta_{1}}+s_{\beta}\partial_{\zeta_{2}})(c_{ \beta}\partial_{\psi_{2}}-s_{\beta}\partial_{\psi_{1}})(V_{\rm CT}+V_{\rm CW})=0,\] (123) \[(c_{\beta}\partial_{\zeta_{2}}-s_{\beta}\partial_{\zeta_{1}})(c_{ \beta}\partial_{\psi_{2}}-s_{\beta}\partial_{\psi_{1}})(V_{\rm CT}+V_{\rm CW})=0. \tag{124}\]
In total, we therefore have 10 independent renormalization conditions: the three first-derivative conditions and the seven second-derivative conditions listed above.
| ```
We extend the orbit-space framework of the 2HDM and study the one-loop effective potential before and after electroweak symmetry breaking. Within this framework, we perform a comprehensive analysis of the global symmetries of the one-loop thermal effective potential of the 2HDM and show when the tree-level global symmetries of the 2HDM are broken by loop contributions.
By introducing light-cone variables and generalizing the bilinear notation around the vacuum, a geometric perspective on the scalar masses and the on-shell renormalization conditions is provided.
``` |
2310.07861 | Non-isothermal nonlocal phase-field models with a double-obstacle
potential | Phase-field models are a popular choice in computational physics to describe
complex dynamics of substances with multiple phases and are widely used in
various applications. We present nonlocal non-isothermal phase-field models of
Cahn-Hilliard and Allen-Cahn types involving a nonsmooth double-well obstacle
potential. Mathematically, in a weak form, the model translates to a system of
variational inequalities coupled to a temperature evolution equation. We
demonstrate that under certain conditions and with a careful choice of the
nonlocal operator one can obtain a model that allows for sharp interfaces in
the solution that evolve in time, which is a desirable property in many
applications. This can be contrasted to the diffuse-interface local models that
can not resolve sharp interfaces. We present the well-posedness analysis of the
models, discuss an appropriate numerical discretization scheme, and supplement
our findings with several numerical experiments. | Olena Burkovska | 2023-10-11T20:04:49 | http://arxiv.org/abs/2310.07861v1 | # Non-isothermal nonlocal phase-field models with a double-obstacle potential
###### Abstract.
Phase-field models are a popular choice in computational physics to describe complex dynamics of substances with multiple phases and are widely used in various applications. We present nonlocal non-isothermal phase-field models of Cahn-Hilliard and Allen-Cahn types involving a nonsmooth double-well obstacle potential. Mathematically, in a weak form, the model translates to a system of variational inequalities coupled to a temperature evolution equation. We demonstrate that under certain conditions and with a careful choice of the nonlocal operator one can obtain a model that allows for sharp interfaces in the solution that evolve in time, which is a desirable property in many applications. This can be contrasted to the diffuse-interface local models that can not resolve sharp interfaces. We present the well-posedness analysis of the models, discuss an appropriate numerical discretization scheme, and supplement our findings with several numerical experiments.
Key words and phrases:phase-field models, nonlocal operators, variational inequality, well-posedness, sharp interfaces, regularity, finite elements 2010 Mathematics Subject Classification: 45K05; 35K55; 35B65; 49J40; 65M60; 65K15 This material is based upon work supported by the U.S. Department of Energy, Office of Advanced Scientific Computing Research, Applied Mathematics Program under the award numbers ERKJ345 and ERKJE45; and was performed at the Oak Ridge National Laboratory, which is managed by UT-Battelle, LLC under Contract No. De-AC05-00OR22725. The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan ([https://www.energy.gov/doe-public-access-plan](https://www.energy.gov/doe-public-access-plan)).
to ensure a good approximation to the sharp-interface solution, very fine meshes and/or more structurally complex models are needed.
In this paper, we propose a novel non-isothermal phase-field model based on a nonlocal phase-field system coupled to a temperature evolution equation. The non-locality in the phase-field model allows for sharper interfaces in the solution, independently of the mesh resolution, and can help to alleviate the limitations of resolving thin diffuse interfaces in the local setting.
More specifically, given a bounded domain \(\varOmega\subset\mathbb{R}^{n}\), \(n\leq 3\), we consider the model
\[\begin{cases}\partial_{t}\theta-D\Delta\theta-L\partial_{t}u=0,\\ \mu\mathcal{G}\left(\partial_{t}u\right)+Bu+F^{\prime}_{u}(u,\theta)=0,\end{cases} \quad\text{in}\quad(0,T)\times\varOmega,\quad T>0, \tag{1.1}\]
where \(\theta(t,x)\) is the temperature and \(u(t,x)\) is the order parameter, which typically assumes values in \([0,1]\) with \(u=0\) and \(u=1\) corresponding to pure phases, e.g., liquid and solid, and \(F(u,\theta)\) is a double-well potential that encourages the solution to admit pure phases. Here, we assume it admits the following form
\[F(u,\theta)=F_{0}(u)+m(\theta)g(u), \tag{1.2}\]
where \(F_{0}(u)\) is a double-well function that has minima at \(u=0\) and \(u=1\), and the coupling term \(m(\theta)\) forces the solution to attain one phase over the other depending on the value of the temperature \(\theta\) relative to some equilibrium temperature \(\theta_{e}\). The parameters \(D\), \(L\) and \(\mu\) are the diffusivity, latent heat and relaxation time coefficients, respectively. The operator \(B\) is a nonlocal diffusion operator
\[Bu=\int_{\mathbb{R}^{n}}(u(x)-u(y))\gamma(x-y)\,\mathrm{d}y,\]
where \(\gamma(x-y)\) is a compactly supported integrable kernel that defines the nature and extent of nonlocal interactions. Additionally, \(\mathcal{G}\) is the Green's function of the operator \(I-\beta\Delta\), where \(\beta\geq 0\) and \(I\) is an identity operator. When \(\beta=0\), we obtain a nonlocal Allen-Cahn equation, while \(\beta>0\) corresponds to a non-mass conserving Cahn-Hilliard type equation. We note that under appropriate conditions on the kernel we recover a local operator in the limit of vanishing nonlocal interactions, see, e.g., [23],
\[Bu\to-\varepsilon^{2}\Delta u,\quad\text{for}\quad\varepsilon^{2}=\frac{1}{2n }\int_{\mathbb{R}^{n}}\gamma(\zeta)|\zeta|^{2}\,\mathrm{d}\zeta,\]
where \(\varepsilon\) is a parameter that controls the width of the interface in the local model. Then, the local analogue of the model (1.1) is obtained by setting \(Bu=-\varepsilon^{2}\Delta u\) and \(\mathcal{G}=I\) (i.e., \(\beta=0\)), which results in a prototypical isotropic model for solidification of pure materials, see, e.g., [35, 11, 40, 41, 39] and references therein. For an overview of nonlocal and local phase-field models we refer to [3, 28, 21, 4, 39].
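For illustration (an added example, not taken from the original text), consider \(n=1\) and a constant kernel \(\gamma(\zeta)=c\) for \(|\zeta|<\delta\) and zero otherwise; then

\[\varepsilon^{2}=\frac{1}{2}\int_{-\delta}^{\delta}c\,\zeta^{2}\,\mathrm{d}\zeta=\frac{c\,\delta^{3}}{3},\]

so that, at fixed kernel mass \(\int_{\mathbb{R}}\gamma\,\mathrm{d}\zeta=2c\delta\), the induced local interface parameter scales as \(\varepsilon^{2}\propto\delta^{2}\) and vanishes as the interaction radius \(\delta\to 0\).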
Our main contribution is to demonstrate that the proposed model (1.1) can deliver solutions with time-evolving sharp interfaces and well-defined nonlocal interfacial energy, which is, to the best of our knowledge, the first result in this direction. The following is essential for this: First, the fact that we employ nonlocal operators with integrable kernels and an obstacle potential is necessary to allow sharp transitions between pure phases. Second, the inclusion of the Green's operator is responsible for allowing discontinuous interfaces to evolve in time. For example, the seemingly more natural choice \(\mathcal{G}=I\), which corresponds to a nonlocal Allen-Cahn model, in general does not permit discontinuities to evolve in time due to the fact that \(Bu(t)\in L^{2}(\varOmega)\) and the associated higher temporal regularity \(\partial_{t}u\in L^{2}(\varOmega)\). As an illustration, consider the example of a moving discontinuity with a constant velocity \(\nu>0\), given by the Heaviside function \(H\),
\[u(x,t)=H(x-\nu t),\quad x\in(0,1)\subset\mathbb{R}^{1},\quad t>0.\]
It is clear that \(\partial_{t}u=-\nu\delta_{0}(x-\nu t)\) is a distribution and it is not in \(L^{2}(\varOmega)\), whereas \(\mathcal{G}(\partial_{t}u)\) is an element of \(L^{2}(\varOmega)\). Therefore, the proposed model (1.1) with \(\mathcal{G}(\partial_{t}u)\) relaxes temporal regularity and allows discontinuities to evolve in time. For this purpose, we provide analysis of the time-discrete and time-continuous formulations of the model and derive conditions under which sharp interfaces can be attained.
This work contributes to the growing literature on nonlocal phase-field models that has gained interest in recent years; see, e.g., [14, 24, 23, 31, 30, 25] and references therein. Many of those works consider an isothermal setting, i.e., assuming a constant temperature in the system. The non-isothermal case has often been studied for the nonlocal model of Caginalp type [15] that resembles the model structure (1.1) with \(\mathcal{G}=I\) and a linear coupling term (\(m(\theta)=\theta\), \(g(u)=u\)), see [1, 33, 2, 27, 7, 17]. In contrast, in this work we adopt a nonlinear coupling \(m(\theta)\) in (1.2) similar to the one used in the local Kobayashi model [35]. Furthermore, we employ an obstacle-type potential \(F_{0}(u)\) involving an indicator function, which enforces the solution to always conform to the bounds \(0\leq u\leq 1\); this can be contrasted with the most commonly used regular potential, which allows non-admissible solutions (see Section 2). Such a structure of the potential introduces additional non-smoothness into the system, and more care is required for the analysis and numerical treatment of the model. This work builds upon our previous work on the Cahn-Hilliard model with the obstacle potential [14] and introduces a novel non-isothermal model with a non-mass-conserving phase-field variable. While we adopt similar settings for the potential and nonlocal operator, the analysis of the model is more complicated due to the presence of the temperature and the nonlinear coupling term.
We comment on related works that adopt obstacle-type potentials. Local phase-field models with non-smooth potentials have been investigated in various works, see, e.g., [10, 20] among others. For the non-isothermal nonlocal case, a Caginalp-type model with the obstacle potential has been analyzed in [17] and extended to the optimal control problem in [19]. The nonlocal operators considered in those works are of a fractional type defined via spectral theory, which is different from the bounded nonlocal operators adopted in this work. The study of different nonlocal non-isothermal models with a non-smooth potential has also been conducted in [37, 36].
The structure of the potential plays a pivotal role in guaranteeing sharp interfaces in the solution. Specifically, adopting an obstacle potential in the nonlocal phase-field system allows one to obtain sharp interfaces with only pure phases, i.e., \(u\in\{0,1\}\), as it has been demonstrated in [14] and will be established here for the non-isothermal case (1.1). As pointed out before, in general those properties do not hold for nonlocal Allen-Cahn systems except in the steady-state case. For example, in [25] it has been proved that a steady-state solution of the nonlocal Allen-Cahn equation with a regular potential can admit discontinuities, but due to the smooth nature of the potential the size of the jump is strictly smaller than one, confirming that a small transition layer occurs between pure phases. Overall, the question of the sharpness of the interface in the solution of a nonlocal Allen-Cahn equation has been investigated in various works, see, e.g., [25, 26, 24, 38, 6, 29, 5, 16].
In addition to the analysis of model (1.1), we also present spatial and temporal discretization methods. For the latter, we propose an implicit-explicit time-stepping scheme that provides an efficient evaluation of the model and, in the Allen-Cahn case (\(\beta=0\)), allows one to completely bypass the solution of a nonlinear phase-field system.
While the present work discusses the mathematical formulation of the model, in our forthcoming work [12] we will address several questions related to the performance of the model, asymptotic analysis and comparative study of the model in the context of solidification of pure materials.
The structure of the paper is as follows. In Section 2 we introduce the nonlocal model and briefly discuss related local models. Next, in Section 3 we formulate a model in a functional analytic framework. In the subsequent Section 4 we introduce a time-stepping scheme and derive the well-posedness result of the corresponding semi-discrete problem together with the regularity and the sharp-interface properties of the solution. Analyzing the semi-discrete problem for vanishing temporal step size we derive the existence result in Section 5 for the continuous setting. Finally, in Section 6 we discuss the fully discrete scheme and illustrate our theoretical results with several one- and two-dimensional examples. We present some concluding remarks in Section 7.
## 2. Model setting
We introduce a nonlocal operator \(B\), defined as follows
\[Bu(x)=\int_{\varOmega\cup\varOmega_{I}}(u(x)-u(y))\gamma(x-y)\,\mathrm{d}y=c_{ \gamma}u(x)-(\gamma*u)(x),\quad x\in\varOmega, \tag{2.1}\]
where the nonlocal kernel \(\gamma:\mathbb{R}^{n}\to\mathbb{R}^{+}\) is radial, integrable, and compactly supported
\[\gamma\in L^{1}(\mathbb{R}^{n}),\quad\gamma(x)=\hat{\gamma}(|x|)\;\;\text{ and}\quad(0,\sigma)\subset\text{supp}(\hat{\gamma})\subset(0,\delta],\;\delta>0,\; \sigma>0, \tag{2.2}\]
where the nonlocal interaction radius \(\delta>0\) defines an extent of nonlocal interactions and \(\varOmega_{I}\) is a nonlocal interaction domain that incorporates interactions outside of \(\varOmega\):
\[\varOmega_{I}:=\{y\in\mathbb{R}^{n}\setminus\varOmega\colon\gamma(x,y)\neq 0,\quad x\in\varOmega\}.\]
Here, by \(\gamma*u\) we denote the convolution of \(\gamma\) and \(u\) on \(\varOmega\cup\varOmega_{I}\), and\({}^{1}\)
Footnote 1: For convenience of notation we suppose that \(u\) is extended by zero outside of \(\varOmega\cup\varOmega_{I}\).
\[c_{\gamma}(x):=\int_{\varOmega\cup\varOmega_{I}}\gamma(x-y)\,\mathrm{d}y, \quad C_{\gamma}:=\int_{\mathbb{R}^{n}}\gamma(x-y)\,\mathrm{d}y, \tag{2.3}\]
where \(c_{\gamma}=c_{\gamma}(x)\) is constant for all \(x\in\varOmega\) and \(0\leq c_{\gamma}(x)\leq C_{\gamma}<\infty\). We consider
\[\begin{cases}\partial_{t}\theta=\;D\Delta\theta+L\partial_{t}u,\\ \mu\partial_{t}u+Aw=0,\\ w=Bu+\partial_{u}F(u,\theta),\end{cases}\quad\text{in}\quad(0,T)\times \varOmega,\quad T>0, \tag{2.4}\]
where, as before, \(D>0\), \(L\), \(\mu>0\) are the diffusivity, latent heat and relaxation time parameters, respectively. The double-well potential \(F(u,\theta)\) is of the form (1.2), and we use \(\partial_{u}F(u,\theta)\) to denote the subdifferential of the non-smooth potential \(F\), which we introduce shortly. Here, \(A:=I-\beta\Delta\), where \(\beta\geq 0\) plays the role of a "de-regularization" parameter for the time derivative \(\partial u/\partial t\). We complement the system (2.4) with the following initial and boundary conditions:
\[u(0)=u^{0},\quad\theta(0)=\theta^{0},\] \[\partial_{n}w=0\quad(\beta>0)\quad\text{and}\quad\partial_{n} \theta=0\quad\text{on}\quad\partial\varOmega,\quad\mathcal{N}u=0\quad\text{ on}\quad\varOmega_{I},\]
where \(n\) denotes an outward normal to \(\partial\varOmega\) and \(\mathcal{N}u\) is a nonlocal flux condition on \(\varOmega_{I}\), which is analogous to the local Neumann type boundary condition:
\[\mathcal{N}u(x):=\int_{\varOmega\cup\varOmega_{I}}(u(x)-u(y))\gamma(x-y)\, \mathrm{d}y=0,\quad\forall x\in\varOmega_{I}.\]
For \(\beta=0\), (2.4) reduces to the non-isothermal nonlocal Allen-Cahn equation:
\[\begin{cases}\partial_{t}\theta=\ D\Delta\theta+L\partial_{t}u,\\ \mu\partial_{t}u+Bu+\partial_{u}F(u,\theta)=0,\end{cases} \tag{2.5}\]
which is a nonlocal analogue of an isotropic version of the local model for the solidification of pure materials, see, e.g., [35, 40, 11] and references therein,
\[\begin{cases}\partial_{t}\theta=D\Delta\theta+L\partial_{t}u\\ \mu\partial_{t}u-\varepsilon^{2}\Delta u+F^{\prime}_{u}(u,\theta)=0,\end{cases} \tag{2.6}\]
where \(\varepsilon\) is an interface parameter.
### Potential
The most common choice of \(F(u,\theta)\) is the smooth double-well potential
\[F(u,\theta)=\frac{1}{4}u^{2}(1-u)^{2}+m(\theta)\left(\frac{1}{3}u^{3}-\frac{1 }{2}u^{2}\right), \tag{2.7}\]
which can be considered as a smooth approximation of the more physically relevant logarithmic potential
\[F(u,\theta)=\frac{c_{F}}{2}u(1-u)+\frac{\sigma}{2}\left(u\ln(u)+(1-u)\ln(1-u) \right)-c_{F}m(\theta)u,\quad u\in(0,1), \tag{2.8}\]
for \(0<\sigma<c_{F}\), or the obstacle potential
\[F(u,\theta):=\frac{c_{F}}{2}u(1-u)+\mathbb{I}_{[0,1]}(u)-c_{F}m(\theta)u, \quad c_{F}>0. \tag{2.9}\]
Here, \(\mathbb{I}_{[0,1]}(u)\) is a convex indicator function of an admissible range \([0,1]\) and \(c_{F}>0\) is an appropriate scaling constant; see Figure 1 for the illustrations of those potentials.
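For reference, the three potentials can be evaluated directly from (2.7)-(2.9); the short Python sketch below does this at a fixed value of the coupling term (the values \(c_{F}=1\), \(\sigma=0.4\) and \(m=0.2\) are illustrative choices, not taken from the text).

```python
import numpy as np

# Illustrative constants (not from the paper): scaling c_F, logarithmic parameter sigma,
# and a fixed value m of the temperature coupling term.
c_F, sigma, m = 1.0, 0.4, 0.2

def F_regular(u):
    # Smooth double-well potential (2.7).
    return 0.25 * u**2 * (1.0 - u)**2 + m * (u**3 / 3.0 - u**2 / 2.0)

def F_logarithmic(u):
    # Logarithmic potential (2.8), defined for u in (0, 1).
    entropy = u * np.log(u) + (1.0 - u) * np.log(1.0 - u)
    return 0.5 * c_F * u * (1.0 - u) + 0.5 * sigma * entropy - c_F * m * u

def F_obstacle(u):
    # Obstacle potential (2.9): quadratic part plus the indicator of [0, 1].
    quadratic = 0.5 * c_F * u * (1.0 - u) - c_F * m * u
    return np.where((u >= 0.0) & (u <= 1.0), quadratic, np.inf)

u = np.linspace(0.001, 0.999, 999)
print(F_regular(u).min(), F_logarithmic(u).min(), F_obstacle(u).min())
```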
In contrast to the logarithmic potential, the obstacle potential allows the solution to attain pure phases \(u=0\) and \(u=1\). However, this potential is not differentiable in a classical sense and one must resort to the notion of subdifferentials. More specifically, we define the generalized differential of \(F(u,\theta)\) with respect to \(u\) by
\[\partial_{u}F(u,\theta)=-c_{F}u+\frac{c_{F}}{2}-c_{F}m(\theta)+\partial\mathbb{ I}_{[0,1]}(u), \tag{2.10}\]
where \(\partial\mathbb{I}_{[0,1]}(u)\) is a subdifferential of the indicator function
\[\partial\mathbb{I}_{[0,1]}(u)=\begin{cases}(-\infty,0]&\text{if }u=0,\\ 0&\text{for }u\in(0,1),\\ [0,+\infty)&\text{if }u=1.\end{cases}\]
Furthermore, it is necessary that \(F\) in (1.2) admits two local minima at \(u=0\) and \(u=1\) irrespective of the value of the temperature \(\theta\). To ensure this in the case of a smooth potential, the following first- and second-order optimality conditions should hold:
\[\frac{\partial F}{\partial u}|_{u=0,1}=F^{\prime}_{0}(u)+m(\theta)g^{\prime}(u)|_{u=0,1}=0,\] \[\frac{\partial^{2}F}{\partial u^{2}}|_{u=0,1}=F^{\prime\prime}_{0}(u)+m(\theta)g^{\prime\prime}(u)|_{u=0,1}>0.\]
In the case of the regular potential (2.7), the above holds true if \(m(\theta)\) is chosen such that \(|m(\theta)|<1/2\). For a non-smooth potential with bound constraints such as (2.9), the necessary conditions are
\[0\in\partial_{u}F(u,\theta)|_{u=0,1},\]
which is equivalent to \(c_{F}/2-c_{F}m(\theta)\geq 0\) (\(u=0\)) and \(-c_{F}/2-c_{F}m(\theta)\leq 0\) (\(u=1\)). In case of a strict inequality it is also a sufficient condition, which reduces to the same requirement \(|m(\theta)|<1/2\).
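As a short verification of this requirement for the regular potential (2.7) (a check added here for completeness): one computes \(F^{\prime}_{u}(u,\theta)=u(1-u)\big(\tfrac{1}{2}(1-2u)-m(\theta)\big)\), so \(u=0\) and \(u=1\) are critical points for every \(\theta\), and

\[\partial_{u}^{2}F(0,\theta)=\frac{1}{2}-m(\theta),\qquad\partial_{u}^{2}F(1,\theta)=\frac{1}{2}+m(\theta),\]

hence both pure phases are strict local minima precisely when \(|m(\theta)|<1/2\).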
The coupling term \(m(\theta)\) is usually chosen as a function of the temperature that is almost linear around \(\theta=\theta_{e}\). In particular, in the Caginalp model [15] it is set to be linear, whereas in the Kobayashi model [35] it is chosen as a nonlinear function of \(\theta\):
\[m(\theta)=\left(\frac{\alpha}{\pi}\right)\tan^{-1}\left(\rho(\theta_{e}- \theta)\right), \tag{2.11}\]
with an assumption that \(0<\alpha<1\), which ensures that \(|m(\theta)|<1/2\). In the present exposition we assume that the function \(m\) in (2.9) is uniformly bounded and uniformly Lipschitz continuous in \(\mathbb{R}\), i.e.,
\[|m(v)|\leq 1/2\qquad\text{and}\qquad|m^{\prime}(v)|\leq C_{m},\quad\forall v \in\mathbb{R}. \tag{2.12}\]
For example, using \(m\) from [35] as defined in (2.11) fulfills the above requirements.
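A small numerical sanity check of (2.11) against the bounds (2.12) is sketched below; the parameter values \(\alpha\), \(\rho\), \(\theta_{e}\) are illustrative and not taken from the text. For this choice one expects \(|m|\leq\alpha/2<1/2\) and the Lipschitz constant \(C_{m}=\alpha\rho/\pi\).

```python
import numpy as np

# Illustrative parameters (not from the paper): alpha in (0,1), slope rho, equilibrium temperature theta_e.
alpha, rho, theta_e = 0.9, 10.0, 1.0

def m(theta):
    # Kobayashi-type coupling term (2.11).
    return (alpha / np.pi) * np.arctan(rho * (theta_e - theta))

theta = np.linspace(-5.0, 5.0, 200001)
values = m(theta)
slopes = np.gradient(values, theta)

# Check |m| <= alpha/2 < 1/2 and |m'| <= C_m = alpha * rho / pi, cf. (2.12).
print(np.abs(values).max(), alpha / 2.0)
print(np.abs(slopes).max(), alpha * rho / np.pi)
```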
Now, invoking the definition of the nonlocal operator (2.1) and \(\partial_{u}F\) (2.10), the system (2.4) can be stated as follows
\[\begin{cases}\partial_{t}\theta=\ D\Delta\theta+L\partial_{t}u,\\ \mu\partial_{t}u+Aw=0\\ w=\xi u-\gamma*u+\frac{c_{F}}{2}-c_{F}m(\theta)+\lambda,\quad\lambda\in \partial\mathbb{I}_{[0,1]}(u),\end{cases} \tag{2.13}\]
where \(\xi=\int_{\varOmega\cup\varOmega_{I}}\gamma(x-y)\,\mathrm{d}y-c_{F}=c_{\gamma}-c_{F}\) is the nonlocal interface parameter that plays a similar role to the interface parameter \(\varepsilon^{2}\) in the local setting (2.6), meaning that larger values of \(\xi\) lead to a more diffuse interface, cf. [14]. However, in contrast to the local setting, where a thin interface is obtained only in the limiting case \(\varepsilon\to 0\), here we show that a sharp interface with only pure phases can already appear for \(\xi=0\).
## 3. Preliminaries
We denote by \(L^{p}(\Omega)\) and by \(W^{k,p}(\Omega)\), \(p\in[1,\infty]\), \(k\in\mathbb{N}\), the usual Lebesgue and Sobolev spaces endowed with the norms \(\left\|\cdot\right\|_{L^{p}(\Omega)}\) and \(\left\|\cdot\right\|_{W^{k,p}(\Omega)}\). We also denote by \((\cdot,\cdot)\) and \(\left\|\cdot\right\|\) the inner product and norm on \(L^{2}(\Omega)\) and set \(H^{1}(\Omega)=W^{1,2}(\Omega)\) with \(\left\|v\right\|_{H^{1}(\Omega)}^{2}=\left\|\nabla v\right\|^{2}+\left\|v\right\|^{2}\), and recall the Poincaré inequality
\[\left\|v-v^{\Omega}\right\|\leq C_{P}\|\nabla v\|,\quad\forall v\in H^{1}( \Omega), \tag{3.1}\]
where \(v^{\Omega}:=\frac{1}{|\Omega|}\int_{\Omega}v(x)\,\mathrm{d}x\). For a reflexive Banach space \(X\) we denote its dual by \(X^{\prime}\) and by \(\left\langle\cdot,\cdot\right\rangle_{X}\) the corresponding duality pairing.

Figure 1. Illustration for the regular (blue), logarithmic (black) and obstacle potentials (red).

For \(T>0\) we introduce space-time cylinders \(Q:=(0,T)\times\varOmega\) and \(\hat{Q}:=(0,T)\times(\varOmega\cup\varOmega_{I})\). For notational convenience, we set \(V_{\theta}:=H^{1}(\varOmega)\) with \(\left\|v\right\|_{V_{\theta}}:=\left\|v\right\|_{H^{1}(\varOmega)}\), and let
\[V_{A}:=\{v\in L^{2}(\varOmega)\colon\left\|v\right\|_{V_{A}}<\infty\},\quad \text{where}\quad\left\|v\right\|_{V_{A}}^{2}:=\beta\|\nabla v\|^{2}+\left\|v \right\|^{2},\quad\beta\geq 0,\]
with \((u,v)_{V_{A}}:=\beta(\nabla u,\nabla v)+(u,v)\). It is clear, that for \(\beta=0\), \(V_{A}=L^{2}(\varOmega)\), while for \(\beta>0\), \(V_{A}\cong H^{1}(\varOmega)\) and the following norm equivalence holds
\[\min(1,\beta)\|v\|_{H^{1}(\varOmega)}^{2}\leq\|v\|_{V_{A}}^{2}\leq\max(1,\beta )\|v\|_{H^{1}(\varOmega)}^{2}. \tag{3.2}\]
We introduce a nonlocal space \(V_{B}\) according to [14]:
\[V_{B}:=\left\{v\in L^{2}(\varOmega\cup\varOmega_{I})\colon\mathcal{N}v=0 \text{ on }\varOmega_{I}\right\}\cong L^{2}(\varOmega),\]
and recall the following nonlocal Green's first identity [22]:
\[(Bu,v)=\frac{1}{2}\int_{\varOmega\cup\varOmega_{I}}\int_{\varOmega\cup \varOmega_{I}}(u(x)-u(y))(v(x)-v(y))\gamma(x-y)\,\mathrm{d}y\,\mathrm{d}x+(v, \mathcal{N}u)_{L^{2}(\varOmega_{I})}.\]
We note that the condition \(\mathcal{N}v=0\) on \(\varOmega_{I}\) can also be understood as an "exterior" nonlocal problem posed on \(\varOmega_{I}\) with volume constraints \(v=g\) on \(\varOmega\), for which it holds [14]
\[\left\|v\right\|_{L^{2}(\varOmega_{I})}\leq C_{I}\|g\|, \tag{3.3}\]
where the constant \(C_{I}\) depends only on \(\gamma\), and \(\varOmega\), \(\varOmega_{I}\). Next, we introduce a Green's operator \(\mathcal{G}:V_{A}^{\prime}\to V_{A}\) of the operator \(A=(I-\beta\Delta)\), \(\beta>0\), with the Neumann boundary conditions. Then, for a given \(u\in V_{A}^{\prime}\) it solves
\[(A\mathcal{G}u,v)=\left\langle u,v\right\rangle_{V_{A}},\quad\forall v\in V_ {A}, \tag{3.4}\]
i.e., \(\mathcal{G}=A^{-1}\), and the following holds true
\[\left\|u\right\|_{V_{A}^{\prime}}^{2}=\left\|\mathcal{G}u\right\|_{V_{A}}^{2} =\left\langle u,\mathcal{G}u\right\rangle_{V_{A}}. \tag{3.5}\]
For kernels satisfying (2.2) we recall the following properties [14, Proposition 3.3].
**Proposition 3.1**.: _For \(\gamma\) satisfying (2.2) and for \(\eta>0\) there exists a function \(\gamma_{\eta}:\mathbb{R}^{n}\to\mathbb{R}^{+}\) satisfying (2.2) and \(\gamma_{\eta}\in W^{1,1}(\mathbb{R}^{n})\) and a constant \(C_{\eta}>0\), such that_
\[\left\|\nabla\gamma_{\eta}\right\|_{L^{1}(\mathbb{R}^{n})}\leq C_{\eta},\quad \text{and}\quad\left\|\gamma-\gamma_{\eta}\right\|_{L^{1}(\mathbb{R}^{n})} \leq\eta,\quad C_{\eta}>0. \tag{3.6}\]
_Furthermore, the sequence \(\gamma_{\eta}\ast u\to\gamma\ast u\) converges uniformly in \(L^{\infty}(\varOmega\cup\varOmega_{I})\) for any \(u\in L^{\infty}(\varOmega\cup\varOmega_{I})\), and the limiting function is continuous_
\[\gamma\ast u\in C(\varOmega\cup\varOmega_{I}),\quad\forall u\in L^{\infty}( \varOmega\cup\varOmega_{I}). \tag{3.7}\]
### Variational formulation
We introduce a set of admissible solutions
\[\mathcal{K}:=\{v\in V_{B}\colon 0\leq v\leq 1\,\text{ a. e. in }\,\varOmega\}. \tag{3.8}\]
Then, a weak form of (2.13) is to find \(\left(\theta(t),u(t),w(t)\right)\in V_{\theta}\times\mathcal{K}\times V_{A}\) such that
\[\left\langle\partial_{t}\theta(t),\phi\right\rangle_{V_{\theta}}+D(\nabla\theta(t),\nabla\phi)-L\langle\partial_{t}u(t),\phi\rangle_{V_{\theta}}=0,\quad\forall\phi\in V_{\theta}, \tag{3.9}\] \[\mu\langle\partial_{t}u(t),\psi\rangle_{V_{A}}+\left\langle Aw(t),\psi\right\rangle_{V_{A}}=0,\quad\forall\psi\in V_{A},\] \[\left(Bu(t)-c_{F}u(t)+c_{F}/2-c_{F}m(\theta(t))-w(t),\zeta-u(t)\right)\geq 0,\quad\forall\zeta\in\mathcal{K},\]
subject to \(u(0)=u^{0}\in\mathcal{K}\) and \(\theta(0)=\theta^{0}\in V_{\theta}\). We define a positive cone
\[M=\{v\in L^{2}(\varOmega)\colon v\geq 0\text{ a. e. on }\varOmega\},\]
and by introducing a Lagrange multiplier \(\lambda(t)=\lambda_{+}(t)-\lambda_{-}(t)\) with \(\lambda_{\pm}(t)\in M\), we can restate the above variational inequality as a system of complementarity conditions: Find \((\theta(t),u(t),w(t),\lambda_{\pm}(t))\in V_{\theta}\times V_{B}\times V_{A}\times M\) such that
\[\begin{split}(Bu(t)-c_{F}u(t)+c_{F}/2-c_{F}m(\theta(t))-w(t)+\lambda(t),\zeta)=0,\quad\forall\zeta\in V_{B},\\ (\eta-\lambda_{+}(t),1-u(t))\geq 0,\quad(\eta-\lambda_{-}(t),u(t))\geq 0,\quad\forall\eta\in M.\end{split} \tag{3.10}\]
Furthermore, by taking \(\eta=\pm\lambda_{\pm}(t)\) and \(\eta=0\) in the above inequalities it is easy to show that the following property holds true for all \(\eta\in M\):
\[(\eta,1-u(t))\geq 0,\ \ (\lambda_{+}(t),1-u(t))=0\ \ \text{and}\ \ (\eta,u(t))\geq 0,\ \ (\lambda_{-}(t),u(t))=0. \tag{3.11}\]
An equivalent time-integrated version of (3.9),(3.10) is to find \(\theta\in L^{2}(0,T;V_{\theta})\), \(u\in L^{\infty}(0,T;V_{B})\), \(w\in L^{2}(0,T;V_{A})\), \(\lambda_{\pm}\in L^{2}_{+}(Q)=\{v\in L^{2}(Q)\colon v\geq 0\text{ a.e. }Q\}\) such that \(\partial_{t}u\in L^{2}(0,T;V_{A}^{\prime})\), \(\partial_{t}\theta\in L^{2}(0,T;V_{\theta}^{\prime})\), \(u(0)=u^{0}\), \(\theta(0)=\theta^{0}\) and
\[\int_{0}^{T}\big{(}\langle\partial_{t}\theta,\phi\rangle_{V_{\theta}}+D(\nabla\theta,\nabla\phi)-L\langle\partial_{t}u,\phi\rangle_{V_{\theta}}\big{)}\,\mathrm{d}t=0,\quad\forall\phi\in L^{2}(0,T;V_{\theta}),\] \[\int_{0}^{T}\big{(}\mu\langle\partial_{t}u,\psi\rangle_{V_{A}}+\langle Aw,\psi\rangle_{V_{A}}\big{)}\,\mathrm{d}t=0,\quad\forall\psi\in L^{2}(0,T;V_{A}), \tag{3.12}\] \[(Bu-c_{F}u+c_{F}/2-c_{F}m(\theta)-w+\lambda,\zeta)_{L^{2}(Q)}=0,\quad\forall\zeta\in L^{2}(0,T;V_{B}),\] \[(\eta-\lambda_{+},1-u)_{L^{2}(Q)}\geq 0,\quad(\eta-\lambda_{-},u)_{L^{2}(Q)}\geq 0,\quad\forall\eta\in L^{2}_{+}(Q).\]
## 4. Well-posedness of a time-discrete formulation
Next, we discretize (3.9) in time and analyze the corresponding problem using an optimization approach.
For \(T>0\) and \(K\in\mathbb{N}\), we define \(\tau=T/K\), \(t^{k}=k\tau\), \(k=1,\ldots,K\). Then, given \(\theta^{0}\in V_{\theta}\), \(u^{0}\in\mathcal{K}\), we seek \((\theta^{k},u^{k},w^{k})\in V_{\theta}\times\mathcal{K}\times V_{A}\) such that
\[\big{(}\theta^{k}-\theta^{k-1},\phi\big{)}+D\tau\big{(}\nabla \theta^{k},\nabla\phi\big{)}-L\big{(}u^{k}-u^{k-1},\phi\big{)}=0,\quad\forall \phi\in V_{\theta} \tag{4.1a}\] \[\qquad\mu\big{(}u^{k}-u^{k-1},\psi\big{)}+\tau\big{(}w^{k},\psi \big{)}+\beta\tau\big{(}\nabla w^{k},\nabla\psi\big{)}=0,\quad\forall\psi \in V_{A},\] (4.1b) \[\big{(}Bu^{k}-c_{F}u^{k}+c_{F}/2-c_{F}m(\theta^{k-1})-w^{k},\zeta -u^{k}\big{)}\geq 0,\quad\forall\zeta\in\mathcal{K}. \tag{4.1c}\]
Due to an explicit discretization of the coupling term \(m(\theta^{k-1})\), (4.1) forms a decoupled system of equations. Furthermore, introducing \(\lambda^{k}:=\lambda^{k}_{+}-\lambda^{k}_{-}\), \(\lambda^{k}_{\pm}\in M\) we can reformulate (4.1c) as a system of complementarity conditions:
\[\big{(}Bu^{k}-c_{F}u^{k}+c_{F}/2-c_{F}m(\theta^{k-1})-w^{k}+ \lambda^{k},\zeta\big{)}=0,\quad\forall\zeta\in V_{B}, \tag{4.2a}\] \[\big{(}\eta-\lambda^{k}_{+},1-u^{k}\big{)}\geq 0,\quad\big{(} \eta-\lambda^{k}_{-},u^{k}\big{)}\geq 0,\quad\forall\eta\in M. \tag{4.2b}\]
Then, this together with (4.1b) at each time step is the system of the Karush-Kuhn-Tucker (KKT) optimality conditions for the following minimization problem:
\[\min_{u\in\mathcal{K}}J_{k}(u):=\frac{\xi}{2}\|u\|^{2}-\frac{1}{2}(\gamma*u,u) +\frac{\mu}{2\tau}\big{\|}u-u^{k-1}\big{\|}_{V_{A}^{\prime}}^{2}+\frac{1}{2} \big{(}c_{F}-2c_{F}m(\theta^{k-1}),u\big{)}, \tag{4.3}\]
with \(\xi:=c_{\gamma}-c_{F}\). Thus, for \(k=1,\ldots,K\), under appropriate conditions on the time step \(\tau\) (as stated in Theorem 4.1) the problem (4.2) can be equivalently expressed as
\[\big{(}\theta^{k}-\theta^{k-1},\phi\big{)}+D\tau\big{(}\nabla\theta ^{k},\nabla\phi\big{)}-L\big{(}u^{k}-u^{k-1},\phi\big{)}=0,\quad\forall\phi \in V_{\theta} \tag{4.4}\] \[u^{k}=\operatorname*{arg\,min}_{u\in\mathcal{K}}J_{k}(u).\]
**Theorem 4.1** (Existence and uniqueness of a time-discrete solution).: _Let \(\gamma\) satisfy (2.2), then for \(\xi\geq 0\) there exists a solution of (4.4). Furthermore, for \(\beta>0\), \(\xi>0\) and \(\tau<2\xi\mu/((C_{\eta}^{2}+\beta\hat{C}_{\eta}^{2})(1+C_{I}^{2}))\) or for \(\beta=0\), \(\xi\geq 0\) and \(\tau<\mu/(C_{\gamma}(1+C_{I}^{2})-\xi)\) the solution is unique. Here, the constants \(C_{\gamma}\), \(C_{\eta}\), \(\hat{C}_{\eta}\) and \(C_{I}\) are as in (2.3), (3.6) and (3.3), respectively, and \(\eta=\xi/(2+2C_{I}^{2})\)._
Proof.: Since the system (4.4) is a decoupled system of equations, we can analyze existence and uniqueness of each variable separately. For the temperature \(\theta\), existence and uniqueness follow from a standard Lax-Milgram argument. Invoking the uniform boundedness of \(m(\theta^{k-1})\) in (2.12), the proof of existence of the minimizer in (4.4) follows along the same lines as in [14, Theorem 3.1], and we only prove uniqueness of the corresponding minimizer. First, let \(\beta>0\); setting \(w^{k}=-(\mu/\tau)\mathcal{G}(u^{k}-u^{k-1})\), where \(\mathcal{G}\) is defined as in (3.4), we can express (4.1b)-(4.1c) as follows:
\[\left(\xi u^{k}-\gamma*u^{k}+\frac{c_{F}}{2}-c_{F}m(\theta^{k-1})+\frac{\mu}{\tau}\mathcal{G}(u^{k}-u^{k-1}),\zeta-u^{k}\right)\geq 0. \tag{4.5}\]
Arguing by contradiction, assume that \(u_{1}^{k}\) and \(u_{2}^{k}\) are two solutions and set \(U^{k}=u_{1}^{k}-u_{2}^{k}\). Now, taking \(\zeta=u_{2}^{k}\) for \(u_{1}^{k}\) and \(\zeta=u_{1}^{k}\) for \(u_{2}^{k}\) in the above variational inequality, adding both together and using (3.5) leads to
\[\xi\big{\|}U^{k}\big{\|}^{2}-(\gamma*U^{k},U^{k})+\frac{\mu}{\tau}(\mathcal{G }(U^{k}),U^{k})=\xi\big{\|}U^{k}\big{\|}^{2}-(\gamma*U^{k},U^{k})+\frac{\mu}{ \tau}\big{\|}U^{k}\big{\|}_{V^{\prime}_{A}}^{2}\leq 0.\]
Using Young's and Cauchy inequalities we can bound the convolution by
\[\big{|}(\gamma*U^{k},U^{k})\big{|}\leq\Big{|}\big{\langle}\gamma*U ^{k},U^{k}\big{\rangle}_{V_{A}}\Big{|}\\ \leq\big{\|}\gamma_{\eta}*U^{k}\big{\|}_{V_{A}}\big{\|}U^{k} \big{\|}_{V^{\prime}_{A}}+\|\gamma-\gamma_{\eta}\|_{L^{1}(\mathbb{R}^{n})} \big{\|}U^{k}\big{\|}_{L^{2}(\Omega\cup\Omega_{I})}^{2}\\ \leq\frac{\tau}{4\mu}\big{\|}\gamma_{\eta}*U^{k}\big{\|}_{V_{A}}^ {2}+\frac{\mu}{\tau}\big{\|}U^{k}\big{\|}_{V^{\prime}_{A}}^{2}+\|\gamma-\gamma _{\eta}\|_{L^{1}(\mathbb{R}^{n})}\big{\|}U^{k}\big{\|}_{L^{2}(\Omega\cup\Omega _{I})}^{2}\\ =\left(\frac{\tau}{4\mu}\|\gamma_{\eta}\|_{L^{1}(\mathbb{R}^{n})} ^{2}+\frac{\tau\beta}{4\mu}\|\nabla\gamma_{\eta}\|_{L^{1}(\mathbb{R}^{n})}^{2} +\|\gamma-\gamma_{\eta}\|_{L^{1}(\mathbb{R}^{n})}\right)\big{\|}U^{k}\big{\|}_ {L^{2}(\Omega\cup\Omega_{I})}^{2}+\frac{\mu}{\tau}\big{\|}U^{k}\big{\|}_{V^{ \prime}_{A}}^{2}.\]
Now, combining this with the previous estimate and using (3.3) and (3.6) we obtain
\[\left(\xi-\left(1+C_{I}^{2}\right)\left(\tau C_{\eta}^{2}/4\mu+\tau\beta\hat{ C}_{\eta}^{2}/4\mu+\eta\right)\right)\big{\|}U^{k}\big{\|}^{2}\leq 0.\]
Then, choosing \(\eta=\xi/(2+2C_{I}^{2})\) and \(\tau<2\xi\mu/((C_{\eta}^{2}+\beta\hat{C}_{\eta}^{2})(1+C_{I}^{2}))\) we obtain that \(U^{k}=0\) and, hence, \(u_{1}^{k}=u_{2}^{k}\). For \(\beta=0\), \(0\leq\xi<C_{\gamma}(1+C_{I}^{2})\) and \(\tau<\mu/(C_{\gamma}(1+C_{I}^{2})-\xi)\) we obtain uniqueness of the solution by similar arguments as above.
### Sharp interfaces
In this section we study the conditions under which the phase-field variable can attain sharp interfaces.
**Theorem 4.2** (Sharp interfaces, \(\beta>0\)).: _Let \((\theta^{k},u^{k},w^{k})\) be a solution of (4.1) and let \(\gamma\) satisfy (2.2). Then for \(\xi>0\) and \(\beta>0\) at each time step the following projection formula holds true for the phase-field variable \(u^{k}\):_
\[u^{k}=P_{[0,1]}\left(\frac{1}{\xi}g^{k}\right)=\begin{cases}1& \text{if}\quad g^{k}\geq\xi,\\ 0&\text{if}\quad g^{k}\leq 0,\\ g^{k}/\xi&\text{if}\quad g^{k}\in(0,\xi),\end{cases} \tag{4.6}\] \[\text{with}\quad g^{k}:=w^{k}+\gamma*u^{k}+c_{F}m(\theta^{k-1})-c_{F }/2. \tag{4.7}\]
_Moreover, for \(\xi=0\) and \(\beta>0\) we have_
\[u^{k}\in\frac{1}{2}\left(1+\operatorname{sign}(g^{k})\right)=\begin{cases}\{1 \}&\text{if}\quad g^{k}>0,\\ \{0\}&\text{if}\quad g^{k}<0,\\ {}[0,1]&\text{if}\quad g^{k}=0,\end{cases}\]
_where \(\operatorname{sign}(0)=[-1,1]\). Furthermore, if \(|\{g^{k}=0\}|=0\), i.e., the set where \(g^{k}=0\) has measure zero, then \(u^{k}\) can be discontinuous and attains only \(u^{k}\in\{0,1\}\)._
Proof.: The proof follows a similar strategy as in [14] and we omit it for brevity.
**Corollary 4.1** (Projection formula, \(\beta=0\)).: _For the Allen-Cahn case (\(\beta=0\)) and for \(\xi\geq 0\) a phase-field variable \(u^{k}\) also admits a representation formula:_
\[u^{k}:=P_{[0,1]}\left(\frac{1}{\mu/\tau+\xi}g^{k}\right)\quad\text{with}\quad g ^{k}:=\frac{\mu}{\tau}u^{k-1}+\gamma*u^{k}+c_{F}m(\theta^{k-1})-\frac{c_{F}}{2}. \tag{4.8}\]
_However, since \(\mu/\tau+\xi>0\), no sharp interfaces with a jump-discontinuity can occur in this case, unless the solution at a previous time step already has a jump discontinuity. Furthermore, in the steady-state the corresponding solution admits:_
\[u=P_{[0,1]}\left(\frac{1}{\xi}\left(\gamma*u-c_{F}/2+c_{F}m(\theta)\right) \right),\quad\xi>0.\]
_Then, if \(\xi=0\) and \(|\{\gamma*u-c_{F}/2+c_{F}m(\theta)=0\}|=0\), i.e., the set where \(\gamma*u-c_{F}/2+c_{F}m(\theta)=0\) has measure zero, the solution \(u\) can admit only pure phases, \(u\in\{0,1\}\)._
**Remark 4.1**.: _We note that the projection formula (4.6) or (4.8) provides a useful insight into stability and regularity properties of the solution. Furthermore, for \(\beta=0\) (the Allen-Cahn case), using explicit discretization of the convolution and coupling terms, the formula (4.8) can be used to directly evaluate the solution \(u^{k}\), provided \(\gamma*u^{k-1}\), without the need to solve the corresponding nonlocal system (see Section 6)._
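A minimal one-dimensional sketch of this explicit evaluation is given below. Everything in it is an illustrative assumption rather than the setting of the paper: a periodic boundary treatment replaces the nonlocal flux condition \(\mathcal{N}u=0\), the temperature field is frozen, the kernel is a simple top-hat, and the parameter values are arbitrary. The update advances \(u\) by applying (4.8) pointwise, with \(\gamma*u^{k-1}\) computed by FFT.

```python
import numpy as np

# Illustrative 1-D sketch of the explicit projection update of Remark 4.1 (Allen-Cahn case, beta = 0).
# Periodic boundaries, a frozen temperature and a top-hat kernel are simplifications made only here.
N = 512
length = 1.0
h = length / N
x = np.linspace(0.0, length, N, endpoint=False)

delta = 0.05                                        # nonlocal interaction radius
c_F, mu, tau = 1.0, 1.0, 1.0e-3                     # scaling, relaxation time, time step
alpha, rho, theta_e = 0.9, 10.0, 0.5                # coupling parameters, cf. (2.11)

r = np.minimum(x, length - x)                       # periodic distance to the origin
gamma = np.where(r < delta, 1.0 / delta**3, 0.0)    # top-hat kernel, illustrative scaling
c_gamma = gamma.sum() * h                           # c_gamma = integral of gamma, cf. (2.3)
xi = c_gamma - c_F                                  # nonlocal interface parameter

def m(theta):
    return (alpha / np.pi) * np.arctan(rho * (theta_e - theta))

u = (np.abs(x - 0.5) < 0.1).astype(float)           # sharp solid seed
theta = np.zeros_like(x)                            # frozen, undercooled temperature field

gamma_hat = np.fft.fft(gamma)
for k in range(200):
    conv = np.real(np.fft.ifft(gamma_hat * np.fft.fft(u))) * h  # gamma * u^{k-1}
    g = (mu / tau) * u + conv + c_F * m(theta) - 0.5 * c_F       # g^k from (4.8), explicit terms
    u = np.clip(g / (mu / tau + xi), 0.0, 1.0)                   # projection onto [0, 1]

print("u in [%.3f, %.3f], xi = %.2f" % (u.min(), u.max(), xi))
```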
**Theorem 4.3** (Higher regularity).: _Let \((\theta^{k},u^{k},w^{k})\) be the solution of (4.1) and let \(\gamma\) satisfy (2.2) and \(\gamma\in W^{1,1}(\mathbb{R}^{n})\). Then if \(\xi>0\) and \(u^{0}\in H^{1}(\Omega)\) (for \(\beta=0\)), we obtain that \(u^{k}\in H^{1}(\Omega)\) for all \(k=1,\ldots,K\), \(\beta\geq 0\), and the following holds_
\[\left\|\nabla u^{k}\right\|\leq\frac{C}{\xi}\big{\|}g^{k}\big{\|}_{H^{1}( \Omega)}, \tag{4.9}\]
_where \(C=1\) for \(\beta>0\) and \(C=\tau\xi/(\mu+\tau\xi)<1\) for \(\beta=0\), and \(g^{k}\) is defined as in (4.7) and (4.8) for \(\beta>0\) and \(\beta=0\), respectively. Additionally, for \(\xi\geq 0\) and \(\beta>0\) we also have that \(\lambda_{\pm}^{k}\) in (4.2) is in \(H^{1}(\Omega)\) for \(k=1,\ldots,K\)._
Proof.: Since \(u^{k}\in L^{\infty}(\mathbb{R}^{n})\) and \(\gamma\in W^{1,1}(\mathbb{R}^{n})\) by Young's inequality we have that \(\gamma*u^{k}\in W^{1,\infty}(\mathbb{R}^{n})\). Furthermore, since \(\theta^{k-1}\in H^{1}(\Omega)\), from (2.12) it follows that
\[\left\|\nabla m(\theta^{k-1})\right\|\leq\left\|m^{\prime}(\theta^{k-1}) \right\|_{L^{\infty}(\Omega)}\left\|\nabla\theta^{k-1}\right\|\leq C,\]
and, hence, \(m(\theta^{k-1})\in H^{1}(\Omega)\). Now, let \(\beta=0\) and \(k=1\) then \(g^{1}\in H^{1}(\Omega)\) and by the stability of the \(L^{2}-\)projection in \(H^{1}\) it directly follows that
\[\left\|\nabla u^{1}\right\|\leq(\mu/\tau+\xi)^{-1}\big{\|}\nabla g^{1}\big{\|}\leq C,\quad C>0,\]
and hence \(u^{1}\in H^{1}(\Omega)\). Applying an induction argument it is easy to see that \(u^{k}\in H^{1}(\Omega)\). The proof for \(\beta>0\) follows similar lines. The regularity of the Lagrange multiplier \(\lambda^{k}\in H^{1}(\Omega)\) for \(\beta>0\) follows directly by invoking the regularities of \(u^{k},w^{k},\theta^{k-1}\in H^{1}(\Omega)\) and the fact that \(\lambda^{k}=-\xi u^{k}+\gamma*u^{k}-c_{F}/2+c_{F}m(\theta^{k-1})+w^{k}\).
## 5. Continuous problem
In this section, we establish the existence of the solution of problem (3.9) by using a Rothe method and studying the limit \(\tau\to 0\) in (4.1). We also discuss the conditions when sharp interfaces occur.
### Interpolants
Let \(X\) be either \(V_{B}\), \(L^{2}(\varOmega)\), \(V_{A}\) or \(V_{\theta}\) and for a given sequence of functions \(\{z^{k}\}_{k=1}^{K}\subset X\) we introduce piecewise constant and piecewise linear interpolants:
\[\begin{split}\overline{z}_{\tau}(t)&:=z^{k},\quad\underline{z}_{\tau}(t):=z^{k-1},\quad\quad\quad t\in(t_{k-1},t_{k}],\quad k=1,\ldots,K,\\ \hat{z}_{\tau}(t)&:=\frac{t-t_{k-1}}{\tau}z^{k}+\frac{t_{k}-t}{\tau}z^{k-1},\quad t\in[t_{k-1},t_{k}],\quad k=1,\ldots,K,\end{split} \tag{5.1}\]
where \(\tau=T/K\). Then, \(\partial_{t}\hat{z}_{\tau}=(z^{k}-z^{k-1})/\tau\), \(t\in[t_{k-1},t_{k}]\), \(k=1,\ldots,K\) and we have
\[\begin{split}\|\bar{z}_{\tau}\|_{L^{2}(0,T;X)}^{2}& =\tau\sum_{k=1}^{K}\big{\|}z^{k}\big{\|}_{X}^{2},\quad\|\underline{z}_{\tau} \|_{L^{2}(0,T;X)}^{2}=\tau\sum_{k=0}^{K-1}\big{\|}z^{k}\big{\|}_{X}^{2},\\ \|\partial_{t}\hat{z}_{\tau}\|_{L^{2}(0,T;X)}^{2}&= \tau\sum_{k=1}^{K}\big{\|}(z^{k}-z^{k-1})/\tau\big{\|}_{X}^{2}=\frac{1}{\tau} \sum_{k=1}^{K}\big{\|}z^{k}-z^{k-1}\big{\|}_{X}^{2}.\end{split}\]
The following holds true for the above interpolants (5.1), (see Proposition 3.9 in [18]):
\[\|\hat{z}_{\tau}\|_{L^{2}(0,T;Z)}^{2}\leq\tau\|z_{0}\|_{Z}^{2}+2 \|\bar{z}_{\tau}\|_{L^{2}(0,T;Z)}^{2}, \tag{5.2}\] \[\|\bar{z}_{\tau}-\hat{z}_{\tau}\|_{L^{2}(0,T;Z)}^{2}=\frac{\tau^ {2}}{3}\|\partial_{t}\hat{z}_{\tau}\|_{L^{2}(0,T;Z)}^{2}, \tag{5.3}\]
where \(Z=H^{1}(\varOmega)\) or \(Z=L^{2}(\varOmega)\). We also make use of the following property
\[\|\underline{z}_{\tau}-\hat{z}_{\tau}\|_{L^{2}(Q)}^{2}\leq\tau\| \partial_{t}\hat{z}_{\tau}\|_{L^{2}(0,T;(H^{1}(\varOmega))^{\prime})}^{2}+ \tau\|\bar{z}_{\tau}\|_{L^{2}(0,T;H^{1}(\varOmega))}^{2}+\tau^{2}\big{\|}z^{0} \big{\|}_{H^{1}(\varOmega)}^{2}, \tag{5.4}\]
which follows directly by realizing that
\[\begin{split}&\|\underline{z}_{\tau}-\hat{z}_{\tau}\|_{L^{2}(Q)}^{2} =\sum_{k=1}^{K}\int_{t_{k-1}}^{t_{k}}\frac{(t-t_{k-1})^{2}}{\tau^{2}}\big{\|}z^ {k-1}-z^{k}\big{\|}^{2}\,\mathrm{d}t=\frac{\tau}{3}\sum_{k=1}^{K}\big{\|}z^{k-1 }-z^{k}\big{\|}^{2}\\ &=\frac{\tau^{2}}{3}\sum_{k=1}^{K}\bigg{\langle}\frac{z^{k}-z^{k- 1}}{\tau},z^{k}-z^{k-1}\bigg{\rangle}_{H^{1}(\varOmega)}\leq\frac{\tau^{2}}{3} \sum_{k=1}^{K}\|\partial_{t}\hat{z}_{\tau}\|_{(H^{1}(\varOmega))^{\prime}}\|z^ {k}-z^{k-1}\big{\|}_{H^{1}(\varOmega)}\\ &\qquad\qquad\qquad\leq\tau\|\partial_{t}\hat{z}_{\tau}\|_{L^{2}( 0,T;(H^{1}(\varOmega))^{\prime})}^{2}+\tau\|\bar{z}_{\tau}\|_{L^{2}(0,T;H^{1} (\varOmega))}^{2}+\tau^{2}\big{\|}z^{0}\big{\|}_{H^{1}(\varOmega)}^{2}.\end{split}\]
### Existence of a solution
Now, we can rewrite the system (4.1a)-(4.1b), (4.2) in terms of the interpolants as follows
\[(\partial_{t}\hat{\theta}_{\tau}(t),\phi)+D\big{(}\nabla\overline{ \theta}_{\tau}(t),\nabla\phi\big{)}-L(\partial_{t}\hat{u}_{\tau}(t),\phi) =0,\quad\forall\phi\in V_{\theta} \tag{5.5a}\] \[\mu(\partial_{t}\hat{u}_{\tau}(t),\psi)+(\overline{w}_{\tau}(t), \psi)+\beta(\nabla\overline{w}_{\tau}(t),\nabla\psi) =0,\quad\forall\psi\in V_{A},\] (5.5b) \[\big{(}B\overline{u}_{\tau}(t)-c_{F}\overline{u}_{\tau}(t)+c_{F}/2 -c_{F}m(\underline{\theta}_{\tau}(t))-\overline{w}_{\tau}(t)+\overline{ \lambda}_{\tau}(t),\zeta\big{)} =0,\quad\forall\zeta\in V_{B},\] (5.5c) \[\big{(}\eta-\overline{\lambda}_{+,\tau}(t),1-\overline{u}_{\tau}(t )\big{)}\geq 0,\quad\big{(}\eta-\overline{\lambda}_{-,\tau}(t),\overline{u}_{ \tau}(t)\big{)} \geq 0,\quad\forall\eta\in M. \tag{5.5d}\]
By analyzing the system above for \(\tau\to 0\) we derive the following existence result.
**Theorem 5.1**.: _Let \(\gamma\in W^{1,1}(\mathbb{R}^{n})\) satisfy (2.2), and let \(\theta^{0}\in V_{\theta}\), \(u^{0}\in\mathcal{K}\). Then for \(\xi\geq 0\), \(0\leq\beta\leq\min\left\{\frac{\mu D}{2c_{F}C_{m}L},\frac{\mu^{2}D^{2}}{4c_{F}^{2}C_{m}^{2}L^{2}}\right\}\) there exists a solution quadruple
\((\theta,u,w,\lambda)\) solving (3.12) and fulfilling the regularity_
\[\theta\in L^{2}(0,T;V_{\theta}),\quad u\in L^{\infty}(\hat{Q}),\quad w \in L^{2}(0,T;V_{A}),\quad\lambda\in L^{2}(0,T;V_{A}),\\ \text{and}\qquad\partial_{t}\theta\in L^{2}(0,T;V_{\theta}^{ \prime}),\quad\partial_{t}u\in L^{2}(0,T;V_{A}^{\prime}),\]
_where we recall \(L^{2}(0,T;V_{A})\cong L^{2}(0,T;H^{1}(\Omega))\) for \(\beta>0\). Additionally, if i) \(\xi>0\), \(\beta>0\) or ii) \(\xi\geq 0\), \(\beta=0\), and \(u^{0}\in H^{1}(\Omega)\), then \(u\) admits an improved regularity_
\[u\in L^{\infty}(0,T;H^{1}(\Omega))\cap L^{\infty}(\hat{Q}). \tag{5.6}\]
Proof.: For compactness of the presentation in some cases we will use a generic constant \(C>0\) in the estimates, which is independent of \(\tau\). The inclusion \(u\in L^{\infty}(\hat{Q})\) follows immediately from the fact that \(0\leq\overline{u}_{\tau}(t)\leq 1\) for a.e. \(t\in(0,T)\).
* _Estimates for the time derivative of the phase-field variable_ \(\partial_{t}\hat{u}_{\tau}\)_._
First, we consider the following estimate for the discrete time-derivative,
\[\frac{\mu}{\tau}\big{\|}u^{k}-u^{k-1}\big{\|}_{V_{A}^{\prime}}^{2 }\leq\xi\big{\|}u^{k-1}\big{\|}^{2}-\xi\big{\|}u^{k}\big{\|}^{2}+\big{(}\gamma \ast u^{k},u^{k}\big{)}-\big{(}\gamma\ast u^{k-1},u^{k-1}\big{)}\\ +\big{(}c_{F}-2c_{F}m(\theta^{k-1}),u^{k-1}-u^{k}\big{)},\]
which is obtained from the fact that \(u^{k}\) is a solution of (4.3) and \(J_{k}(u^{k})\leq J_{k}(u^{k-1})\), \(k=1,\ldots,K\). From the above expression using Young's inequality we further obtain
\[\mu\big{\|}\partial_{t}\hat{u}_{\tau}\big{\|}_{L^{2}(0,T;V_{A}^{ \prime})}^{2}=\frac{\mu}{\tau}\sum_{k=1}^{K}\big{\|}u^{k}-u^{k-1}\big{\|}_{V_ {A}^{\prime}}^{2}\leq\xi\big{\|}u^{0}\big{\|}^{2}-\xi\big{\|}u^{K}\big{\|}^{2} \\ +\big{\|}u^{0}\big{\|}\big{\|}\gamma\ast u^{0}\big{\|}+\big{\|}u^ {K}\big{\|}\big{\|}\gamma\ast u^{K}\big{\|}+\sum_{k=1}^{K}\Big{|}\big{\langle} c_{F}-2c_{F}m(\theta^{k-1}),u^{k-1}-u^{k}\big{\rangle}_{V_{A}}\Big{|}\\ \leq(\xi+C_{\gamma})\big{\|}u^{0}\big{\|}_{L^{2}(\Omega\cup\Omega _{I})}^{2}+C_{\gamma}\big{\|}u^{K}\big{\|}_{L^{2}(\Omega\cup\Omega_{I})}^{2}\\ +\frac{\mu}{2}\big{\|}\partial_{t}\hat{u}_{\tau}\big{\|}_{L^{2}( 0,T;V_{A}^{\prime})}^{2}+\frac{c_{F}^{2}}{2\mu}\big{\|}1-2m(\underline{ \theta}_{\tau})\big{\|}_{L^{2}(0,T;V_{A})}^{2}, \tag{5.7}\]
where \(C_{\gamma}\) as in (2.3). Next, we estimate the last term in (5.7):
\[\|1-2m(\underline{\theta}_{\tau})\|_{L^{2}(0,T;V_{A})}^{2}=\|1-2m (\underline{\theta}_{\tau}(t))\|_{L^{2}(Q)}^{2}+4\beta\|\nabla m(\underline{ \theta}_{\tau})\|_{L^{2}(Q)}^{2}\\ \leq C+4\beta C_{m}^{2}\left(\tau\big{\|}\nabla\theta^{0}\big{\|} ^{2}+\big{\|}\nabla\overline{\theta}_{\tau}\big{\|}_{L^{2}(Q)}^{2}\right), \tag{5.8}\]
where in the last estimate we have used (2.12) and the relationship
\[\|\nabla\underline{\theta}_{\tau}\|_{L^{2}(Q)}^{2}\leq\tau\sum_{k=1}^{K}\big{\|} \nabla\theta^{k}\big{\|}^{2}+\tau\big{\|}\nabla\theta^{0}\big{\|}^{2}=\big{\|} \nabla\overline{\theta}_{\tau}\big{\|}_{L^{2}(Q)}^{2}+\tau\big{\|}\nabla \theta^{0}\big{\|}^{2}. \tag{5.9}\]
To bound the last term in (5.8) we introduce the orthogonal projection \(P:L^{2}(\Omega)\to L^{2}(\Omega)\), \(P(v):=v-v^{\Omega}\), where \(v^{\Omega}:=\frac{1}{|\Omega|}\int_{\Omega}v\,\mathrm{d}x\). We also have stability estimates:
\[\|P(v)\|_{\mathcal{X}}\leq\|v\|_{\mathcal{X}},\quad\forall v\in\mathcal{X}, \tag{5.10}\]
where \(\mathcal{X}\) is either \(L^{2}\), \(V_{A}\) or \(V_{A}^{\prime}\). For \(\mathcal{X}\in\{L^{2},V_{A}\}\) the property follows directly, while for \(V_{A}^{\prime}\) this is obtained by a duality argument. Then, by the linearity of the projection operator and the fact that \(\nabla P(\theta^{k})=\nabla\theta^{k}\), from (4.1) we obtain the following
\[\big{(}P(\theta^{k})-P(\theta^{k-1}),\phi\big{)}+D\tau\big{(}\nabla\theta^{k}, \nabla\phi\big{)}-L\big{(}P(u^{k}-u^{k-1}),\phi\big{)}=0,\quad\forall\phi\in V _{\theta}.\]
Taking \(\phi=2\tau P(\theta^{k})\) and using the relation \(2a(a-b)=a^{2}+(a-b)^{2}-b^{2}\) leads to
\[\big{\|}P(\theta^{k})\big{\|}^{2}+2\tau D\big{\|}\nabla\theta^{k} \big{\|}^{2}=\big{\|}P(\theta^{k-1})\big{\|}^{2}-\big{\|}P(\theta^{k})-P(\theta ^{k-1})\big{\|}^{2}\\ +2L\big{(}P(u^{k}-u^{k-1}),P(\theta^{k})\big{)}\leq\big{\|}P( \theta^{k-1})\big{\|}^{2}+2L\big{\|}P(u^{k}-u^{k-1})\big{\|}_{V^{\prime}_{A}} \big{\|}P(\theta^{k})\big{\|}_{V_{A}}\\ \leq\big{\|}P(\theta^{k-1})\big{\|}^{2}+\frac{L}{q}\big{\|}P(u^{k }-u^{k-1})\big{\|}_{V^{\prime}_{A}}^{2}+Lq\left(\beta\big{\|}\nabla P(\theta^{ k})\big{\|}^{2}+\big{\|}P(\theta^{k})\big{\|}^{2}\right)\\ \leq\big{\|}P(\theta^{k-1})\big{\|}^{2}+\frac{L}{q}\big{\|}u^{k} -u^{k-1}\big{\|}_{V^{\prime}_{A}}^{2}+Lq(\beta+C_{P}^{2})\big{\|}\nabla\theta^ {k}\big{\|}^{2},\]
where in the last two estimates we have used Young's inequality with the constant \(q>0\), (5.10), and (3.1). Now, taking a summation from \(k=1\ldots K\) we obtain
\[(2\tau D-Lq(\beta+C_{P}^{2}))\sum_{k=1}^{K}\big{\|}\nabla\theta^{k}\big{\|}^{ 2}\leq\big{\|}\theta^{0}\big{\|}^{2}+\frac{L}{q}\sum_{k=1}^{K}\big{\|}u^{k}-u^ {k-1}\big{\|}_{V^{\prime}_{A}}^{2},\]
and now choosing \(q=\tau D/(L(\beta+C_{P}^{2}))\) leads to
\[\big{\|}\nabla\overline{\theta}_{\tau}\big{\|}_{L^{2}(Q)}^{2}\leq D^{-1}\big{\|} \theta^{0}\big{\|}^{2}+D^{-2}L^{2}(\beta+C_{P}^{2})\|\partial_{t}\hat{u}_{ \tau}\|_{L^{2}(0,T;V^{\prime}_{A})}^{2}. \tag{5.11}\]
Using the above estimate and (5.8) in (5.7) we obtain
\[\frac{\mu}{2}\|\partial_{t}\hat{u}_{\tau}\|_{L^{2}(0,T;V^{\prime} _{A})}^{2}\leq C\left(1+\big{\|}u^{0}\big{\|}^{2}+\big{\|}u^{K}\big{\|}^{2}+ \big{\|}\theta^{0}\big{\|}^{2}+\tau\big{\|}\nabla\theta^{0}\big{\|}^{2}\right) \\ +\frac{2\beta C_{m}^{2}c_{F}^{2}L^{2}(\beta+C_{P}^{2})}{\mu D^{2} }\|\partial_{t}\hat{u}_{\tau}\|_{L^{2}(0,T;V^{\prime}_{A})}^{2}. \tag{5.12}\]
For \(\beta(\beta+C_{P}^{2})<\mu^{2}D^{2}/(2c_{F}^{2}C_{m}^{2}L^{2})\), or alternatively for \(\beta\leq\min\left\{\frac{\mu D}{2c_{F}C_{m}L},\frac{\mu^{2}D^{2}}{4c_{F}^{2}C_ {m}^{2}L^{2}}\right\}\)
\[\left(\frac{\mu}{2}-\frac{2\beta C_{m}^{2}c_{F}^{2}L^{2}(\beta+C_{P}^{2})}{\mu D ^{2}}\right)\|\partial_{t}\hat{u}_{\tau}\|_{L^{2}(0,T;V^{\prime}_{A})}^{2}\leq C,\]
where \(C\) depends on \(u^{0}\), \(u^{K}\), \(\theta^{0}\) and, hence, we have \(\partial_{t}\hat{u}_{\tau}\in L^{2}(0,T;V^{\prime}_{A})\).
* _Estimates for the temperature_ \(\overline{\theta}_{\tau}\) _and_ \(\partial_{t}\overline{\theta}_{\tau}\)_._
Using the regularity \(\partial_{t}\hat{u}_{\tau}\in L^{2}(0,T;V^{\prime}_{A})\), from (5.11) it immediately follows that
\[\big{\|}\nabla\overline{\theta}_{\tau}\big{\|}_{L^{2}(Q)}^{2}\leq C\left(\big{\|} \theta^{0}\big{\|}^{2}+\|\partial_{t}\hat{u}_{\tau}\|_{L^{2}(0,T;V^{\prime}_{A} )}^{2}\right)\leq C, \tag{5.13}\]
and, hence, \(\overline{\theta}_{\tau}\in L^{2}(0,T;V_{\theta})\), and from (5.9) also \(\underline{\theta}_{\tau}\in L^{2}(0,T;V_{\theta})\). Then from (2.12) we have \(m(\overline{\theta}_{\tau}),m(\underline{\theta}_{\tau})\in L^{2}(0,T;V_{\theta})\cap L^{\infty}(\hat{Q})\). Next, from (5.5) we obtain
\[\Big{|}\Big{(}\partial_{t}\hat{\theta}_{\tau}(t),\phi\Big{)}\Big{|} \leq D\big{|}\big{(}\nabla\overline{\theta}_{\tau}(t),\nabla\phi\big{)} \big{|}+L\big{|}\langle\partial_{t}\hat{u}_{\tau}(t),\phi\rangle_{V_{A}}\big{|}\\ \leq\Big{(}D\big{\|}\nabla\overline{\theta}_{\tau}(t)\big{\|}+L \max\{1,\beta\}\|\partial_{t}\hat{u}_{\tau}(t)\|_{V^{\prime}_{A}}\Big{)}\,\| \phi\|_{V_{\theta}},\]
where we have repeatedly used the Cauchy inequality and \(\|\phi\|_{V_{A}}\leq\max\{1,\beta\}\|\phi\|_{V_{\theta}}\). Now, dividing by \(\|\phi\|_{V_{\theta}}\neq 0\), taking the supremum over all such \(\phi\), and then taking the square of the above estimate and integrating over \((0,T)\) leads to
\[\|\partial_{t}\hat{\theta}_{\tau}\|_{L^{2}(0,T;V^{\prime}_{\theta})}^{2}\leq C \left(\big{\|}\overline{\theta}_{\tau}\big{\|}_{L^{2}(0,T;V_{\theta})}^{2}+\| \partial_{t}\hat{u}_{\tau}\|_{L^{2}(0,T;V^{\prime}_{A})}^{2}\right)\leq C \tag{5.14}\]
and, hence, \(\partial_{t}\hat{\theta}_{\tau}\in L^{2}(0,T;V^{\prime}_{\theta})\).
* _Estimates for the chemical potential_ \(\overline{w}_{\tau}\)_._
Using the fact that \(\overline{w}_{\tau}(t)=-\mu\mathcal{G}(\partial_{t}\hat{u}_{\tau}(t))\), where \(\mathcal{G}:V_{A}^{\prime}\to V_{A}\) is defined as in (3.4), together with (3.5) we obtain
\[\left\|\overline{w}_{\tau}\right\|_{L^{2}(0,T;V_{A})}^{2}=\mu^{2}\|\mathcal{G} (\partial_{t}\hat{u}_{\tau})\|_{L^{2}(0,T;V_{A})}^{2}=\mu^{2}\|\partial_{t} \hat{u}_{\tau}\|_{L^{2}(0,T;V_{A}^{\prime})}^{2}\leq C,\]
and hence \(\overline{w}_{\tau}\in L^{2}(0,T;V_{A})\cong L^{2}(0,T;H^{1}(\Omega))\) (for \(\beta>0\)).
* _Estimates for the Lagrange multiplier_\(\overline{\lambda}_{\tau}\)_._
Consider (5.5c) and recall that \(B\overline{u}_{\tau}-c_{F}\overline{u}_{\tau}=\xi\overline{u}_{\tau}-\gamma \ast\overline{u}_{\tau}\), then
\[\overline{\lambda}_{\tau}(t)=-\xi\overline{u}_{\tau}(t)+\gamma\ast\overline{ u}_{\tau}(t)-c_{F}/2+c_{F}m(\underline{\theta}_{\tau}(t))+\overline{w}_{\tau}(t). \tag{5.15}\]
Using Young's inequality, (2.12), and the fact that \(u\in L^{\infty}(\hat{Q})\) we can estimate
\[\left\|\overline{\lambda}_{\tau}(t)\right\|\leq\left\|\gamma\ast\overline{u}_ {\tau}(t)\right\|+\|\xi\overline{u}_{\tau}(t)\|+\|c_{F}m(\underline{\theta}_{ \tau}(t))-c_{F}/2\|+\|\overline{w}_{\tau}(t)\|\leq C+\|\overline{w}_{\tau}(t) \|_{V_{A}}.\]
Squaring and integrating over \((0,T)\) leads to \(\|\overline{\lambda}_{\tau}\|_{L^{2}(Q)}^{2}\leq C\) and, hence, \(\overline{\lambda}_{\tau}\in L^{2}(Q)\).
* _Higher regularity estimates for the phase-field variable_\(\overline{u}_{\tau}\)_._
We show that for \(\beta>0\), \(\xi>0\) or \(\beta=0\), \(\xi\geq 0\), \(\overline{u}_{\tau}\in L^{2}(0,T;H^{1}(\Omega))\). For \(\beta>0\), \(\xi>0\) the proof is similar to that in [14], so we consider the case \(\beta=0\), \(\xi\geq 0\). Then, from (4.8) and (4.9) it follows that for a.e. \(t\in(0,T)\) the following holds
\[\overline{u}_{\tau}(t)=P_{[0,1]}\left(\frac{1}{\mu/\tau+\xi}\overline{g}_{ \tau}(t)\right)\quad\text{and}\quad\|\nabla\overline{u}_{\tau}(t)\|\leq\frac{ 1}{\mu/\tau+\xi}\|\nabla\overline{g}_{\tau}(t)\|,\]
where \(\overline{g}_{\tau}:=(\mu/\tau)\underline{u}_{\tau}+\gamma\ast\overline{u}_{ \tau}+c_{F}m(\underline{\theta}_{\tau})-c_{F}/2\). Then, using Young's inequality, (2.12) and \(\underline{u}_{\tau}(t)\in L^{\infty}(\hat{Q})\) we estimate
\[\|\nabla\overline{u}_{\tau}(t)\|\leq\frac{1}{\mu/\tau+\xi}\left( \frac{\mu}{\tau}\|\nabla\underline{u}_{\tau}(t)\|+\|\nabla(\gamma\ast \overline{u}_{\tau}(t))\|+\|\nabla\left(c_{F}m(\underline{\theta}_{\tau}(t) )-c_{F}/2\right)\|\right)\] \[\leq\frac{1}{\mu/\tau+\xi}\left(\frac{\mu}{\tau}\|\nabla \underline{u}_{\tau}(t)\|+\|\nabla\gamma\|_{L^{1}(\mathbb{R}^{n})}\| \overline{u}_{\tau}(t))\|_{L^{2}(\Omega\cup\Omega_{I})}+\|c_{F}m^{\prime}( \underline{\theta}_{\tau}(t))\|_{L^{\infty}(\Omega)}\|\nabla\underline{\theta} _{\tau}(t)\|\right)\] \[\leq\frac{1}{1+\xi\tau/\mu}\|\nabla\underline{u}_{\tau}(t)\|+ \frac{C}{\mu/\tau+\xi}\left(1+\|\nabla\underline{\theta}_{\tau}(t)\|\right) \leq\|\nabla\underline{u}_{\tau}(t)\|+C\tau\left(1+\|\nabla\underline{\theta} _{\tau}(t)\|\right),\]
where in the last estimate we have used that \(1/(1+\xi\tau/\mu)\leq 1\) and \(C/(\mu/\tau+\xi)\leq C\tau\), and \(C>0\) is independent of \(\tau\). From the above estimate for \(k=1\) we deduce that
\[\left\|\nabla u^{1}\right\|\leq\left\|\nabla u^{0}\right\|+C\tau(1+\left\| \nabla\theta^{0}\right\|)\leq C+C\tau(1+\left\|\nabla\theta^{0}\right\|),\]
and by an induction argument we obtain that for \(k=1,\ldots,K\) it holds
\[\left\|\nabla u^{k}\right\|\leq\left\|\nabla u^{k-1}\right\|+C\tau\left(1+\left\| \nabla\theta^{k-1}\right\|\right)\leq C+C\tau\sum_{i=1}^{k}(1+\left\|\nabla \theta^{i}\right\|).\]
Taking the supremum over \(k\) in the above estimate leads to
\[\operatorname*{ess\,sup}_{t\in(0,T)}\|\nabla\overline{u}_{\tau}(t )\|=\sup_{k}\left\|\nabla u^{k}\right\|\leq C+C\sup_{k}\tau\sum_{i=1}^{k} \left(1+\left\|\nabla\theta^{i}\right\|\right)\\ =C+C\tau K+\tau\sum_{i=1}^{K}\left\|\nabla\theta^{i}\right\|\leq C \left(1+\left\|\overline{\theta}_{\tau}\right\|_{L^{1}(0,T;H^{1}(\Omega))} \right),\]
which implies that \(\overline{u}_{\tau}\in L^{\infty}(0,T;H^{1}(\Omega))\) for \(\beta\geq 0\) and \(\xi\geq 0\).
* _Higher regularity estimates for the Lagrange multiplier_\(\overline{\lambda}_{\tau}\)_._
For \(\beta>0\) and \(\xi=0\), in general, we cannot expect higher regularity of the phase-field variable \(\overline{u}_{\tau}\). However, we can demonstrate that the corresponding Lagrange multiplier satisfies \(\overline{\lambda}_{\tau}(t)\in V_{A}\) for \(\xi\geq 0\) and \(\beta>0\). Indeed, let \(\xi\geq 0\); then from (5.5c), (5.15), using the regularity of \(\overline{w}_{\tau}\in L^{2}(0,T;V_{A})\cong L^{2}(0,T;H^{1}(\varOmega))\), \(\overline{u}_{\tau}\in L^{2}(0,T;H^{1}(\varOmega))\) (for \(\xi>0\)), \(m(\overline{\theta}_{\tau})\in L^{2}(0,T;H^{1}(\varOmega))\) and the fact that
\[\left\|\gamma*u\right\|_{L^{2}(0,T;V_{A})}^{2}\leq\left(\beta\|\nabla\gamma \right\|_{L^{1}(\mathbb{R}^{n})}^{2}+\left\|\gamma\right\|_{L^{1}(\mathbb{R}^{ n})}^{2}\right)\left\|\overline{u}_{\tau}\right\|_{L^{\infty}(\dot{Q})}^{2}\leq C,\]
it follows that \(\overline{\lambda}_{\tau}\in L^{2}(0,T;H^{1}(\varOmega))\). For \(\beta=0\) we have \(\overline{\lambda}_{\tau}\in L^{2}(Q)\) and, in general, no higher regularity can be expected, since in (5.15) \(\overline{w}_{\tau}=-\partial_{t}\hat{u}_{\tau}\in L^{2}(Q)\).
Summarizing the above we derive the following energy estimate for \(\xi\geq 0\), \(\beta\geq 0\):
\[\left\|\partial_{t}\hat{u}_{\tau}\right\|_{L^{2}(0,T;V_{A}^{ \prime})}^{2}+\left\|\partial_{t}\hat{\theta}_{\tau}\right\|_{L^{2}(0,T;V_{ \theta}^{\prime})}^{2}+\left\|\overline{u}_{\tau}\right\|_{L^{\infty}(\dot{Q} )}^{2}+\left\|\overline{\theta}_{\tau}\right\|_{L^{2}(0,T;V_{\theta})}^{2}\\ +\left\|\overline{w}_{\tau}\right\|_{L^{2}(0,T;V_{A})}^{2}+\left\| \overline{\lambda}_{\tau}\right\|_{L^{2}(Q)}^{2}\leq C, \tag{5.16}\]
where the underlying constant depends on the initial data \(u^{0}\) and \(\theta^{0}\).
* _Passing to the limit \(\tau\to 0\)._
From the energy estimate (5.16), using the Banach-Alaoglu theorem, we can extract weakly convergent subsequences. That is, there exist functions \(u\), \(\theta\), \(\lambda\) and \(w\) (\(\beta>0\)) such that as \(\tau\to 0\) (equivalently \(K\to\infty\)) the following holds
\[\begin{aligned}
\partial_{t}\hat{u}_{\tau}&\rightharpoonup\partial_{t}u&&\text{weakly in }L^{2}(0,T;V_{A}^{\prime}),\\
\partial_{t}\hat{\theta}_{\tau}&\rightharpoonup\partial_{t}\theta&&\text{weakly in }L^{2}(0,T;V_{\theta}^{\prime}),\\
\overline{\theta}_{\tau}&\rightharpoonup\theta&&\text{weakly in }L^{2}(0,T;V_{\theta}),\\
\overline{u}_{\tau}&\rightharpoonup u&&\text{weakly-* in }L^{\infty}(0,T;H^{1}(\varOmega)),\\
\overline{u}_{\tau}&\rightharpoonup u&&\text{weakly-* in }L^{\infty}(Q),\\
\overline{\lambda}_{\tau}&\rightharpoonup\lambda&&\text{weakly in }L^{2}(0,T;H^{1}(\varOmega)),\\
\overline{\lambda}_{\tau}&\rightharpoonup\lambda&&\text{weakly in }L^{2}(Q),\\
\overline{w}_{\tau}&\rightharpoonup w&&\text{weakly in }L^{2}(0,T;V_{A}),
\end{aligned}\]
where by a slight abuse of notation we do not relabel the subsequence. To show that the limiting functions fulfill the variational formulation (3.9), we pass to the limit as \(\tau\to 0\) (or \(K\to\infty\)) in the time-integrated variant of the variational system (5.5):
\[\int_{0}^{T}\left(\partial_{t}\left(\hat{\theta}_{\tau}-L\hat{u} _{\tau}\right),\phi\right)\mathrm{d}t+\int_{0}^{T}D\big{(}\nabla\overline{ \theta}_{\tau},\nabla\phi\big{)}\,\mathrm{d}t=0,\quad\forall\phi\in L^{2}(0,T; V_{\theta})\\ \int_{0}^{T}\left(\mu\partial_{t}\hat{u}_{\tau}+\overline{w}_{ \tau},\psi\right)\mathrm{d}t+\int_{0}^{T}\beta(\nabla\overline{w}_{\tau}, \nabla\psi)\,\mathrm{d}t=0,\quad\forall\psi\in L^{2}(0,T;V_{A}),\\ \big{(}\xi\overline{u}_{\tau}-\gamma*\overline{u}_{\tau}+c_{F}/2-c_ {F}m(\underline{\theta}_{\tau})-\overline{w}_{\tau}+\overline{\lambda}_{\tau}, \zeta\big{)}_{L^{2}(Q)}=0,\quad\forall\zeta\in L^{2}(0,T;V_{B}),\\ \big{(}\eta-\overline{\lambda}_{+,\tau},1-\overline{u}_{\tau} \big{)}_{L^{2}(Q)}\geq 0,\quad\big{(}\eta-\overline{\lambda}_{-,\tau},\overline{u}_{ \tau}\big{)}_{L^{2}(Q)}\geq 0,\quad\forall\eta\in L^{2}(0,T;M). \tag{5.17}\]
Passing to the limit in the first two equations, which is possible due to their linearity, shows that the limiting functions \(\theta\) and \(u\) satisfy the first two equations in (3.12). Due to the nonlinearity induced by \(m(\theta)\) and the inequality constraints, respectively, we need strong convergence of the corresponding sequences before passing to the limit in the last two equations in (5.17).
Since \(\hat{\theta}_{\tau}\) is uniformly bounded in \(L^{2}(0,T;H^{1}(\varOmega))\cap W^{1,2}(0,T;(H^{1}(\varOmega))^{\prime})\), where for \(p\in[1,\infty)\), \(W^{1,p}(0,T;X)=\{u\in L^{p}(0,T;X)\colon\partial_{t}u\in L^{p}(0,T;X)\}\), which follows
from (5.13), (5.2), and (5.14), by the Aubin-Lions lemma we obtain that \(\hat{\theta}_{\tau}\to\theta\) strongly in \(L^{2}(Q)\). Then, using (5.4) it follows
\[\left\|\underline{\theta}_{\tau}-\theta\right\|_{L^{2}(Q)}^{2}\leq \left\|\underline{\theta}_{\tau}-\hat{\theta}_{\tau}\right\|_{L^{2}(Q)}^{2}+ \left\|\hat{\theta}_{\tau}-\theta\right\|_{L^{2}(Q)}^{2}\] \[\leq\tau\left\|\partial_{t}\hat{\theta}_{\tau}\right\|_{L^{2}(0, T;V_{\theta}^{\prime})}^{2}+\tau\left\|\overline{\theta}_{\tau}\right\|_{L^{2}(0,T;V_{ \theta})}^{2}+\tau^{2}\left\|\theta^{0}\right\|_{V_{\theta}}^{2}+\left\|\hat{ \theta}_{\tau}-\theta\right\|_{L^{2}(Q)}^{2},\]
and passing to the limit as \(\tau\to 0\) it follows that \(\underline{\theta}_{\tau}\to\theta\) strongly in \(L^{2}(Q)\). Now, using the uniform Lipschitz continuity of \(m\) from (2.12) we deduce that \(m(\underline{\theta}_{\tau})\to m(\theta)\) strongly in \(L^{2}(Q)\). Then, passing to the limit \(\tau\to 0\) in the third equation in (5.17) we obtain that the limiting functions \(u\), \(\theta\), \(w\), and \(\lambda\) satisfy the corresponding equation in (3.12).
Analogously, to pass to the limit in the inequality constraints we upgrade the weak convergence of \(\overline{u}_{\tau}\) to strong convergence. We consider the case \(\beta=0\) and \(\xi\geq 0\), since the remaining case \(\beta>0\) follows similar steps as in [14]. First, we note that \(\hat{u}_{\tau}\) is uniformly bounded in \(L^{2}(0,T;H^{1}(\varOmega))\), which follows directly from the bound on \(\overline{u}_{\tau}\) in \(L^{2}(0,T;H^{1}(\varOmega))\) and (5.2). Then, invoking the Aubin-Lions lemma we obtain that \(\hat{u}_{\tau}\to u\) strongly in \(L^{2}(Q)\), and using (5.3) we have that
\[\left\|\overline{u}_{\tau}-u\right\|_{L^{2}(Q)}\leq\left\|\overline{u}_{\tau }-\hat{u}_{\tau}\right\|_{L^{2}(Q)}+\left\|\hat{u}_{\tau}-u\right\|_{L^{2}(Q) }\leq\frac{\tau}{3}\|\partial_{t}\hat{u}_{\tau}\|_{L^{2}(Q)}+\left\|\hat{u}_{ \tau}-u\right\|_{L^{2}(Q)}.\]
Then, passing to the limit as \(\tau\to 0\) we obtain that \(\overline{u}_{\tau}\to u\) strongly in \(L^{2}(Q)\). Invoking the properties (3.11) and the fact that the inner product of a strongly and a weakly convergent sequence converges, we pass to the limit in (5.17)
\[0\leq(\eta,1-\overline{u}_{\tau})_{L^{2}(Q)}\to(\eta,1-u)_{L^{2}(Q)}\ \text{ and }\ \left(\overline{\lambda}_{+,\tau},1-\overline{u}_{\tau}\right)_{L^{2}(Q)}\to \left(\lambda_{+},1-u\right)_{L^{2}(Q)}.\]
Proceeding similarly for \(\overline{\lambda}_{-,\tau}\) we obtain that the limiting pair \((u,\lambda_{\pm})\) satisfy the complementarity conditions in (3.12) and this concludes the proof.
### Sharp interfaces
In the next theorem we extend the projection formula (4.6) to the time continuous case and provide conditions when sharp interfaces occur.
**Theorem 5.2** (Sharp interfaces).: _Let \((\theta,u,w,\lambda)\) be a solution of (3.12) and let \(\gamma\in W^{1,1}(\mathbb{R}^{n})\) satisfy (2.2). Then the following holds a.e. in \(\varOmega\times(0,T)\):_
\[u(t)=P_{[0,1]}\left(\xi^{-1}g(t)\right),\quad\text{where }g:=w+\gamma*u+c_{F}m( \theta)-c_{F}/2.\]
_Furthermore, if \(\xi>0\), \(\beta>0\) and \(|\{g(t)\}|=0\) a.e. \(t\in(0,T)\), then \(u(t)\) is discontinuous and attains only pure phases \(u(t)\in\{0,1\}\) a.e. in \(\varOmega\times(0,T)\)._
## 6. Discretization and numerical examples
We discuss discretization of (4.2) using piece-wise linear finite elements and present several numerical examples.
### Discretization
Let \(\{\mathcal{T}_{h}\}_{h}\) be a shape-regular triangulation of \(\varOmega\cup\varOmega_{I}\), with \(h\) denoting the maximum diameter of the elements \(\mathcal{T}\in\{\mathcal{T}_{h}\}_{h}\). We denote by \(\mathcal{J}_{h}^{k}\), \(k\in\{i,c,ic\}\), the set of nodes corresponding to the triangulation of \(\overline{\varOmega}\) \((k=i)\), \(\varOmega\cup\varOmega_{I}\setminus\overline{\varOmega}\) \((k=c)\), and \(\overline{\varOmega\cup\varOmega_{I}}\) \((k=ic)\). Associated with \(\{\mathcal{T}_{h}\}_{h}\) we define the piece-wise linear finite element spaces \(S_{\omega}^{h}=\{v_{h}\in C^{0}(\overline{\omega})\colon v_{h}|_{\mathcal{T}}\in P_{1}(\mathcal{T}),\ \forall\mathcal{T}\in\mathcal{T}_{h}\}\), with \(\omega\in\{\varOmega,\varOmega\cup\varOmega_{I}\}\); see [14] for more details. As detailed there, we employ a trapezoidal quadrature rule that provides a mass-lumping property for the local and nonlocal terms:
\[(B\phi,\psi_{j})=b_{h}(\phi,\psi_{j}):=(c_{\gamma}^{h}\phi,\psi_{j})_{h}-(\gamma\circledast\phi,\psi_{j})_{h}+(\mathcal{N}_{h}\phi,\psi_{j})_{\bar{h}},\quad\forall\phi,\psi_{j}\in S_{\varOmega\cup\varOmega_{I}}^{h},\]
where the last term vanishes for \(j\in\mathcal{J}_{h}^{i}\) and \(b_{h}(\phi,\psi_{j})=(\mathcal{N}_{h}\phi,\psi_{j})_{\bar{h}}\) for \(j\in\mathcal{J}_{h}^{c}\), where \(\mathcal{N}_{h}\phi=c_{\gamma}^{h}\phi-\gamma\circledast\phi\). Here, \((\cdot,\cdot)_{h,\bar{h}}\) denotes a mass-lumped \(L^{2}(\varOmega)\) and \(L^{2}(\varOmega_{I})\) inner product, respectively, and
\[(\gamma\circledast\phi)(x)=\int_{\varOmega\cup\varOmega_{I}}I_{h}^{y}\left[\gamma(x,y)\phi(y)\right]\mathrm{d}y,\quad c_{\gamma}^{h}(x)=\int_{\varOmega\cup\varOmega_{I}}I_{h}^{y}\left[\gamma(x,y)\right]\mathrm{d}y,\]
where \(I_{h}^{y}\left[\cdot\right]\) denotes a nodal interpolant with respect to \(y\).
Then, given \(\theta_{h}^{0}\in S_{\varOmega}^{h}\), \(u_{h}^{0}\in S_{\varOmega\cup\varOmega_{I}}^{h}\), we seek \(\theta_{h}^{k},w_{h}^{k},\lambda_{h}^{k}\in S_{\varOmega}^{h}\) and \(u_{h}^{k}\in S_{\varOmega\cup\varOmega_{I}}^{h}\), such that for \(k=1,\ldots,K\) it holds
\[(\theta_{h}^{k},\phi)_{h}+\tau D(\nabla\theta_{h}^{k},\nabla\phi) -L(u_{h}^{k},\phi)_{h}=(\theta_{h}^{k-1}+Lu_{h}^{k-1},\phi)_{h}, \forall\phi\in S_{\varOmega}^{h},\] \[\mu(u_{h}^{k},\psi)_{h}+\tau(w_{h}^{k},\psi)_{h}+\beta\tau(\nabla w _{h}^{k},\nabla\psi)=\mu(u_{h}^{k-1},\psi)_{h}, \forall\psi\in S_{\varOmega}^{h},\] \[b_{h}(u_{h}^{k},\zeta)-(c_{F}u_{h}^{k}+w_{h}^{k}+c_{F}m(\theta_{h }^{k-1})-\lambda_{h}^{k}-c_{F}/2,\zeta)_{h}=0, \forall\zeta\in S_{\varOmega\cup\varOmega_{I}}^{h},\] \[\lambda_{h,+}^{k}(x_{j})(u_{h}^{k}(x_{j})-1)=0,\quad\lambda_{h,-} ^{k}(x_{j})u_{h}^{k}(x_{j})=0, \forall x_{j}\in\mathcal{J}_{h}^{i},\] \[\lambda_{h}^{k}=\lambda_{h,+}^{k}-\lambda_{h,-}^{k},\quad\lambda_ {h,\pm}^{k}\geq 0,\quad 0\leq u_{h}^{k}\leq 1.\]
For the solution algorithm we adapt a primal-dual active set strategy [34] that has already been successfully applied in local and nonlocal settings [9, 13, 14]. Additionally, employing an explicit discretization of the convolution term, \(b_{h}(u_{h}^{k},\zeta)\approx(c_{\gamma}^{h}u_{h}^{k},\zeta)_{h}-(\gamma\circledast u_{h}^{k-1},\zeta)_{h}\), can further speed up computations, and in the Allen-Cahn case \((\beta=0)\) it completely avoids the solution of a nonlinear phase-field system, since \(u_{h}^{k}\) can be evaluated directly from a discrete version of the projection formula (4.8). That is, for a given \(\theta_{h}^{k-1}\) and \(u_{h}^{k-1}\), \(k=1,\ldots,K\), and \(x_{j}\in\mathcal{J}_{h}^{i}\) it holds
\[u_{h}^{k}(x_{j}):=P_{[0,1]}\left(\frac{g_{h}^{k}(x_{j})}{\mu/\tau+c_{\gamma}^{h}(x_{j})-c_{F}}\right),\quad g_{h}^{k}:=\frac{\mu}{\tau}u_{h}^{k-1}+\gamma\circledast u_{h}^{k-1}+c_{F}m(\theta_{h}^{k-1})-\frac{c_{F}}{2}.\]
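For illustration, a minimal NumPy sketch of this explicit nodal update for the Allen-Cahn case (\(\beta=0\)) is given below. The matrix \(W\) realizing the mass-lumped discrete convolution, the vector of nodal values \(c_{\gamma}^{h}(x_{j})\), and the coupling function \(m\) are assumed to be precomputed; they are placeholders for this sketch and not code from the paper.

```python
import numpy as np

def explicit_allen_cahn_step(u_prev, theta_prev, W, c_gamma, mu, tau, c_F, m):
    # u_prev, theta_prev: nodal values at the previous time step (interior nodes).
    # (W @ u_prev)[j] approximates the mass-lumped convolution of gamma with u_prev at x_j.
    g = (mu / tau) * u_prev + W @ u_prev + c_F * m(theta_prev) - 0.5 * c_F
    # Projection formula: divide by mu/tau + c_gamma^h - c_F and clip to the obstacle [0, 1].
    return np.clip(g / (mu / tau + c_gamma - c_F), 0.0, 1.0)
```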
### Numerical examples
We set \(\varOmega=(0,1)^{n}\), \(n\in\{1,2\}\), which is discretized with a uniform mesh of a mesh size \(h\). We consider a polynomial type nonlocal kernel:
\[\gamma(x-y)=\begin{cases}\varepsilon^{2}C(\delta)\max\left(0,1-\frac{|x-y|^{2} }{\delta^{2}}\right),\quad\text{if}\;|x-y|\leq\delta,\quad\delta>0,\\ 0,\quad\text{otherwise},\end{cases}\]
where \(C(\delta)\) is chosen such that \(\int_{\mathbb{R}^{n}}|\zeta|^{2}\gamma(|\zeta|)\,\mathrm{d}\zeta=2n\varepsilon^{2}\), thus \(C(\delta)=15/(2\delta^{3})\) for \(n=1\) and \(C(\delta)=24/(\pi\delta^{4})\) for \(n=2\). The kernel is additionally scaled by \(\varepsilon>0\) as in (2.4) to keep resemblance to the corresponding local model for vanishing nonlocal interactions. Then, for all \(x\in\varOmega\) the constant \(c_{\gamma}\) in (2.3) can be computed exactly using \(c_{\gamma}=\frac{2\pi^{n/2}}{\Gamma(n/2)}\int_{0}^{\delta}|\xi|^{n-1}\hat{\gamma}(|\xi|)\,\mathrm{d}\xi\), which leads to \(c_{\gamma}=10\varepsilon^{2}\delta^{-2}\) (\(n=1\)) and \(c_{\gamma}=12\varepsilon^{2}\delta^{-2}\) (\(n=2\)). The coupling term \(m(\theta)\) is chosen as in (2.11) together with \(c_{F}=1/6\) in order to match the condition \(F(0,\theta)-F(1,\theta)=m(\theta)/6\); see [35].
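As a quick sanity check of these constants for \(n=1\), the normalization of the second moment and the value of \(c_{\gamma}\) can be verified numerically; the following small sketch (ours) uses the parameters of Example 1 below.

```python
import numpy as np

eps, delta = 0.02, 0.1540                       # epsilon and nonlocal radius as in Example 1
C = 15.0 / (2.0 * delta**3)                     # C(delta) for n = 1
z = np.linspace(-delta, delta, 200001)
gamma = eps**2 * C * np.maximum(0.0, 1.0 - z**2 / delta**2)
dz = z[1] - z[0]

print(np.sum(z**2 * gamma) * dz, 2 * eps**2)         # second moment vs. 2*n*eps^2 (n = 1)
print(np.sum(gamma) * dz, 10 * eps**2 / delta**2)    # c_gamma vs. 10*eps^2/delta^2
```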
**Example 1**.: We set the model parameters to \(\mu=0.0012\), \(\theta_{e}=1\), \(\alpha=0.9\), \(\rho=20\), \(L=0.5\), \(D=1\), \(\varepsilon=0.02\). The final time is set to \(T=0.05\), \(\tau=0.0003\) and \(h=0.0024\). We chose \(\beta=0.02\) and set a nonlocal interaction radius to \(\delta=0.1540\), which corresponds to \(\xi=c_{\gamma}-c_{F}=0.002\). The initial conditions are set to:
\[u^{0}(x)=\begin{cases}1,&x\leq 0.2\\ 0,&\text{otherwise},\end{cases}\qquad\qquad\theta^{0}(x)=0.\]
The corresponding snapshots of the solutions at different time instances are presented on Figure 2. We can observe that the nonlocal model delivers sharp interfaces (up to two grid points per interface) compared to the corresponding local model, where the interface is diffuse. Furthermore, the sharpness of the interface in the
nonlocal solution is preserved throughout the whole time evolution. We also investigate the effect of the parameter \(\beta>0\) on the solution. Using the same settings as above, on Figure 3 we plot the snapshots of the nonlocal solutions for different values of \(\beta\). As expected, we observe that the thickness of the interface decreases with increasing \(\beta\).
**Remark 6.1**.: _We notice that the speed of the interface differs with respect to \(\beta\). This suggests that, in order to correctly match the speed of the interface with, e.g., nonlocal or local Allen-Cahn type models, the remaining model parameters must be scaled appropriately with respect to \(\beta\). The question of the interplay between the model parameters and the nonlocal parameters (such as \(\delta\), \(\beta\), or \(c_{\gamma}\)) required to match certain physical properties of the solution, such as, e.g., the speed of the interface or the mass, is more complex and will be detailed in our forthcoming work [12]._
**Example 2**.: We investigate numerically the effect of the nonlocal parameter \(\delta\) on the solution. We set \(h=0.0012\), \(\beta=0.08\) and keep the remaining settings, apart from \(\delta\), as in Example 1. On Figure 3 (right) we plot the snapshots of the phase-field solution for different values of \(\delta\). As expected, we observe that for decreasing \(\delta\), or equivalently increasing \(\xi\), the interface becomes more diffuse and the nonlocal solution converges to the corresponding local solution.
Figure 3. Snapshots of the temperature (left) and phase-field (middle) solutions of the nonlocal model at \(t=0.017\) for different values of \(\beta\) (Example 1). Right: Snapshots of the nonlocal and the local solutions at \(t=0.0037\), \(\beta=0.08\) (zoomed-in) (Example 2).
Figure 2. Snapshots of the local (red) and nonlocal (blue) solutions of the temperature (top) and phase-field variable (bottom) at \(t_{k}=[0,0.0013,0.0163]\) (from left to right).
**Example 3**.: Lastly, we consider a two-dimensional example. This example is inspired by an example in [35], which corresponds to solidification of pure materials, where we have a solid region around the boundaries of the domain and a pool of liquid in the interior. As the time evolves, the solidification that occurs from the walls propagates inward until all liquid solidifies. We set the model parameters to \(\mu=0.0003\), \(\theta_{e}=1\), \(\alpha=0.9\), \(\rho=10\), \(L=0.5\), \(D=1\), \(\varepsilon=0.01\). For discretization we use \(T=0.03\), \(\tau=0.0001\), \(h\approx 0.0048\), and we also set \(\beta=0.002\) and \(\delta=0.0826\), which corresponds to \(\xi=0.0093\). The initial conditions are
\[u^{0}(x,y)=\begin{cases}1,&0.1<x<0.9,\quad 0.1<y<0.9\\ 0,&\text{otherwise},\end{cases}\qquad\qquad\theta^{0}(x,y)=0. \tag{6.1}\]
Phase-field and temperature solutions are presented on Figures 4 and 5, where we also include solutions of the local model with the regular potential (2.7). We observe that the nonlocal model with \(\beta>0\) delivers the solution with the sharpest interface, whereas the most diffuse interface occurs in the local model with the regular potential. The corresponding width of the interface region is depicted on Figure 6. We observe that the interface width in the nonlocal Cahn-Hilliard case spans approximately \(1-2\) grid cells, while in the nonlocal and local Allen-Cahn solutions this varies between \(16-18\) and \(18-20\) grid cells, respectively. We also notice that the interface thickness affects the speed of the interface, which is different for all solutions; this is explained by the fact that the same set of parameters is used for all models (cf. Remark 6.1).
## 7. Conclusion
In this work we have analyzed nonlocal phase-field models of Cahn-Hilliard and Allen-Cahn type coupled to a temperature evolution equation. The novel model, based on the non-mass-conserving nonlocal Cahn-Hilliard equation, can provide sharp interfaces in the solution that evolve in time, in contrast to the nonlocal Allen-Cahn setting, which allows for sharp interfaces only in a steady-state solution. We have provided a detailed analysis of the well-posedness
of the models and proved the conditions under which sharp interfaces are attained. We have presented several numerical results that illustrate the theoretical findings.
Figure 4. Snapshots of the phase-field solutions at \(t=[0.002,0.008,0.015]\) (from top to bottom). From left to right: nonlocal model with an obstacle potential for \(\beta=0.002\) (first column) and \(\beta=0\) (second column), local (\(\beta=0\)) model with an obstacle potential (third column) and regular potential (fourth column).
The results of this work show that the developed nonlocal phase-field model is a promising approach to better describe non-isothermal phase-transitions with sharp or very thin interfaces. Our future work [12] will focus on the numerical investigation of the new model in the context of solidification of pure materials, its performance and comparison to some existing models.
## 8. Acknowledgments
The author would like to express sincere thanks to Stephen DeWitt, Max Gunzburger and Balasubramaniam Radhakrishnan for numerous valuable discussions about modeling aspects in the context of solidification.
| ```
Phase-field models are popular in computational physics for describing the complex dynamic behavior of materials with multiple phases and are widely used in a variety of applications. We introduce nonlocal, non-isothermal phase-field models of Cahn-Hilliard and Allen-Cahn type with a nonsmooth double-well obstacle potential. Mathematically, in weak form, the models translate into systems of variational inequalities coupled to a temperature evolution equation. Under certain conditions, a suitable choice of the nonlocal operator makes it possible to derive models with sharp interfaces that evolve in time, which is a desirable property in many applications. This stands in contrast to local diffuse-interface models, which cannot resolve sharp interfaces. We analyze the well-posedness of the models and an appropriate numerical
2302.08943 | Long Range Object-Level Monocular Depth Estimation for UAVs | Computer vision-based object detection is a key modality for advanced
Detect-And-Avoid systems that allow for autonomous flight missions of UAVs.
While standard object detection frameworks do not predict the actual depth of
an object, this information is crucial to avoid collisions. In this paper, we
propose several novel extensions to state-of-the-art methods for monocular
object detection from images at long range. Firstly, we propose Sigmoid and
ReLU-like encodings when modeling depth estimation as a regression task.
Secondly, we frame the depth estimation as a classification problem and
introduce a Soft-Argmax function in the calculation of the training loss. The
extensions are exemplarily applied to the YOLOX object detection framework. We
evaluate the performance using the Amazon Airborne Object Tracking dataset. In
addition, we introduce the Fitness score as a new metric that jointly assesses
both object detection and depth estimation performance. Our results show that
the proposed methods outperform state-of-the-art approaches w.r.t. existing, as
well as the proposed metrics. | David Silva, Nicolas Jourdan, Nils Gählert | 2023-02-17T15:26:04 | http://arxiv.org/abs/2302.08943v1 | # Long Range Object-Level Monocular Depth Estimation for UAVs
###### Abstract
Computer vision-based object detection is a key modality for advanced Detect-And-Avoid systems that allow for autonomous flight missions of UAVs. While standard object detection frameworks do not predict the actual depth of an object, this information is crucial to avoid collisions. In this paper, we propose several novel extensions to state-of-the-art methods for monocular object detection from images at long range. Firstly, we propose Sigmoid and ReLU-like encodings when modeling depth estimation as a regression task. Secondly, we frame the depth estimation as a classification problem and introduce a Soft-Argmax function in the calculation of the training loss. The extensions are exemplarily applied to the YOLOX object detection framework. We evaluate the performance using the Amazon Airborne Object Tracking dataset. In addition, we introduce the Fitness score as a new metric that jointly assesses both object detection and depth estimation performance. Our results show that the proposed methods outperform state-of-the-art approaches w.r.t. existing, as well as the proposed metrics.
Keywords: Monocular Depth Estimation · Unmanned Aerial Vehicles · Detect-And-Avoid · Object-level · Amazon Airborne Object Tracking Dataset · Long Range Detection
## 1 Introduction
Within recent years, significant technological progress in Unmanned Aerial Vehicles (UAVs) was achieved. To enable autonomous flight missions and mitigate the risk of in-flight collisions, advanced Detect-And-Avoid (DAA) systems need to be deployed to the aircraft. By design, these systems shall maintain a _well-clear volume_ around other airborne traffic [1]. As a result, DAA systems are required to reliably detect potential intruders and other dangerous objects at a long range to allow for sufficient time to plan and execute avoidance maneuvers. Specifically in small and lightweight UAVs, the usage of Lidar and Radar systems is challenging due to their power consumption, weight, and the required long range detection capabilities. However, computer vision approaches based on monocular images have proved their effectiveness in related use cases such as autonomous driving [7, 22, 5]. In addition, cameras can be equipped with lenses
that employ different focal lengths depending on the application. Camera systems are therefore a powerful base modality for the perception stack of small and lightweight UAVs.
Depending on the actual use case and application, engineers might choose from several computer vision-related tasks such as image classification, object detection, or semantic segmentation. Those tasks are nowadays usually solved by Convolutional Neural Networks (CNNs) specifically designed for the selected use case. For vision-based DAA systems, single-stage object detection frameworks like SSD [28] or YOLO [31, 32, 16] are often employed to detect the objects of interest. By default, these frameworks detect objects in the two-dimensional image space by means of axis-aligned, rectangular bounding boxes. To reliably detect and avoid potentially hazardous objects, additional information about their three-dimensional position and trajectory is crucial. This capability, however, is missing in most vanilla object detection frameworks and specific extensions are needed to provide it.
In this paper, we specifically address the problem of object-level depth estimation based on monocular images for long range detections in the use case of UAVs. Several studies focusing on monocular 3D object detection have been conducted in autonomous driving [22, 5, 34, 14, 15]. For UAV-related use cases, however, object-level monocular depth estimation at long range is not yet widely researched. The two fields of application, UAVs and autonomous driving, differ in two major aspects: 1. The range of the objects. UAVs are required to keep a well clear volume of at least 2000 ft or approximately 600 m to prevent potential mid-air collisions [1]. In autonomous driving, on the other hand, objects are mostly limited to less than 200 m [17, 14, 34, 5, 22, 35]. 2. Knowledge of the full 9 degrees of freedom 3D bounding box is not required to maintain the well clear volume. The distance is sufficient. In addition to simplifying the task, this aspect greatly eases the annotation process. As objects do not require a fully annotated 3D bounding box, one can save both time and money.
Thus, we summarize our contributions as follows: 1. We propose two encodings, Sigmoid and ReLU-like, to improve long range depth estimation modeled as a regression task. 2. We frame the task of depth estimation as a classification problem and introduce Soft-Argmax based loss functions to improve the performance of monocular depth estimation. 3. We introduce a novel _Fitness Score_ metric to assess the quality of depth estimation on object-level combined with the object detection metrics. 4. We demonstrate the extension of the state-of-the-art YOLOX object detection framework and benchmark the proposed approaches against existing methods applied to long range detections.
## 2 Related Work
The problem of depth estimation from monocular RGB images has been the subject of intense academic research in the past decade. Due to the ambiguity between an object's size and the object's distance to the camera, it is mathematically an ill-posed problem [24, 14]. Thus, machine learning approaches, specifically ones that rely on CNNs, gained traction in this field. Two research streams can be identified in monocular depth estimation: 1. _Dense_ or _Pixel-level_ depth estimation, which estimates a dense map of distances for every pixel of a given image, and 2. _Object-level_ depth estimation, which estimates distances only for detected objects of interest. While dense depth estimation is more prominent in computer vision literature, 3D object detection is gaining popularity in relevant application domains such as environment perception for autonomous driving [22, 5, 34, 14, 15]. Nevertheless, there's limited related work in the domain of 2D object-level depth estimation at long ranges [18].
In the case of depth prediction and corresponding loss functions, we distinguish between continuous regression approaches in contrast to approaches that rely on discretization of the depth estimation.
#### Continuous Regression
The reverse Huber loss (berHu) is employed in [23] to model the value distribution of depth predictions as a continuous regression problem for dense depth estimation. [18] uses the L2 loss for training a continuous, object-level depth regressor. The log-distance is used within the loss calculation to scale larger distance values.
#### Depth Discretization
[6] formulates depth estimation as a classification problem by discretizing the depth values into intervals. The cross-entropy (CE) loss is used to train a classifier that assigns a depth bin to every pixel for dense depth estimation. [26, 25] use a soft-weighted sum inference strategy to compute final depth predictions based on the softmax predictions scores of the depth bins. [13] proposes an ordinal regression loss function to learn a meaningful inter-depth-class ordinal relationship. The depth intervals for discretization are growing in width for increasing distance to the camera as the uncertainty about the true depth increases as well. [8] extends on the idea of using ordinal regression for depth estimation by using a softmax-like function to encode the target vector for depth classification as a probability distribution.
## 3 Methodology
In this section, we give information on YOLOX as the selected base framework for 2D object detection. We outline the mathematical foundation of the different depth estimation strategies and embed our proposed methods into these paradigms. In addition, we introduce the Fitness score metric in detail.
### YOLOX - Base Framework for 2D Object Detection
To tackle the problem of depth estimation at the object level, we start with a pure 2D object detection framework that outputs a confidence score, class label, and 2D bounding box for each object using axis-aligned rectangles. Given our use case, in which the trade-off between inference speed and detection performance
is of high importance, we choose YOLOX Tiny [16] as the base object detection framework. YOLOX was released in 2021 and is one of the latest advances within the YOLO family [31, 32].
To allow for object-level depth estimation, we create a separate new head dedicated to depth estimation. The architecture of the depth head is based on the existing classification head with the necessary adjustments to the number of output channels in the last layer. This separation between the depth head and the other heads allows for a modular combination of various outputs.
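For illustration, a generic sketch of such a decoupled depth head is given below; it mirrors a classification-style head and only adapts the number of output channels (e.g., 1 for regression or \(K\) for bin classification). The concrete layer choices are our own assumption and do not reproduce the actual YOLOX implementation.

```python
import torch.nn as nn

def make_depth_head(in_channels: int, hidden_channels: int, out_channels: int) -> nn.Module:
    # Two 3x3 conv blocks followed by a 1x1 prediction layer; out_channels = 1 for a
    # regression-style depth head, or K for a bin-classification head.
    return nn.Sequential(
        nn.Conv2d(in_channels, hidden_channels, kernel_size=3, padding=1),
        nn.SiLU(),
        nn.Conv2d(hidden_channels, hidden_channels, kernel_size=3, padding=1),
        nn.SiLU(),
        nn.Conv2d(hidden_channels, out_channels, kernel_size=1),
    )
```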
While we have selected YOLOX as the foundation for this work, the ideas presented in the following sections can be carried over to other modern 2D object detectors.
### Depth Regression
The most natural way to estimate depth \(d\) is to frame it as a continuous regression problem. In this case, the model is trained to predict a single and continuous value by minimizing the distance between the model predictions \(\hat{y}\) and the ground truth target \(y\). In its simplest form, depth can be regressed directly, _i.e._\(y=d\) and \(\hat{y}=\hat{d}\). This simple model, however, allows for negative distances, which are not meaningful in the context of monocular depth estimation. Thus, we can use a differentiable transfer function, \(g\left(x\right)\), which supports us in encoding and decoding the network output given a set of constraints. To avoid negative predictions, we propose the encoding
\[g\left(x\right)=\frac{x-b}{a}. \tag{1}\]
Its corresponding decoding can be calculated as:
\[g^{-1}\left(x\right)=\max\left(d_{\min},a\cdot x+b\right), \tag{2}\]
with \(a\) and \(b\) being hyperparameters that allow for better control over the domain and range of the model outputs. As \(g^{-1}\) follows the ReLU structure, we refer to this approach as the ReLU-like encoding. We argue that designing a differentiable transfer function with this constraint not only eases training but also enhances robustness against out-of-distribution objects [4] or adversarial attacks.
Besides direct regression, there are encodings based on non-linear transfer functions _e.g._, the inverse \(g\left(x\right)=\frac{1}{x}\)[15] and the logarithm \(g\left(x\right)=\log x\)[10, 9, 24, 3, 18]. All previously mentioned encodings lack an upper bound. Thus, they allow for any positive number to be predicted as the depth of the object. As a result, the calculated loss is also unbound and may cause instabilities during training. In some use cases, however, it is possible to assign an upper bound or a maximum distance to the objects of interest. For those settings, we propose a bounded transfer function that maps the domain \(\left(d_{\min},d_{\max}\right)\) to the range \(\left(-\infty,+\infty\right)\):
\[g\left(x\right)=\text{logit}\left(\frac{x-d_{\min}}{d_{\max}-d_{\min}}\right). \tag{3}\]
The corresponding decoding operation, based on the sigmoid function \(\sigma\), is then calculated as:
\[g^{-1}\left(x\right)=\left(d_{\max}-d_{\min}\right)\sigma\left(x\right)+d_{\min}, \tag{4}\]
where \(d_{\max}\) and \(d_{\min}\) are the maximum and the minimum depth, respectively. As \(g^{-1}\) uses the sigmoid function, we refer to this approach as the Sigmoid encoding.
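For concreteness, both proposed encodings and their decodings can be written as the following minimal NumPy sketch; the function names are ours and not part of any released code.

```python
import numpy as np

def relu_like_encode(d, a, b):                    # Eq. (1)
    return (d - b) / a

def relu_like_decode(x, a, b, d_min=0.0):         # Eq. (2): affine map clamped at d_min
    return np.maximum(d_min, a * x + b)

def sigmoid_encode(d, d_min, d_max):              # Eq. (3): logit of the normalized depth
    p = (d - d_min) / (d_max - d_min)
    return np.log(p / (1.0 - p))

def sigmoid_decode(x, d_min, d_max):              # Eq. (4): inverse of Eq. (3)
    return d_min + (d_max - d_min) / (1.0 + np.exp(-x))
```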
### Depth Bin Classification
Depending on the application and the use case, a coarse depth estimation might be sufficient. In such cases, depth estimation can be framed as a multiclass classification task with \(K\) discretized depth intervals \(\{d_{0},d_{1},...,d_{K-1}\}\)[6]. Each depth interval links to an individual class in the classification paradigm. Relaxing the need for fine-grained and continuous depth estimation also eases the process of ground truth generation and data annotation. This, in return, can be beneficial from a business perspective.
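A minimal sketch of such a uniform discretization is given below; the default values (\(K=7\) bins over \((0,700)\,\mathrm{m}\)) anticipate the experimental setup described later and are otherwise a free choice.

```python
import numpy as np

def depth_to_bin(d, d_min=0.0, d_max=700.0, n_bins=7):
    # Map a depth value (or array of depths) to its uniform bin index in [0, n_bins - 1].
    edges = np.linspace(d_min, d_max, n_bins + 1)
    return np.clip(np.digitize(d, edges) - 1, 0, n_bins - 1)
```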
During training, in a classification setting, the softmax function is typically used in CNNs to compute the pseudo-probability distribution and is paired with CE loss. At test time, the selected depth bin is obtained by using the argmax over the pseudo-probabilities. Reformulating depth estimation as a simple classification task is straightforward. In our experiments, we will use this approach as the baseline for classification-based approaches.
Employing CE, however, models the classes - and thus the depth bins - as being independent of each other. In particular, the default CE loss doesn't penalize predictions more if they are further away from the target bin compared to predictions that are closer to the target.
Depth bins, however, are ordered. We naturally would consider predictions that are far away from the actual depth as _more wrong_ compared to closer ones. Thus, we propose to design a loss that considers the distance of the predicted depth bin to the target depth bin.
Designing a loss based on the distance between the prediction and ground truth implies the knowledge of the argmax of the predicted depth classes. Once the argmax and ground truth is known, an arbitrary distance loss function _e.g._, Smooth L1 or MSE, can easily be computed. The implementation of this approach, however, renders a challenge as the default argmax function is not differentiable. Thus, we replace it with the Soft-Argmax [12, 20]
\[\text{Soft-Argmax}\left(\hat{y},\beta\right)=\sum_{i=0}^{K-1}i\cdot\text{ softmax}\left(\beta\hat{y}\right)_{i} \tag{5}\]
where \(\beta>0\) is a parameter that scales the model predictions \(\hat{y}\). The larger the \(\beta\), the more it approximates a one-hot encoded argmax. In our experiments, we found \(\beta=3\) to be a good choice. Soft-Argmax provides an approximated bin index that is used to compute a distance loss between it and the target bin.
During inference, we can naturally obtain the predicted depth bin, \(\hat{d}_{i}\), by applying the argmax function to the model output, \(\hat{y}\), and set the depth value, \(\hat{d}\), to its center.
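Putting the pieces together, a minimal PyTorch sketch of the Soft-Argmax-based training loss and the corresponding inference step could look as follows; the helper names and the default depth range are ours, while \(\beta=3\) follows the choice above.

```python
import torch
import torch.nn.functional as F

def soft_argmax(logits, beta=3.0):
    # Eq. (5): expected bin index under a sharpened softmax; differentiable w.r.t. logits.
    idx = torch.arange(logits.shape[-1], dtype=logits.dtype, device=logits.device)
    return (F.softmax(beta * logits, dim=-1) * idx).sum(dim=-1)

def soft_argmax_depth_loss(logits, target_bin, beta=3.0):
    # Smooth L1 distance between the approximate predicted bin index and the target bin.
    return F.smooth_l1_loss(soft_argmax(logits, beta), target_bin.float())

def predict_depth(logits, d_min=0.0, d_max=700.0):
    # Inference: hard argmax over the bins, depth set to the center of the selected bin.
    k = logits.shape[-1]
    step = (d_max - d_min) / k
    centers = d_min + (torch.arange(k, dtype=logits.dtype, device=logits.device) + 0.5) * step
    return centers[logits.argmax(dim=-1)]
```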
### Fitness Score
As described previously, depth estimation can be formulated as a regression or a classification task. A natural choice for a metric capable of assessing the quality of depth estimation is the mean absolute localization error [14].
If depth estimation is framed as a classification task, the predicted depth is by default not a continuous number and depends on the _real_ depth assigned to this bin _e.g._, its center. As a result, predictions might cause a large absolute localization error despite being assigned to the proper depth bin. This effect makes it difficult to compare both regression and classification models.
To solve this challenge, we suggest also discretizing the network prediction of a regression model into \(K\) bins and applying a metric suitable for classification tasks. By doing so, we are able to compare both regression and classification models. Finally, this approach also simplifies proper model selection by providing a single metric across the different depth estimation paradigms.
As the network predicts confidence, class label, bounding box parameters as well as the depth, we effectively have set up a multitask network. Thus, we need to be able to assess both depth estimation as well as standard 2D object detection performance. Assessing the performance of a multitask network is challenging as all included tasks might perform and be weighted differently. We, however, favor a single number summarizing the model performance in all tasks.
In typical object detection benchmarks, mean Average Precision (mAP) is commonly used [11]. mAP measures both the classification performance as well as the bounding box localization quality by utilizing the Intersection-over-Union (IoU) of the predicted and the ground truth bounding box as an auxiliary metric. In addition, F1 jointly assesses both precision and recall in a single number. Note that because several properties of the object - _e.g._ its size and full 3D bounding box - are unknown and thus not needed for our use case, metrics commonly used in 3D object detection - _e.g._ AP\({}_{\text{3D}}\)[17] and DS [14] - are not suitable.
As we propose to calculate the performance of depth estimation as a classification task, we suggest employing a scheme similar to F1 for depth estimation as well.
We then calculate the joint measure as the harmonic mean between the mean F\({}_{1}\)-Score of the object detector, mF\({}_{1}^{\text{OD}}\), and the mean F\({}_{1}\)-Score of the depth estimation, mF\({}_{1}^{\text{DE}}\), given the detected objects. Since taking the harmonic mean of two quantities is exactly how the F\({}_{1}\)-Score combines precision and recall, we refer to this joint metric as F\({}_{1}^{\text{Comb}}\).
The mean F\({}_{1}\)-Scores for both object detection and depth estimation depend on the confidence threshold \(t_{c}\) as well as on the minimum required IoU threshold \(t_{\text{IoU}}\). The confidence threshold takes values \(t_{c}\in\{0.00,0.01,...,0.99,1.00\}\), and all predictions with confidence below this threshold are discarded. For \(t_{\text{IoU}}\), we use the values according to [27], such that \(t_{\text{IoU}}\in\{0.50,0.55,...,0.90,0.95\}\). Predictions with an IoU \(\geq t_{\text{IoU}}\) are treated as TP, while predictions with an IoU \(<t_{\text{IoU}}\) are counted as FP.
Finally, we have
\[\mathrm{mF}_{1}^{\mathrm{OD}} =\mathrm{mF}_{1}^{\mathrm{OD}}(t_{c},t_{\mathrm{IoU}}) \tag{6}\] \[\mathrm{mF}_{1}^{\mathrm{DE}} =\mathrm{mF}_{1}^{\mathrm{DE}}(t_{c},t_{\mathrm{IoU}})\] (7) \[\mathrm{F}_{1}^{\mathrm{Comb}} =\mathrm{F}_{1}^{\mathrm{Comb}}(t_{c},t_{\mathrm{IoU}})\] (8) \[=\frac{2\cdot\mathrm{mF}_{1}^{\mathrm{OD}}\cdot\mathrm{mF}_{1}^{ \mathrm{DE}}}{\mathrm{mF}_{1}^{\mathrm{OD}}+\mathrm{mF}_{1}^{\mathrm{DE}}}. \tag{9}\]
The domain of the combined score \(\mathrm{F}_{1}^{\mathrm{Comb}}\) is \([0,1]\) with higher values representing a better combined performance. As the combined score still depends on both \(t_{c}\) and \(t_{\mathrm{IoU}}\), we distill it into a single value, which we refer to as _Fitness_ score. We define it as the maximum of the combined score over all confidence and IoU thresholds,
\[\mathrm{Fitness}=\max_{t_{c},t_{\mathrm{IoU}}}\mathrm{F}_{1}^{ \mathrm{Comb}}. \tag{10}\]
By doing so, we are able to assess the model performance _as is_ when productively deployed.
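To make the metric concrete, the following simplified NumPy sketch computes the Fitness score from per-prediction confidences, best-match IoUs, and depth bins. It collapses all object classes into one, assumes the prediction-to-ground-truth matching has already been performed, and reads mF\({}_{1}^{\text{DE}}\) as a macro-averaged F\({}_{1}\) over depth bins; these are simplifying assumptions on our side.

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    scores = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        scores.append(2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) > 0 else 0.0)
    return float(np.mean(scores))

def fitness_score(conf, iou, pred_bin, gt_bin, n_gt, n_bins=7):
    # conf, iou: per-prediction confidence and best-match IoU; pred_bin / gt_bin: depth
    # bin of the prediction and of its matched ground-truth object; n_gt: number of GTs.
    best = 0.0
    for t_c in np.arange(0.0, 1.001, 0.01):
        for t_iou in np.arange(0.50, 0.951, 0.05):
            keep = conf >= t_c
            tp = keep & (iou >= t_iou)
            n_tp, n_fp = tp.sum(), np.sum(keep & (iou < t_iou))
            if n_tp == 0:
                continue
            prec, rec = n_tp / (n_tp + n_fp), n_tp / n_gt
            f1_od = 2 * prec * rec / (prec + rec)
            f1_de = macro_f1(gt_bin[tp], pred_bin[tp], n_bins)
            if f1_od + f1_de > 0:
                best = max(best, 2 * f1_od * f1_de / (f1_od + f1_de))  # Eqs. (9)-(10)
    return best
```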
## 4 Experiments
To demonstrate the effectiveness of the proposed methods for long range object-level monocular depth estimation we design several experiments and compare them using the Amazon Airborne Object Tracking (AOT) dataset.
We split the experiments into three major groups: regression, bin classification, and ordinal regression. Each group formulates the depth estimation task differently, implying different network architectures and loss functions. Finally, we evaluate the performance of each experiment. We use 2D mAP as well as the mean absolute localization error (MALE) as individual metrics for object detection and depth estimation, respectively. In addition, we assess the quality of each tested approach using the joint Fitness Score.
### Dataset
The Amazon AOT dataset was introduced in 2021 as part of the Airborne Object Tracking Challenge [2]. It contains a collection of in-flight images with other aircraft flying by as planned encounters. Planned objects are annotated with a 2D bounding box (in pixels), the object label, and the distance (in meters) from the camera to a specific object. As the metadata only contains the Euclidean distance from the camera without splitting it into \(x\), \(y\), and \(z\), we use the terms _distance_ and _depth_ interchangeably within this study. Additionally, the sequences may contain encounters with unplanned objects. Those objects are annotated with bounding box parameters and their specific class label; the distance, however, is unknown.
While most other datasets that feature object-level depth annotations mostly focus on autonomous driving and only contain objects up to 200 m [17, 14, 34, 5, 22, 35], the AOT dataset features objects up to several hundreds of meters. In our experiments, we use the _partial_ dataset as provided by the authors. This subset contains objects up to 700 m away from the camera. We have observed that some objects, specifically with a range below 10 m, are mislabeled with respect to the object's distance. Thus, we removed the range annotation for these objects but kept the bounding box and classification labels so that they can still be used for training the object detector. The images in the flight sequences are collected at a rate of 10 Hz. As such, many of the images tend to be quite similar. With the goal of shortening training time without significant degradation of performance, we use only every 5th image of this dataset. This subset is equivalent to 2 Hz or 20 % of the initial dataset. We further split the flight sequences in the dataset into dedicated sets for training (60 %), validation (20 %), and testing (20 %). By splitting the entire dataset on a sequence level, we ensure that no cross-correlations between training, validation, and test set occur. Our selected split also provides similar distance as well as class distributions as depicted in Figures 1 and 2. Sample images of the dataset including typical objects are shown in Figure 3.
### Experimental Setup
Our models are trained on \(2464\times 2464\) px images. We upsample and slightly stretch the original images with a resolution of \(2448\times 2048\) px to the target resolution as the network requires squared input images with dimensions multiple of 32. We use Stochastic Gradient Descent (SGD) as our optimizer and combine it with a cosine learning rate scheduler with warm-up [16]. In total, we train for 15 epochs with a batch size of 14 using 2 Nvidia RTX3090 GPUs.
As described previously, our network architecture features a de-facto multi-task setting. Thus, we calculate our overall multitask loss function \(\mathcal{L}\) as:
\[\mathcal{L}=\mathcal{L}_{\mathrm{OD}}+w_{\mathrm{DE}}\mathcal{L}_{\mathrm{DE}}. \tag{11}\]
Figure 1: Distance distribution for all objects within _train_, _val_, and _test_ training set.
Accordingly, the detector loss function, \(\mathcal{L}_{\mathrm{OD}}\), is defined as:
\[\mathcal{L}_{\mathrm{OD}}=w_{\mathrm{obj}}\mathcal{L}_{\mathrm{obj}}+w_{\mathrm{ loc}}\mathcal{L}_{\mathrm{loc}}+w_{\mathrm{class}}\mathcal{L}_{\mathrm{class}}, \tag{12}\]
with \(\mathcal{L}_{\mathrm{obj}}\) being the objectness, \(\mathcal{L}_{\mathrm{loc}}\) the localization, and \(\mathcal{L}_{\mathrm{class}}\) the classification loss. \(w_{\mathrm{obj}}\), \(w_{\mathrm{loc}}\), and \(w_{\mathrm{class}}\) refer to the corresponding balancing weights. We leave the detector loss function from YOLOX [16] unchanged, i.e., \(w_{\mathrm{obj}}=1\), \(w_{\mathrm{loc}}=5\), and \(w_{\mathrm{class}}=1\). We conduct experiments with different depth loss functions, \(\mathcal{L}_{\mathrm{DE}}\), while the depth weight \(w_{\mathrm{DE}}\) is treated as a hyperparameter.
#### Regression
Our first set of experiments frames depth estimation as a regression task. As such, we set the number of output channels for the last convolutional layer of our depth estimation head to 1. As described in section 3.2.2, there are different methods of encoding depth information. Moreover, each encoding can be combined with different distance-based loss functions.
Figure 3: Sample images of the Amazon AOT dataset [2].
Figure 2: Class distribution for all objects within _train_, _val_, and _test_ training set, log-scaled to improve visibility.
As mentioned in section 4.1, the distance of the objects to the camera is at most \(700\,\mathrm{m}\). Therefore, we parameterize the Sigmoid encoding such that it is defined in the domain \((d_{\min},d_{\max})\rightarrow(0,700)\).
Similarly, for the ReLU-like encoding, we obtain the best results when choosing the hyperparameters \(a\) and \(b\) such that the encoding approximates the Sigmoid encoding: \(a=100\) and \(b=\frac{700}{2}\).
For the depth loss function, \(\mathcal{L}_{\text{DE}}\), we use Smooth L1 (SL1) [19] and mean squared error (MSE) loss for each encoding:
\[\text{SL1}(y,\hat{y}) =\begin{cases}\frac{1}{2N}\sum_{i=1}^{N}\left(y_{i}-\hat{y_{i}} \right)^{2},&\text{if}\;\left|y_{i}-\hat{y_{i}}\right|\leq 1\\ \frac{1}{N}\sum_{i=1}^{N}\left|y_{i}-\hat{y_{i}}\right|-0.5,&\text{otherwise} \end{cases} \tag{13}\] \[\text{MSE}(y,\hat{y}) =\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}-\hat{y_{i}}\right)^{2}. \tag{14}\]
In addition, we follow [23] and combine direct depth regression with the reverse Huber (berHu) loss:
\[\text{berHu}(y,\hat{y},c)=\begin{cases}\frac{1}{N}\sum_{i=1}^{N}\left|y_{i}- \hat{y_{i}}\right|,&\text{if}\;\left|y_{i}-\hat{y_{i}}\right|\leq c\\ \frac{1}{N}\sum_{i=1}^{N}\frac{\left(y_{i}-\hat{y_{i}}\right)^{2}+c^{2}}{2c},&\text{otherwise}.\end{cases} \tag{15}\]
\(\hat{y}\) refers to the model prediction and \(y\) is the target. \(c\) is a pseudo-constant that is originally calculated as a function, \(c(y,\hat{y})=\frac{1}{5}\underset{i}{\max}\left(\left|y_{i}-\hat{y_{i}}\right|\right)\)[23]. \(N\) refers to the overall number of predictions.
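A minimal NumPy sketch of this loss (our own helper, using the threshold choice for \(c\) from [23]) is:

```python
import numpy as np

def berhu_loss(y, y_hat):
    # Reverse Huber (berHu) loss, Eq. (15), with c = 0.2 * max_i |y_i - y_hat_i| as in [23].
    err = np.abs(np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float))
    c = 0.2 * err.max()
    if c == 0.0:
        return 0.0
    return float(np.where(err <= c, err, (err**2 + c**2) / (2.0 * c)).mean())
```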
#### Bin Classification
The second set of experiments models depth estimation as a classification task. The depth interval \((d_{\min},d_{\max})\rightarrow(0,700)\) is uniformly discretized into \(K=7\) bins with a uniform bin width of \(100\,\mathrm{m}\). Choosing the proper bin size is rather subjective and highly dependent on the use case. For our use case, we find that \(100\,\mathrm{m}\) is suitable since the environment is less cluttered and objects are found at larger distances when compared to other similar applications, _e.g._, autonomous driving, where smaller bin sizes might be desired. Similarly, and in agreement with [29], we choose uniform discretization over a log-space discretization strategy because the latter increases bin sizes at larger distances where most objects are found. Moreover, for our use case, early detections are beneficial as we want to avoid entering other objects' airspace.
To allow the model to predict \(K\) depth bins, we change the number of output channels in the last convolutional layer of our depth estimation head to \(K\).
Our baseline experiment in this group uses softmax (_cf._ Section 3.3) as the final activation and CE as the loss function. In total, we design two experiments that employ the proposed Soft-Argmax (SA) with Smooth L1 and MSE loss.
#### Ordinal Regression
In our last set of experiments, we follow the guidelines of [13], framing depth estimation as an ordinal regression problem. First, we
uniformly discretize the depth into 7 bins, as previously described. The number of output channels in the last convolution layer is set to \(2\cdot(K-1)\), where the number of bins, \(K\), equals 7. Finally, we reimplement the proposed loss function, applying it to objects instead of pixels.
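As an illustration, a possible PyTorch sketch of such an object-level ordinal regression loss is given below. It follows the spirit of [13] by interpreting the \(2\cdot(K-1)\) output channels as \(K-1\) binary classifiers; the exact channel layout and reduction are our assumptions and not the reference implementation.

```python
import torch
import torch.nn.functional as F

def ordinal_regression_loss(logits, target_bin):
    # logits: (N, 2*(K-1)) per-object outputs; target_bin: (N,) ground-truth bin indices.
    n, two_km1 = logits.shape
    km1 = two_km1 // 2
    # Read each pair of channels as a binary classifier for P(depth > d_k).
    p_greater = F.softmax(logits.view(n, km1, 2), dim=-1)[..., 1]
    k = torch.arange(km1, device=logits.device).unsqueeze(0)
    on = (k < target_bin.unsqueeze(1)).float()      # thresholds that the target exceeds
    eps = 1e-7
    return -(on * torch.log(p_greater + eps)
             + (1.0 - on) * torch.log(1.0 - p_greater + eps)).mean()
```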
#### Metrics
We evaluate the performance of the experiments based on different metrics. The Fitness score proposed in Section 3.4 is our primary metric. To compute it, depth is once again uniformly discretized into 7 bins with a width of 100 m. During training, we search for hyperparameters that maximize the Fitness score on the validation dataset. Once optimal hyperparameters are found, we evaluate on the _test_ set and report the Fitness score as our primary metric.
Additionally, we report secondary metrics including 2D mAP with 10 IoU thresholds \(t_{\text{IoU}}\in\{0.50,0.55,...,0.90,0.95\}\) and the mean absolute localization error. We furthermore evaluate the performance w.r.t. the number of parameters, GFLOPs, inference, and post-processing times, allowing us to compare the methods in terms of computational constraints.
### Results
Table 1 summarizes the experiment results on the _test_ set. Within the depth regression methods, the proposed Sigmoid encoding outperforms all other encodings. The ReLU-like encoding performs worse compared to the Sigmoid encoding but is still competitive with the best encoding from the state-of-the-art, the logarithm. The combination Sigmoid/SL1 performs best within this group.
Within the classification methods, we observe that the proposed loss functions based on Soft-Argmax perform better than the baseline with CE loss. We obtain the best results w.r.t. Fitness by combining Soft-Argmax with Smooth L1 loss. Ordinal regression also outperforms the classification with CE loss. Our results are consistent with the results of [13]. Overall though, it is outperformed by our proposed loss functions based on Soft-Argmax.
Table 1 also shows that in most experiments, extending YOLOX with an additional depth head slightly degrades the base 2D performance in terms of 2D mAP. There are notable exceptions; the combination SA/SL1 is one of them.
While the combination of Soft-Argmax and Smooth L1 loss performs best w.r.t. the Fitness score and 2D mAP, it does not yield the lowest absolute localization error. This can easily be understood, as we select the center of the predicted bin as the actual distance of the object, _cf._ Section 3.3. In particular, Table 1 shows that the regression models using the Sigmoid and the ReLU-like encoding perform better in this respect.
We attempt to further improve absolute localization in the classification setting by using bin interpolation, as a postprocessing step, instead of simply choosing the center of the bin. Following [29], we define the interpolation function \(f\) as:
\[f\left(p\left(d_{i-1}\right),p\left(d_{i}\right),p\left(d_{i+1}\right)\right) =f\left(\frac{p\left(d_{i}\right)-p\left(d_{i-1}\right)}{p\left(d_{i}\right)-p \left(d_{i+1}\right)}\right). \tag{16}\]
\(p\left(d_{i}\right)\) refers to the probability of the predicted bin; \(p\left(d_{i-1}\right)\) and \(p\left(d_{i+1}\right)\) are the probabilities of the neighboring bins. The predicted depth \(\hat{d}\) is then refined using:
\[\hat{d}=\begin{cases}\hat{d}-\frac{s_{i}}{2}\cdot\left(1-f\left(x\right)\right), &\text{if }p\left(d_{i-1}\right)>p\left(d_{i+1}\right)\\ \hat{d}+\frac{s_{i}}{2}\cdot\left(1-f\left(\frac{1}{x}\right)\right),&\text{ otherwise}\end{cases} \tag{17}\]
where \(s_{i}\) is the bin size _i.e._, the width, of the predicted bin \(i\).
Any function \(f\) must shift the predicted depth towards the previous bin if \(p\left(d_{i-1}\right)>p\left(d_{i+1}\right)\), shift towards the next bin if \(p\left(d_{i-1}\right)<p\left(d_{i+1}\right)\), and leave it unchanged if \(p\left(d_{i-1}\right)=p\left(d_{i+1}\right)\). We then select the following strictly monotone functions \(f:[0,1]\rightarrow[0,1]\) depicted in Figure 4:
\[\begin{aligned}
\text{Equiangular [33]:}\quad & f(x)=x, & (18)\\
\text{Parabola [33]:}\quad & f(x)=\frac{2x}{x+1}, & (19)\\
\text{SinFit [21]:}\quad & f(x)=\sin\left(\frac{\pi}{2}(x-1)\right)+1, & (20)\\
\text{MaxFit [30]:}\quad & f(x)=\max\left(\frac{1}{2}\left(x^{4}+x\right),1-\cos\left(\frac{\pi x}{2}\right)\right), & (21)\\
\text{SinAtanFit [29]:}\quad & f(x)=\sin\left(\frac{\pi x}{2}\arctan\left(\frac{\pi x}{2}\right)\right). & (22)
\end{aligned}\]
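As an illustration, a minimal NumPy sketch of the refinement step with the SinFit function is given below (helper names are ours); the degenerate case of equal neighboring probabilities is left unshifted.

```python
import numpy as np

def sinfit(x):
    # Eq. (20)
    return np.sin(0.5 * np.pi * (x - 1.0)) + 1.0

def refine_depth(d_hat, p_prev, p_cur, p_next, bin_width, f=sinfit):
    # Eqs. (16)-(17): shift the bin-center prediction d_hat towards the more probable
    # neighbor; p_cur is the probability of the argmax bin, so both ratios lie in [0, 1].
    if p_prev == p_next:
        return d_hat
    if p_prev > p_next:
        x = (p_cur - p_prev) / (p_cur - p_next)
        return d_hat - 0.5 * bin_width * (1.0 - f(x))
    inv_x = (p_cur - p_next) / (p_cur - p_prev)      # equals 1/x in Eq. (17)
    return d_hat + 0.5 * bin_width * (1.0 - f(inv_x))
```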
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**Method** & **Loss function** & **Fitness** & **2D mAP** & **MALE** \\ \hline
2D Only & & – & 27.7 \% & – \\ \hline \multirow{3}{*}{Direct} & SL1 & 42.2 \% & 26.7 \% & 52.7 m \\ & MSE & 43.0 \% & 26.9 \% & 50.3 m \\ & berHu [23] & 43.4 \% & 24.7 \% & 49.6 m \\ \multirow{3}{*}{Inverse} & SL1 & 39.9 \% & 26.4 \% & 92.8 m \\ & MSE & 35.0 \% & 25.5 \% & 94.7 m \\ \multirow{3}{*}{Log} & SL1 & 48.2 \% & 27.0 \% & 35.5 m \\ & MSE & 46.7 \% & 26.6 \% & 38.0 m \\ \multirow{3}{*}{Sigmoid} & SL1 (Ours) & 51.6 \% & 25.7 \% & **28.9 m** \\ & MSE (Ours) & 50.4 \% & 25.3 \% & 32.4 m \\ \multirow{3}{*}{ReLU-like} & SL1 (Ours) & 48.3 \% & 27.9 \% & 33.3 m \\ & MSE (Ours) & 47.7 \% & 28.0 \% & 35.5 m \\ \hline \multirow{3}{*}{Classification} & CE & 50.9 \% & 24.9 \% & 37.9 m \\ & SA/SL1 (Ours) & **53.6 \%** & **28.5 \%** & 37.9 m \\ \cline{1-1} & SA/MSE (Ours) & 52.8 \% & 26.9 \% & 38.5 m \\ \hline \multicolumn{2}{l}{Ordinal Regression} & 52.7 \% & 27.0 \% & 37.9 m \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experiment results obtained on the _test_ set. Object detection and depth estimation are jointly evaluated on the proposed Fitness score. 2D mAP and mean absolute localization error (MALE) individually evaluate object detection and depth estimation, respectively.
As shown in Table 2, all interpolation functions improve over the baseline. SinFit and MaxFit obtain the same results and perform best out of our selection. Despite the improvements, the interpolated classification model is still not able to surpass the Sigmoid-encoded regression model. As the interpolation is part of the postprocessing and changes neither the network architecture nor the predicted depth bin, both the Fitness score and 2D mAP remain unchanged.
#### 4.2.2 Runtime Comparison
Besides the quality of the predictions, another important aspect is how the different models compare at runtime. In Table 3, representative models are benchmarked and compared.
Compared to pure 2D object detection, the inference time increases by approx. 4 ms for all proposed methods. This is mainly caused by the increased GFLOPs coming from the additional prediction head.
Amongst the proposed methods, GFLOPs, number of parameters, and inference speed do not vary meaningfully. Looking at postprocessing though, we observe that the classification and ordinal regression models are slower than the regression models. This result is expected as there are more steps involved for both in order to transform the model output into the depth value. Moreover,
\begin{table}
\begin{tabular}{l c} \hline \hline
**Function** & **MALE** \\ \hline None (baseline) & 37.9 m \\ Equiangular & 31.1 m \\ Parabola & 32.5 m \\ SinFit & **30.1 m** \\ MaxFit & **30.1 m** \\ SinAtanFit & 34.6 m \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of the different bin interpolation functions evaluated on mean absolute localization error (MALE).
Figure 4: Different bin interpolation functions.
classification and ordinal regression models grow in complexity with an increasing number of depth bins. Lastly, we conclude that the cost of bin interpolation is negligible.
## 5 Conclusion
In this work, we addressed the problem of long-range object-level monocular depth estimation and, as an example, extended the YOLOX object detection framework. We modeled the depth estimation task as a regression, classification, and ordinal regression problem. To jointly assess object detection and depth estimation performance, we introduced the Fitness score as a novel metric. We proposed two novel encodings for regression, Sigmoid and ReLU-like. The former outperforms other state-of-the-art encodings w.r.t. Fitness score and absolute localization error, while the latter is competitive with the best encoding from the state of the art. Moreover, for classification, we proposed a novel loss function based on the Soft-Argmax operation that minimizes the distance between the predicted and target depth bins. In conjunction with the Smooth L1 loss, it outperforms all other models, including ordinal regression, w.r.t. Fitness score. Furthermore, its 2D mAP performance even surpasses the baseline 2D model. However, it does not reach the accuracy of the proposed Sigmoid encoding in terms of absolute localization error, even when bin interpolation functions are used. In general, regression-based models have a slight advantage in postprocessing, which leads to an overall faster runtime. Based on the conducted experiments, we find that our proposed methods provide effective extensions to standard 2D object detection frameworks, enabling object-level depth estimation at long range.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Method** & **Parameters** & **GFLOPs** & **Inference** & **Postprocessing** \\ \hline
2D Only & 5 034 321 & 56.1 & 21.5 ms & 1.5 ms \\ \hline Sigmoid \& Smooth L1 & 5 200 690 & 66.5 & 25.9 ms & **1.7 ms** \\ SA/SL1 & 5 201 369 & 66.5 & **25.4 ms** & 2.9 ms \\ SA/SL1 \& SinFit & 5 201 369 & 66.5 & 25.5 ms & 3.0 ms \\ Ordinal Regression & 5 201 951 & 66.6 & 25.7 ms & 3.0 ms \\ \hline \hline \end{tabular}
\end{table}
Table 3: Inference benchmark results on representative models for regression, classification, classification with bin interpolation, and ordinal regression. Results measured with \(2464\times 2464\,\mathrm{px}\) image resolution, batch size 1 and FP16 using PyTorch on an Intel Core i9-10920X and Nvidia RTX3090. | Computer-vision-based object detection is an important modality for advanced Detect-And-Avoid systems that enable autonomous UAV flight missions. Standard object detection frameworks do not predict the actual depth of the detected objects, yet this information is crucial for collision avoidance. In this paper, we propose novel extensions of monocular object detection methods to predict depth at long range from images. First, we propose Sigmoid and ReLU-like encodings for modeling depth estimation as a regression task. Furthermore, we frame depth estimation as a classification problem and introduce the Soft-Argmax function into the computation of the training loss. These extensions are applied to the YOLOX object detection framework, and performance is evaluated on the Amazon Airborne Object Tracking dataset. In addition, we introduce a new metric that jointly evaluates object detection and depth estimation performance. |
2303.07959 | Macroscopic Quantum Superpositions via Dynamics in a Wide Double-Well
Potential | We present an experimental proposal for the rapid preparation of the center
of mass of a levitated particle in a macroscopic quantum state, that is a state
delocalized over a length scale much larger than its zero-point motion and that
has no classical analog. This state is prepared by letting the particle evolve
in a static double-well potential after a sudden switchoff of the harmonic
trap, following initial center-of-mass cooling to a sufficiently pure quantum
state. We provide a thorough analysis of the noise and decoherence that is
relevant to current experiments with levitated nano- and microparticles. In
this context, we highlight the possibility of using two particles, one evolving
in each potential well, to mitigate the impact of collective sources of noise
and decoherence. The generality and scalability of our proposal make it
suitable for implementation with a wide range of systems, including single
atoms, ions, and Bose-Einstein condensates. Our results have the potential to
enable the generation of macroscopic quantum states at unprecedented scales of
length and mass, thereby paving the way for experimental exploration of the
gravitational field generated by a source mass in a delocalized quantum state. | Marc Roda-Llordes, Andreu Riera-Campeny, Davide Candoli, Piotr T. Grochowski, Oriol Romero-Isart | 2023-03-14T15:00:55 | http://arxiv.org/abs/2303.07959v3 | # Macroscopic Quantum Superpositions in a Wide Double-Well Potential
###### Abstract
We present an experimental proposal for the rapid preparation of the center of mass of a levitated particle in a macroscopic quantum state, that is a state delocalized over a length scale much larger than its zero-point motion and that has no classical analog. This state is prepared by letting the particle evolve in a static double-well potential after a sudden switchoff of the harmonic trap, following initial center-of-mass cooling to a sufficiently pure quantum state. We provide a thorough analysis of the noise and decoherence that is relevant to current experiments with levitated nano- and microparticles. In this context, we highlight the possibility of using two particles, one evolving in each potential well, to mitigate the impact of collective sources of noise and decoherence. The generality and scalability of our proposal make it suitable for implementation with a wide range of systems, including single atoms, ions, and Bose-Einstein condensates. Our results have the potential to enable the generation of macroscopic quantum states at unprecedented scales of length and mass, thereby paving the way for experimental exploration of the gravitational field generated by a source mass in a delocalized quantum state.
Over the last century, many efforts have been directed towards preparing a delocalized state of increasingly massive objects over a distance comparable to their size [1, 2, 3, 4, 5]. The preparation of such macroscopic quantum superposition states of massive particles holds significant interest across the field of quantum science [6]. Their high susceptibility to external stimuli equips them with excellent sensor capabilities. They also provide a testing ground for collapse models [7, 8, 9, 10], which predict the breakdown of the quantum superposition principle at large scales. Moreover, they could enable the direct observation of the gravitational field generated by a sufficiently large source mass in a quantum superposition state, which would shed light upon the interplay between quantum mechanics and gravity [11, 12]. The preparation of such macroscopic quantum states requires: (i) Fast experimental runs to avoid collisions with gas molecules [13, 14, 9, 15], (ii) Minimal use of laser light to avoid decoherence due to photon scattering [16, 17, 18] and internal particle heating, which critically determines decoherence due to thermal emission [9, 19], (iii) Access to nonlinearities to generate negative Wigner function states, and (iv) The ability to repeat nearly identical experimental runs quickly and with the same particle to avoid low-frequency noise, drifts, and other systematic errors.
In this paper, we propose a scheme for preparing macroscopic quantum superposition states that satisfies requirements (i)-(iv). This scheme is based on levitation and control of micro-objects in high vacuum [20]. The scheme exploits the quantum nonlinear dynamics generated in a static nonharmonic potential (e.g., double-well potential, see Fig. 1), which is assumed to be wide, time-independent, and implemented with static nonoptical fields. The dynamics is triggered after switching off a tighter harmonic potential (e.g., optical trap) where center-of-mass cooling is performed [21, 22, 23, 24, 25, 26, 27]. The harmonic potential is centered near the top of the double-well potential but sufficiently far (compared to the wavefunction size) so that the induced dynamics occurs in one of the wells only [Fig. 1(a)]. Because of the wide-ranging size of the double-well potential particle quantum tunneling is absent. This nonharmonic potential is convenient as it induces both coherent inflation [17, 28], namely an exponentially fast generation of motional squeezing via the inverted harmonic term, and non-Gaussian physics when the particle wave packet arrives at the turning point where the quartic term of the potential dominates [Fig. 1(b)]. Note that the particle will evolve very rapidly and perform a loop, returning to the initial position where the harmonic potential can be switched on again to repeat the experimental run [Fig. 1(c)]. The double-well potential is also convenient as our protocol can be extended to two particles, one evolving in each of the wells in a mirror-symmetric way [Fig. 1(d)]. This is use
Figure 1: Schematic representation of the protocol. (a) The particle is initially trapped and cooled in a harmonic potential. (b) The trap is switched off and the particle explores the nonharmonic potential, experiencing both coherent expansion and non-Gaussian physics. (c) The particle returns to the original position, allowing for repetition of the protocol. (d) Extending the protocol to two particles allows for collective noise mitigation and detection of weak interactions.
ful as the two-particle dynamics can be used to mitigate collective sources of noise and decoherence by performing differential measurements [29] as well as to detect weak long-range interactions between them [30; 31; 14].
More specifically, we consider the center-of-mass motion of a particle of mass \(m\) and focus on the motion along a given axis, described by the particle's center-of-mass position and momentum operators \(\hat{X}\) and \(\hat{P}\) fulfilling \([\hat{X},\hat{P}]=\mathrm{i}\hbar\). The possible cross-coupling to other center-of-mass degrees of freedom is assumed to add noise and decoherence, whose effect is analyzed later. For times \(t<0\) we assume that the particle is cooled to a thermal state of a harmonic potential of frequency \(\Omega\) centered at position \(X_{\mathrm{s}}\), namely \(V_{0}(X)=m\Omega^{2}(X-X_{\mathrm{s}})^{2}/2\). A thermal state is characterized by its phonon mean number occupation \(\bar{n}\). Today, it is experimentally feasible to cool a levitated dielectric nanoparticle to the ground state (\(\bar{n}<1\)) [21; 22; 23; 24; 25; 26; 27] with position and momentum zero-point fluctuations given by \(X_{\Omega}=\left[\hbar/(2m\Omega)\right]^{1/2}\) and \(P_{\Omega}=\hbar/(2X_{\Omega})\), respectively. At \(t=0\), the harmonic potential is switched off (e.g., optical trap is turned off) such that a weaker nonharmonic potential in the background (e.g., generated by electrostatic fields [32; 33; 34]) is dominant. The center-of-mass quantum dynamics generated by this nonharmonic background potential is the focus of this paper. We consider the double-well potential \(V(X)=m\omega^{2}[-X^{2}+X^{4}/(2D^{2})]/2\), parameterized by the frequency \(\omega\) and length \(D\). We remark that in absence of noise and decoherence, the induced dynamics and the corresponding generated quantum states depend on the following six parameters: \(m\), \(\Omega\), \(X_{\mathrm{s}}\), \(\bar{n}\), \(\omega\), and \(D\).
In this paper, we will focus on the quantum dynamics generated in _wide_ double-well potentials, i.e., potentials for which \(D\gg X_{\Omega}\) and \(\omega\ll\Omega\). The reason for this is that nano- and microparticles cooled to the ground state have subatomic zero-point motion fluctuations \(X_{\Omega}\ll 10^{-10}\) m and double-well potentials generated via static fields have at least micrometer-sized scales \(D\gg 10^{-7}\) m (for example, see [35] for an experimental implementation). In such wide double-well potentials we will focus on the quantum dynamics generated when the particle evolves in one of the two wells (say, the right one for \(X>0\), see Fig. 1), which requires \(X_{\mathrm{s}}\gg X_{\Omega}\). In this parameter regime, large phase-space expansions associated with the generation of large motional squeezing are expected. The numerical simulation of the Wigner function in dynamical scenarios (including noise and decoherence) where large phase-space squeezing and non-Gaussian states are generated is challenging. Hereafter, we use the numerical methods we have recently developed and presented in [36] to simulate these challenging multiscale open quantum dynamics.
Let us now show the coherent dynamics generated in these wide double-well potentials. To that end, we numerically solve the Wigner function dynamics for the sets of parameters given in Table 1 assuming \(\bar{n}=0\) for now. Let us first focus on the quantum dynamics of the first and second phase-space moments. Fig. 2(a) shows the phase-space trajectory given by the dimensionless position and momentum expected values given by \(\langle\hat{X}\rangle(t)/D\) and \(\langle\hat{P}\rangle(t)/(m\omega D)\), respectively. In these units, the trajectory is approximately the same for any
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Size & \(X_{\mathrm{s}}/D\) & \(\omega/\Omega\) & \(D/X_{\Omega}\) & \(\sqrt{2}\,\eta\) & \(\mathcal{S}_{0}\)[dB] \\ \hline XXL & \(10^{-1}\) & \(10^{-4}\) & \(10^{8}\) & \(10^{5}\) & 97 \\ XL & \(10^{-1}\) & \(10^{-3}\) & \(10^{6}\) & \(10^{4}\) & 77 \\ L & \(10^{-1}\) & \(10^{-2}\) & \(10^{4}\) & \(10^{3}\) & 57 \\ \hline \end{tabular}
\end{table}
Table 1: Double-well parameters considered in this paper. Configurations that generate quantum states with squeezing (in variance) of the order of \(\{1,20,40,60,80,100\}\) decibels are defined as _sizes_ {XS, S, M, L, XL, XXL}, respectively.
Figure 2: Coherent quantum dynamics. (a) First order moments (solid line) and classical trajectory (dashed line). (b) Normalized Gaussian motional squeezing and position variance for different double-well potentials. Solid, dashed, and dotted lines correspond to XXL, XL, and L in Table 1. (c) Wigner function for the L set of parameters at different moments of time, ordered clockwise and corresponding to the indicated points in panel (a). Results obtained using the split-step method [37].
of the set of parameters listed in Table 1. The trajectory followed by the expected values is nearly indistinguishable from the phase-space classical trajectory \(X_{c}(t)\) and \(P_{c}(t)=m\dot{X_{c}}(t)\) followed by a particle with the initial condition given by \(X_{c}(0)=X_{\rm s}\) and \(P_{c}(0)=0\), which has an analytical solution [38, 39]. The trajectory is a closed orbit that facilitates the repetition of experimental runs. The orbiting period \(2t_{\rm m}\) can be well approximated by \(\omega t_{\rm m}=\log(4\sqrt{2}D/X_{\rm s})\)[40]. In order to prevent decoherence due to the scattering of gas molecules we require that the probability to scatter a single gas molecule during an experimental run, that is during the orbiting time \(2t_{\rm m}\), is negligible. This condition is given by \(t_{\rm m}\ll t_{\rm gas}/2\), where \(t_{\rm gas}\) is the timescale associated with a single gas scattering event and for a spherical particle of radius \(R\) is given by \(t_{\rm gas}=3\sqrt{m_{\rm gas}k_{\rm B}T_{\rm gas}}/(16\pi\sqrt{2\pi}P_{\rm gas }R^{2})\)[9], where \(m_{\rm gas}\), \(T_{\rm gas}\), and \(P_{\rm gas}\) are the single molecule mass, temperature, and pressure of the gas, respectively. This important requirement can be satisfied in ultra-high vacuum, where \(t_{\rm gas}\) for nanoparticles is of the order of milliseconds.
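For orientation, \(t_{\rm gas}\) can be evaluated directly from the expression above. The numbers used below (a sphere of radius 100 nm in room-temperature N\(_2\) at a pressure of \(10^{-10}\) mbar) are illustrative assumptions, not values fixed by the text.

```python
import numpy as np

k_B = 1.380649e-23            # Boltzmann constant [J/K]
m_N2 = 28.0 * 1.66054e-27     # mass of a single N2 molecule [kg] (assumed gas species)

def t_gas(radius, T_gas=300.0, P_gas=1e-8):
    """Single gas-scattering timescale from the expression in the text.

    radius in m, T_gas in K, P_gas in Pa (1e-8 Pa corresponds to 1e-10 mbar).
    """
    return (3.0 * np.sqrt(m_N2 * k_B * T_gas)
            / (16.0 * np.pi * np.sqrt(2.0 * np.pi) * P_gas * radius**2))

# assumed example: R = 100 nm sphere in ultra-high vacuum
print(f"t_gas = {t_gas(100e-9) * 1e3:.1f} ms")   # of the order of milliseconds
```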
Figure 2(b) shows the time dependence of the position standard deviation \(\Delta X(t)=[\langle\hat{X}^{2}\rangle(t)-\langle\hat{X}\rangle(t)^{2}]^{1/2}\) and the Gaussian motional squeezing \(\mathcal{S}(t)\)[41]. The quantum dynamics generates a large spatial quantum delocalization and motional squeezing. As one can observe in Fig. 2(b), maximum spatial delocalization is achieved at \(t=t_{D}\), when \(\langle\hat{X}\rangle(t_{D})=D\) and \(\Delta X(t_{D})/X_{\Omega}=\eta\) with
\[\eta=\frac{1}{\sqrt{2}}\frac{\Omega}{\omega}\frac{D}{X_{\rm s}}. \tag{1}\]
The corresponding motional squeezing is given by \(\mathcal{S}(t_{D})=\mathcal{S}_{0}\) where \(\mathcal{S}_{0}=-10\log_{10}(\eta^{-2})\). The parameter \(\eta\) is thus key to quantifying the amount of spatial quantum delocalization and motional squeezing generated during the coherent dynamics in the double-well potential. Table 1 shows how spatial delocalization orders of magnitude larger than the zero-point motion with associated motional squeezing of several tens of decibels can be rapidly generated during the evolution in the double-well potential. This fast generation of motional squeezing is due to the coherent inflation generated by the inverted harmonic term present in the potential [17, 28]. At the turning points, \(t=t_{\rm m}\) and \(t=2t_{\rm m}\), the quantum state recompresses such that the position and momentum fluctuations are of the order of the zero-point motion. The recompression is more effective the wider the size of the double-well potential, namely the larger the value of \(\eta\). These expansion and compression dynamics resembles the loop protocol [14] and facilitates the repetition of an experimental run. In contrast to [14], the generated motional squeezing enhances the effect of the nonlinearities in the double-well potential, thereby preparing quantum non-Gaussian states with Wigner negativities, as we explicitly show in the following.
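The entries of Table 1 follow directly from Eq. (1), the definition of \(\mathcal{S}_{0}\), and the half-period estimate \(\omega t_{\rm m}=\log(4\sqrt{2}D/X_{\rm s})\); the short check below reproduces them (the script is ours and only restates these formulas).

```python
import numpy as np

# (X_s/D, omega/Omega) for the L, XL, and XXL configurations of Table 1
sizes = {"L": (1e-1, 1e-2), "XL": (1e-1, 1e-3), "XXL": (1e-1, 1e-4)}

for name, (Xs_over_D, omega_over_Omega) in sizes.items():
    eta = (1.0 / np.sqrt(2.0)) / omega_over_Omega / Xs_over_D   # Eq. (1)
    S0 = -10.0 * np.log10(eta**-2)                              # squeezing [dB]
    omega_tm = np.log(4.0 * np.sqrt(2.0) / Xs_over_D)           # omega * t_m
    print(f"{name:>3}: sqrt(2)*eta = {np.sqrt(2.0) * eta:.0e}, "
          f"S0 = {S0:.0f} dB, omega*t_m = {omega_tm:.2f}")
```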
In Fig. 2(c) we plot the Wigner function at the six relevant instances of time indicated in Fig. 2(a), using the L set of parameters (see Table 1). The Wigner function is represented with phase-space coordinates \(\tilde{X}\) and \(\tilde{P}\) centered at the classical trajectory, namely \(\tilde{X}=X-X_{c}(t)\) and \(\tilde{P}=P-P_{c}(t)\). One can observe that when the quantum state is largely squeezed, the potential does not only rotate the squeezed state in phase space, as the harmonic part of the potential does, but the nonharmonic terms bend the phase-space distribution in a boomeranglike shape, thereby generating Wigner negativities and interference fringes [42, 43]. These boomeranglike states can be well described by a cubic-phase state [44, 15, 45], that is, the state obtained by applying a cubic-phase operator to a squeezed state. At the turning point \(t=t_{\rm m}\), the cubic-phase state is such that an interference pattern in the probability position distribution is obtained, see Fig. 3(a). Using the semianalytical tools derived in [40], one can show that this probability distribution is given by a squared Airy function with a fringe separation between the two first maxima \(X_{\rm f}\) that scales as \(X_{\rm f}/X_{\Omega}\sim(\Omega/\omega)^{2/3}(X_{\Omega}/D)^{1/3}\), for a fixed \(X_{\rm s}/D\). For the set of parameters given in Table 1, \(X_{\rm f}/X_{\Omega}\approx 2.5\) for all cases. Finally, one can also show [40] that the state after one period, that is, at time \(t=2t_{\rm m}\), can be approximated as a quartic-phase state, namely a state obtained by applying a quartic-phase operator to a coherent state.
Before discussing how the generation of these macro
Figure 3: Quantum dynamics in the presence of decoherence. (a) Position probability distribution \(P(X,t_{\rm m})\) at \(t=t_{\rm m}\) for the L set of parameters (Table 1) and decoherence rates \(\Gamma/\Omega=\{1,4,10,40\}\times 10^{-8}\). Darker lines correspond to higher \(\Gamma/\Omega\). (b) Relative-position probability distribution \(\tilde{P}(\tilde{r},t_{\rm m})\) at \(t=t_{\rm m}\) for the same potential. Visibility of the first minimum of \(P(X,t_{\rm m})\) as a function of: (c) \(\Gamma/\Omega\) for the XXL (solid), XL (dashed), and L (dotted) sets of parameters; the horizontal line corresponds to the visibility of the first minimum of \(\tilde{P}(\tilde{r},t_{\rm m})\) for collective \(\Gamma/\Omega\). (d) The initial position imprecision \(\sigma_{\rm s}\) (solid line) and \(\bar{n}\) (dashed line). Lines in (c) and (d) correspond to the semianalytical treatment [40]; all other results are obtained from numerical simulation [36].
scopic quantum states can be certified, let us discuss the impact of noise and decoherence. We emphasize that during the dynamics no laser light is used, and hence decoherence due to recoil heating is absent [16; 18]. In addition, under the regime, \(2t_{\rm m}\ll t_{\rm gas}\), achieved in ultra-high vacuum and fast experimental runs, decoherence due to the scattering of gas molecules is prevented. Hence, the main sources of noise and decoherence will be: (i) Thermal emission from the particle [9; 19], (ii) Fluctuations in the double-well potential, both in amplitude and position [46; 47; 48], (iii) Fluctuating forces acting on the particle (e.g., due to fluctuating electric fields [49; 50]), (iv) Finite phonon number occupation (\(\bar{n}>0\)) and/or initial position imprecision (i.e., different \(X_{\rm s}\) in each experimental run), and (v) Timing imprecision, i.e., different values of the measurement time \(t_{\rm m}\) (or \(2t_{\rm m}\)) in each experimental run. Cases (i)-(iii) can be modeled by calculating the dynamics using the master equation
\[\partial_{t}\hat{\rho}(t)=-\frac{\rm i}{\hbar}[\hat{H},\hat{\rho}(t)]-\frac{ \Gamma}{2X_{\Omega}^{2}}[\hat{X},[\hat{X},\hat{\rho}(t)]], \tag{2}\]
where \(\Gamma=\Gamma_{T}+\Gamma_{P}+\Gamma_{F}\) is the decoherence with contributions from (i), (ii), and (iii). The expression of \(\Gamma_{T}\) can be found in the literature [9; 51; 52] and for the particular case of a silica nanoparticle with internal temperature \(T\) and trap frequency \(\Omega/(2\pi)=100\) kHz is given by \(\Gamma_{T}/\Omega\approx 10^{-10}\times[T/(300~{}{\rm K})]^{6}\)[53]. The expression of \(\Gamma_{P}\) can be obtained by considering the potential fluctuations \([1+\xi_{2}(t)]V(\hat{X}+\xi_{1}(t)X_{\Omega})\), where \(\xi_{j}(t)\) for \(j=1,2\) are dimensionless stochastic Gaussian variables of zero mean and assumed delta-correlated in the relevant timescales, namely \(\langle\xi_{j}(t)\xi_{j}(t^{\prime})\rangle=2\pi S_{j}\delta(t-t^{\prime})\). For weak fluctuations and when the particle is at \(X_{c}\), it experiences a fluctuating force given by \(\xi_{1}(t)X_{\Omega}V^{\prime\prime}(X_{c}(t))+\xi_{2}(t)V^{\prime}(X_{c}(t))\). Since during the closed trajectory one has that \(V^{\prime\prime}(X_{c}(t))<5m\omega^{2}\) and \(V^{\prime}(X_{c}(t))<\sqrt{2}Dm\omega^{2}\), the decoherence rate after ensemble average [40; 54] is upper bounded by
\[\frac{\Gamma_{P}}{\Omega}\leq\frac{2\pi\omega}{4}\left(\frac{\omega}{\Omega} \right)^{3}\left[25S_{1}+2\left(\frac{D}{X_{\Omega}}\right)^{2}S_{2}\right]. \tag{3}\]
Finally, \(\Gamma_{F}\) is obtained by considering a fluctuating force \(F(t)\) (e.g., fluctuating electrostatic force) of zero mean, assumed white in the relevant frequency range with correlations given by \(\langle F(t)F(t^{\prime})\rangle=2\pi S_{F}\delta(t-t^{\prime})\). The associated decoherence rate is given by \(\Gamma_{F}=2\pi X_{\Omega}^{2}S_{F}/\hbar^{2}\). In Fig. 3(c) we show how the visibility of the interference pattern generated at \(t=t_{\rm m}\) [see Fig. 3(a)] depends on \(\Gamma\) for double well sizes defined in Table 1. Using the semianalytical tools of [40], one can show that the effect of decoherence scales with \(\Gamma\eta^{2}/\omega\), and hence the wider the double-well potential, the more motional squeezing is generated, and the more stringent the requirements in \(\Gamma\). From Fig. 3(c) and Table 1 one can rapidly calculate the values of the \(S_{1}\), \(S_{2}\), and \(S_{F}\) that are needed in an experiment to generate a visible interference pattern. Regarding cases (iv) and (v), we define \(\sigma_{\rm s}\) and \(\sigma_{t}\) as the standard deviations of normally distributed random variables that model the error in the initial position of the particle and in the time of the measurement, respectively. In Fig. 3 we plot the visibility of the interference pattern as a function of \(\bar{n}\) and \(\sigma_{\rm s}\). Note that one can tolerate an initial position imprecision of up to \(\sigma_{\rm s}\sim 10X_{\Omega}\) or, equivalently, up to \(\bar{n}\sim 40\) in the initial state. Ground-state cooling is thus not a strict requirement. We have analyzed that timing errors up to \(\sigma_{t}\sim 10^{-2}\omega^{-1}\), which are experimentally feasible, provide a visible interference pattern.
In order to certify the generation of the macroscopic quantum states during the dynamics in the double-well potential, several strategies can be used. The most unambiguous strategy is to measure the position interference pattern generated at \(t=t_{\rm m}\) [see Fig. 3(a)] using an inverted optical potential with harmonic frequency \(\Omega_{\rm i}\). It is known [15; 28; 17] that this technique magnifies the interference pattern without compromising its visibility if the condition \(X_{\rm f}/X_{\Omega}\gg(\Gamma/\Omega_{\rm i})^{1/2}\) is met, where \(\Gamma\) is dominated by optical back-action noise (i.e., recoil heating) [16; 18]. Alternatively, one could consider performing quantum tomography of the state at \(2t_{\rm m}\) and show the preparation of a state with a negative Wigner function. Finally, measuring \(\Delta X(t)\), which is very sensitive to external noise and decoherence [40], and comparing the result to the predicted coherent value [shown in Fig. 2(b)] could be used as a method to certify that the overall dynamics was coherent.
As mentioned in the introduction and further analyzed in [40], one can consider the use of two particles, one evolving in each well in a mirror-symmetric way as illustrated in Fig. 1(d). The probability distribution of the relative distance [55] between the particles at \(t=t_{\rm m}\) also shows an interference pattern, see Fig. 3(b). While the visibility of this interference pattern is not one, even in the absence of noise and decoherence, it will be robust in front of sources of noise and decoherence that are collective, that is, that only affect the center-of-mass motion of the two particles [29]. Examples of this collective noise are fluctuations in the center of the double-well potential (e.g., due to vibrations) as well as the imprecision of the position of the two harmonic traps, whose separation is assumed fixed, with respect to the double-well potential. The latter could be implemented by using a standing-wave optical trap, such that the distance between two trapping points is fixed by the laser wavelength. The standing-wave optical configuration can also be used to make sure that at \(t=t_{\rm m}\) the particle is placed at a point of the standing wave where it experiences an inverted potential, which, as described above, is required to measure the interference pattern. Finally, the joint quantum dynamics of the two particles will be very sensitive to any weak interaction between them, and hence it could be
used to detect weak interacting forces similarly to what is discussed in [30, 14].
To conclude, we have shown how the dynamics of a massive particle in a wide nonharmonic potential can be used to rapidly prepare largely delocalized quantum states and a quantum interference pattern. In essence, our protocol implements an in-trap single particle matter-wave interference experiment (_a la_ double-slit Young's experiment) in a way that circumvents key challenges for large masses, such as repeatability and absence of decoherence due to scattering of gas molecules. While observing macroscopic quantum physics is challenging [17, 14, 15, 56, 57, 58, 59, 60, 9, 14], our results show what is required in terms of noise and decoherence and provide a feasible path to scale up the mass of the objects that could be prepared in a macroscopic quantum superposition state. Our proposal is compatible with current state-of-the-art technology, such as optically trapped dielectric nanoparticles hybridized with electrostatic potentials [32, 33, 34] or magnetically levitated superconducting spheres [61, 62]. Finally, we emphasize that our scheme is scale-free and versatile and could thus be initially tested with single atoms [63, 64, 65, 66, 67], ions [68, 69], Bose-Einstein condensates [70, 71, 72, 73, 74, 75], or even clamped nanomechanical oscillators [76].
MRL and ARC contributed equally to this work. We thank the Q-Xtreme synergy group for fruitful discussions. This research has been supported by the European Research Council (ERC) under the grant agreement No. [951234] (Q-Xtreme ERC-2020-SyG) and by the European Union's Horizon 2020 research and innovation programme under grant agreement No. [863132] (IQLev). PTG was partially supported by the Foundation for Polish Science (FNP).
2306.14645 | An almost fail-safe a-posteriori limited high-order CAT scheme | In this paper we blend the high order Compact Approximate Taylor (CAT)
numerical schemes with an a-posteriori Multi-dimensional Optimal Order
Detection (MOOD) paradigm to solve hyperbolic systems of conservation laws in
2D. The resulting scheme presents high accuracy on smooth solutions,
essentially non-oscillatory behavior on irregular ones, and, almost fail-safe
property concerning positivity issues. The numerical results on a set of sanity
test cases and demanding ones are presented assessing the appropriate behavior
of the CAT-MOOD scheme. | E. Macca, R. Loubere, C. Pares, G. Russo | 2023-06-26T12:30:56 | http://arxiv.org/abs/2306.14645v1 | # An almost fail-safe a-posteriori limited high-order CAT scheme
###### Abstract
In this paper we blend the high order Compact Approximate Taylor (CAT) numerical schemes with an _a posteriori_ Multi-dimensional Optimal Order Detection (MOOD) paradigm to solve hyperbolic systems of conservation laws in 2D. The resulting scheme presents high accuracy on smooth solutions, essentially non-oscillatory behavior on irregular ones, and, almost fail-safe property concerning positivity issues. The numerical results on a set of sanity test cases and demanding ones are presented assessing the appropriate behavior of the CAT-MOOD scheme.
keywords: High-order scheme, CAT, MOOD, HLL/HLLC, Rusanov, Hyperbolic system of conservation laws, Hydrodynamics.
## 1 Introduction
Peter Lax and Burton Wendroff presented their seminal finite difference numerical method more than 60 years ago in [12]. This scheme was designed to solve generic hyperbolic systems of conservation laws. At the core of the Lax-Wendroff (LW) scheme lies the so-called LW procedure, which relies on Taylor expanding the solution in time up to second order of accuracy, then replacing the time derivatives by space derivatives according to the governing equations, and, finally, approximating the space derivatives by finite differences. This procedure has proved extremely fruitful. It is still used to design modern numerical schemes, for instance the Approximate Taylor methods introduced by Zorio et al. [30], or the Compact Approximate Taylor (CAT) family [3; 2; 1; 20], and others.
When dealing with discontinuous solutions, which may occur for any hyperbolic system of Partial Differential Equations (PDEs), the key point of most numerical methods is their ability to dissipate appropriately. In other words, extra dissipation must be added. The questions of where, when, and how much dissipation is to be added are of paramount importance to ensure that the numerical method handles smooth flows and discontinuous solutions equally well. Limiting second-order schemes has been achieved with slope or flux limiters relying on maximum-principle preservation, or related procedures. Beyond second-order accuracy, limiting is no longer a well-agreed subject of research. The best-known technique is presumably the ENO/WENO procedure [24] for finite volume (FV) or finite difference (FD) schemes. Most limiting techniques rely on blending the first-order scheme/flux/reconstruction with a high-order one, using some _a priori_ sensor to determine where this blending is appropriate. The
limiting entirely depends on the quality of the _a priori_ sensor, which must determine "where to act?" and the amount of blending, i.e. "how much dissipation?", to ensure that the numerical solution is physically and numerically acceptable. Based on this philosophy, high-order CAT schemes have been supplemented with an automatic limiting procedure in [1], called ACAT. However, this procedure suffers from the same drawback: it relies on _a priori_ sensors, which are difficult to design. By contrast, in this work we operate a shift and couple the high-order CAT schemes with the _a posteriori_ MOOD limiting procedure [5; 6; 7]. The philosophy of MOOD is that it is easier to observe the inappropriate consequences of using a high-order scheme than to predict them with _a priori_ sensors. Consequently, within a MOOD loop the unlimited high-order explicit scheme is used for the current time step to produce a candidate numerical solution, which is then tested against detection criteria that flag invalid cells. The valid cells are kept, while the invalid ones are recomputed with a less accurate but more robust scheme, possibly a second-order limited scheme or, more drastically, a robust first-order one. The goal of this paper is to provide a proof of concept for the design and validation of a CAT-MOOD scheme for systems of conservation laws.
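To fix ideas, the a posteriori loop can be summarized by the following sketch; the detection criteria shown (finite values and a user-supplied admissibility check, e.g. positivity) are a simplified stand-in for the full detection chain, and the cell-wise replacement ignores the flux-consistency bookkeeping of an actual MOOD implementation.

```python
import numpy as np

def mood_step(u, dt, high_order_update, robust_update, is_admissible):
    """One a posteriori MOOD time step (schematic sketch).

    high_order_update / robust_update : callables returning the candidate
    solution computed with the unlimited high-order scheme and with the
    robust (first-order) scheme, respectively.
    is_admissible : cell-wise detector returning a boolean mask.
    Note: the cell-wise mixing below ignores the interface-flux bookkeeping
    needed to keep the recomputation strictly conservative.
    """
    candidate = high_order_update(u, dt)                  # unlimited candidate
    bad = ~np.isfinite(candidate) | ~is_admissible(candidate)
    if bad.any():                                         # decrement only where needed
        candidate = np.where(bad, robust_update(u, dt), candidate)
    return candidate
```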
The rest of this paper is organized as follows. The second section introduces the governing equations. Next, the Lax-Wendroff procedure and the CAT schemes are presented. We re-derive the second-, fourth- and sixth-order accurate versions of CAT schemes and the generic CAT2P scheme, along with their adaptive limiter. The fourth section presents the blending of CAT schemes with the _a posteriori_ MOOD procedure to replace the adaptive limiter. Numerical results are gathered in the fifth section to assess the good behavior of the CAT-MOOD sixth order scheme on smooth or discontinuous solutions. Conclusions and perspectives are finally drawn.
## 2 Governing equations
### 1D linear and non-linear scalar conservation laws
To simplify the description of the numerical methods we also consider the non-linear scalar conservation law on the \(Oxt\)-Cartesian frame
\[u_{t}+\partial_{x}f(u)=0, \tag{1}\]
where \(u=u(x,t):\mathbb{R}\times\mathbb{R}^{+}\rightarrow\mathbb{R}\) denotes the scalar variable, and \(f(u)=f(u(x,t))\) the non-linear flux depending on \(u\). \(u(x,0)=u_{0}(x)\) denotes the initial condition (IC), while boundary conditions (BC) are prescribed depending on the test case: for instance periodic, Dirichlet, or Neumann conditions.
Equation (1) is the generic non-linear scalar conservation law; the simplest example is probably Burgers' equation, for which the flux is \(f(u)=u^{2}/2\). It also reduces to the linear scalar advection equation when \(f(u)=au\), with \(a\in\mathbb{R}\) the advection velocity.
In the following we denote the partial derivative in time and space with under-script letters as \(u_{t}\equiv\partial_{t}u\) and \(u_{x}\equiv\partial_{x}u\).
### 2D gas-dynamics system of conservation laws
In this paper we focus on hyperbolic systems of conservation laws (Partial Differential Equations, PDEs) in 1D and 2D of the form
\[\partial_{t}\mathbf{U}+\nabla\cdot\mathbb{F}(\mathbf{U})=0, \tag{2}\]
where \(t\in\mathbb{R}^{+}\) represents the time variable, \(\mathbf{x}=(x,y)\in\mathbb{R}^{2}\) the space variable in 2 dimensions. \(\mathbf{U}=\mathbf{U}(\mathbf{x},t)\) is the vector of conserved variables while \(\mathbb{F}(\mathbf{U})=(\mathbf{F}(\mathbf{U}(\mathbf{x},t)),\mathbf{G}( \mathbf{U}(\mathbf{x},t)))^{t}\) is the flux vector. \(\nabla\cdot\) is the divergence operator which allows us to rewrite (2) as
\[\partial_{t}\mathbf{U}+\partial_{x}\mathbf{F}(\mathbf{U})+\partial_{y} \mathbf{G}(\mathbf{U})=\mathbf{0}. \tag{3}\]
In this work we mainly focus on the gas-dynamics system of PDEs (Euler equations) where \(\mathbf{U}=(\rho,\rho u,\rho v,\rho e)^{\top}\) with \(\rho\) the density, \(\mathbf{u}=(u,v)\) the velocity vector, \(e=\varepsilon+\frac{1}{2}\|\mathbf{u}\|^{2}\) the total energy per unit mass, and \(\varepsilon\) the internal one. The flux tensor is given by
\[\mathbb{F}(\mathbf{U})=\left(\begin{array}{cc}\mathbf{F}(\mathbf{U})& \mathbf{G}(\mathbf{U})\end{array}\right),\quad\text{with}\quad\mathbf{F}( \mathbf{U})=\left(\begin{array}{c}\rho u\\ \rho u^{2}+p\\ \rho uv\\ (\rho e+p)u\end{array}\right),\quad\mathbf{G}(\mathbf{U})=\left(\begin{array} []{c}\rho v\\ \rho uv\\ \rho v^{2}+p\\ (\rho e+p)v\end{array}\right). \tag{4}\]
The system is closed by an Equation Of State (EOS) which specifies the value of the pressure \(p\) as a function of two thermodynamic variables, for instance of the form \(p=p(\rho,\varepsilon)\). For a polytropic gas we have \(p=(\gamma-1)\rho\varepsilon\) with \(\gamma\) the polytropic constant characterizing the type of gas considered. The sound speed is given by \(a^{2}=\gamma p/\rho\). The equations in (2) express the conservation of mass, momentum and total energy. An entropy inequality is supplemented to the system of PDEs to ensure that the weak solutions are the entropic ones. This system is hyperbolic with eigenvalues \(\lambda^{-}=\mathbf{u}\cdot\mathbf{n}-a\), \(\lambda^{0}=\mathbf{u}\cdot\mathbf{n}\) (multiplicity 2), \(\lambda^{+}=\mathbf{u}\cdot\mathbf{n}+a\) where \(\mathbf{n}\) is a generic unit vector indicating a direction in 2D. It is well known that the physical states all belong to
\[\mathcal{A}=\left\{\mathbf{U}\in\mathbb{R}^{4},\text{ such that }\rho>0,\;p>0 \right\}. \tag{5}\]
The primitive variables are the components of vector \(\mathbf{W}=(\rho,u,v,p)^{\top}\) and are computed from the conservative ones as
\[u=(\rho u)/\rho,\qquad v=(\rho v)/\rho,\qquad p=(\gamma-1)\left((\rho e)-\frac{1}{2}\left((\rho u)^{2}+(\rho v)^{2}\right)/\rho\right). \tag{6}\]
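A direct transcription of Eq. (6) reads as follows (a sketch; the value \(\gamma=1.4\) in the example is an assumption):

```python
import numpy as np

def conservative_to_primitive(U, gamma=1.4):
    """Convert U = (rho, rho*u, rho*v, rho*e) into W = (rho, u, v, p), Eq. (6)."""
    rho, mx, my, rhoe = U
    u, v = mx / rho, my / rho
    p = (gamma - 1.0) * (rhoe - 0.5 * (mx**2 + my**2) / rho)
    return np.array([rho, u, v, p])

# example state with rho = 1, u = 0.5, v = 0, p = 1  (rho*e = p/(gamma-1) + rho*|u|^2/2)
U = np.array([1.0, 0.5, 0.0, 1.0 / 0.4 + 0.5 * 0.25])
print(conservative_to_primitive(U))   # -> [1.  0.5 0.  1. ]
```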
System (2) is further equipped with Initial Conditions (IC) and Boundary Conditions (BC). This system of PDEs is the target one, but a simpler one is also considered in the next section for the sake of simplicity.
### Mesh
In this article we consider logical rectangular meshes in 1D and 2D. The time domain \(\mathcal{T}=[0,T]\) with final time \(T>0\) is split into time intervals \([t^{n},t^{n+1}]\), \(n\in\mathbb{N}\), and time-steps \(\Delta t=t^{n+1}-t^{n}\) subject to a CFL (Courant-Friedrichs-Lewy) condition. The computational domain denoted \(\Omega\) is a segment/rectangle in 1D/2D. In 1D, \(\Omega\) is paved with \(N_{x}\) cells. The generic cell is denoted \(\omega_{i}\) and indexed by a unique label \(1\leq i\leq N_{x}\). Classically we identify the cell end-points by half indexes so that \(\omega_{i}=[x_{i-1/2},x_{i+1/2}]\) and the cell center is given by \(x_{i}=\frac{1}{2}(x_{i+1/2}+x_{i-1/2})\). The size of a cell is given by \(\Delta x_{i}=x_{i+1/2}-x_{i-1/2}\), that we simply denote as \(\Delta x\), since we assume, for simplicity, that the grid is uniform. In 2D, \(\Omega\) is paved with \(N_{x}\times N_{y}\) cells. The generic cell is denoted \(\omega_{i,j}\) and indexed by a double label \(1\leq i\leq N_{x}\) and \(1\leq j\leq N_{y}\). The four vertices of any cell \(\omega_{i,j}\) are \(\textbf{{x}}_{i\pm 1/2,j\pm 1/2}=(x_{i\pm 1/2},y_{j\pm 1/2})\). \(x_{i+1/2}\) represents a vertical mesh line, while \(y_{j+1/2}\) a horizontal one. Accordingly, \(\omega_{i,j}=[x_{i-1/2},x_{i+1/2}]\times[y_{j-1/2},y_{j+1/2}]\), and, the cell center is given by \(\textbf{{x}}_{i,j}=(x_{i},y_{j})=\left(\frac{x_{i+1/2}+x_{i-1/2}}{2},\frac{y_ {j+1/2}+y_{j-1/2}}{2}\right)\). The size of a cell is given by \(\Delta x_{i}\times\Delta y_{j}\) with \(\Delta y_{j}=y_{j+1/2}-y_{j-1/2}\), that we simply denote as \(\Delta x\) and \(\Delta y\), since we shall adopt a uniform mesh throughout the paper. We call an interface or face, the intersection between two neighbor cells, that is a point in 1D and a segment in 2D. The neighbor cells of a generic one are those with a non-empty intersection. A generic cell has two/eight neighbors in 1D/2D on logical rectangular grids. In 2D we make the difference between the four face-neighbors and the four corner-neighbors. A "stencil" in 1D is a collection of \(K>0\) cells surrounding and including the current one, onto which derivatives are approximated.
### Notation
In this article the scheme description is made mostly in 1D for the sake of clarity. The following notations refer to the different type of derivatives or approximations.
* \(u^{(k)}_{i,j}\) is the \(k\)-th time derivative of \(u\) at time \(t^{n}\) in position \(x_{i+j}\), where \(i\) refers to the cell and \(j\) to the position in the stencil. In general, for a scheme of order \(2P,\)\(k=1,\ldots,2P-1\) and \(j=-P+1,\ldots,P.\)
* \(f^{(k)}_{i,j}\) is the \(k\)-th time derivative of \(f(u)\) at time \(t^{n}\) in position \(x_{i+j}\), likewise for \(u\). In general, for a scheme of order \(2P,\)\(k=0,\ldots,2P-1\) and \(j=-P+1,\ldots,P,\) under the assumption that \(f^{(0)}_{i,j}=f(u^{n}_{i+j}).\)
* \(u^{k,n+r}_{i,j}\) is the explicit Taylor expansion of function \(u\) in time truncated to order \(k\), centered at time \(t^{n}\) at distance \(r\Delta t\) in time and at spatial location \(x_{i+j}\). Again \(i\) refers to the cell and \(j\) to the position in the stencil. In general, for a scheme of order \(2P,\)\(k=1,\ldots,2P-1,\) while \(j,r=-P+1,\ldots,P.\)
* \(f^{k,n+r}_{i,j}\) refers to \(f\Big{(}u^{k,n+r}_{i,j}\Big{)}.\)
* The symbol \(*\) will be used to indicate to which index (space or time) the differentiation is applied, as illustrated in the equations below for the approximation of space and time derivatives, respectively (see also (21)): \[\partial_{x}^{k}f(x_{i}+q\Delta x,t^{n})\approx A_{p}^{k,q}(f^{n}_{i,*},\Delta x)=\frac{1}{\Delta x^{k}}\sum_{j=-p+1}^{p}\gamma_{p,j}^{k,q}f^{n}_{i+j}, \tag{7}\] \[\partial_{t}^{k}f(x_{i},t^{n})\approx A_{p}^{k,0}(f^{n,*}_{i},\Delta t)=\frac{1}{\Delta t^{k}}\sum_{r=-p+1}^{p}\gamma_{p,r}^{k,0}f^{n,r}_{i}. \tag{8}\]
From now on, since all the formulas are computed at time \(t=t^{n}\), we avoid the extra index \(n\) on equations like (7) and (8).
Figure 1: Left: Logical rectangular grid and notation used in this paper. Right: illustration of spatial stencil in 1D around cell \(i\) with stencils \(i,j\) with \(-P+1\leq j\leq P\).
## 3 Compact Approximate Taylor (CAT) schemes
The focus of this section is the presentation of a family of numerical methods for non-linear systems of conservation laws, named Compact Approximate Taylor (CAT) schemes1. This family is based on an approximate Taylor procedure that constitutes a proper generalization of the Lax-Wendroff (LW) method, in the sense that it reduces to the standard high-order LW method when the flux is linear. In this section we recall the LW and CAT procedures of second and fourth order.
Footnote 1: For simplicity we will summarize the numerical deduction of the method in the scalar case.
### Lax Wendroff procedure
A scheme of historical as well as practical importance is the celebrated **Lax Wendroff** scheme introduced by Peter Lax and Burton Wendroff in 1960 in [12]; see also [14; 28; 13; 10; 27]. It has been the most widely adopted scheme for aeronautical applications up to the end of the 1980s, in various forms.2
Footnote 2: Probably the most widely used variant is the two-step Mac Cormack scheme, which has properties similar to LW schemes, and avoids the computation of the second derivative. Published originally at a conference in 1969, the paper has been reproduced in [21].
The original derivation of Lax and Wendroff was based on a Taylor expansion in time of function \(u\) at point \((x_{i},t)\) up to second order of accuracy, thus
\[u(x_{i},t+\Delta t)=u(x_{i},t)+\Delta t\,u_{t}(x_{i},t)+\frac{\Delta t^{2}}{2 }u_{tt}(x_{i},t)+O(\Delta t^{3}), \tag{9}\]
where \(\Delta t>0\) is a small increment in time. The numerical scheme is then obtained by neglecting the higher order term in \(\Delta t\), using the governing equation to replace time derivatives by spatial ones, and then substituting the obtained space derivatives with their finite difference approximations. For the _linear case_, \(u_{t}=-au_{x}\) with \(f(u)=au\), we obtain
\[u(x_{i},t^{n}+\Delta t)=u(x_{i},t^{n})-\Delta t\,au_{x}(x_{i},t^{n})+\frac{ \Delta t^{2}}{2}a^{2}u_{xx}(x_{i},t)+O(\Delta t^{3}). \tag{10}\]
Using centred finite differences to approximate spatial derivatives, the numerical scheme follows as:
\[u_{i}^{n+1}=u_{i}^{n}-\Delta t\,a\,\frac{u_{i+1}^{n}-u_{i-1}^{n}}{2\Delta x}+ \frac{\Delta t^{2}}{2}a^{2}\frac{u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n}}{\Delta x ^{2}}, \tag{11}\]
where \(u_{i}^{m}\) is an approximation of the point value of the solution at position \(x_{i}\) at the time \(t^{m}\). A useful alternative formulation written in conservative form yields
\[u_{i}^{n+1}=u_{i}^{n}-\frac{\Delta t}{\Delta x}\left(F_{i+\frac{1}{2}}^{\rm LW }-F_{i-\frac{1}{2}}^{\rm LW}\right), \tag{12}\]
where the so-called LW numerical flux, \(F_{i+\frac{1}{2}}^{\rm LW}\), is given by
\[F_{i+\frac{1}{2}}^{\rm LW}=\frac{a}{2}\left(u_{i+1}^{n}+u_{i}^{n}\right)- \frac{a^{2}\Delta t}{2\Delta x}\left(u_{i+1}^{n}-u_{i}^{n}\right). \tag{13}\]
The _non-linear case_, \(u_{t}=-f_{x}(u)\), yields
\[u_{i}^{n+1}=u_{i}^{n}-\frac{\Delta t}{2\Delta x}\left(f_{i+1}^{n}-f_{i-1}^{n} \right)+\frac{\Delta t^{2}}{2\Delta x^{2}}\left({\rm A}_{i+\frac{1}{2}}\left( f_{i+1}^{n}-f_{i}^{n}\right)-{\rm A}_{i-\frac{1}{2}}\left(f_{i}^{n}-f_{i-1}^{n} \right)\right), \tag{14}\]
where \(f^{n}_{i+j}=f(u^{n}_{i+j})\) for \(j=-1,0,1\) and \(\mathrm{A}\) is an approximation of the derivative of \(f\), i.e. \(\mathrm{A}\approx\partial f/\partial u\). Hence, \(\mathrm{A}_{i\pm\frac{1}{2}}\) is the approximate derivative of \(f\) evaluated either at \(u^{n}_{i\pm\frac{1}{2}}=\frac{1}{2}(u^{n}_{i}+u^{n}_{i\pm 1})\), that is \(\mathrm{A}_{i\pm\frac{1}{2}}\equiv\mathrm{A}(u^{n}_{i\pm\frac{1}{2}})\), or as the average of the cell-based derivatives, that is \(\mathrm{A}_{i\pm\frac{1}{2}}\equiv\frac{1}{2}\left(\mathrm{A}(u^{n}_{i})+\mathrm{A}(u^{n}_{i\pm 1})\right)\). Notice that these coefficients depend non-linearly on \(u\). Moreover, they are always evaluated at time \(t^{n}\), so we can omit this time dependency. The alternative conservative formulation of the LW scheme is expressed as:
\[u^{n+1}_{i}=u^{n}_{i}-\frac{\Delta t}{\Delta x}\left(F^{\mathrm{LW}}_{i+\frac{ 1}{2}}-F^{\mathrm{LW}}_{i-\frac{1}{2}}\right), \tag{15}\]
where the numerical flux
\[F^{\mathrm{LW}}_{i+\frac{1}{2}}=\underbrace{\frac{1}{2}\left(f^{n}_{i+1}+f^{ n}_{i}\right)}_{\text{Physical flux}}-\underbrace{\frac{\Delta t}{2\Delta x}A_{i+\frac{1}{2}}\left(f^{n}_{i+1}-f^{ n}_{i}\right)}_{\text{Dissipation}}, \tag{16}\]
is composed of two parts: the average of the physical fluxes at cells \(i\) and \(i+1\), and, the numerical dissipation.
### Compact Approximate Taylor (CAT) procedure
The generalized Lax-Wendroff method is used to update the numerical solution:
\[u^{n+1}_{i}=u^{n}_{i}+\sum_{k=1}^{2P}\frac{(\Delta t)^{k}}{k!}u^{(k)}_{i}, \tag{17}\]
where we recall that \(u^{n}_{i}\) is an approximation of the value of the exact solution \(u(x,t)\) at time \(t^{n}\) at position \(x_{i}\)[9], and, \(u^{(k)}_{i}\) is an approximation of \(\partial^{k}_{t}u(x_{i},t^{n})\). The \(k\)-th derivative in time of \(u\) is computed with a compact and numerical version of the Cauchy-Kovalesky procedure introduced by Carrillo and Pares in [3].
The final expression of the \(2P\)-order CAT method in conservative form is:
\[u^{n+1}_{i}=u^{n}_{i}+\frac{\Delta t}{\Delta x}\left(F^{P}_{i-\frac{1}{2}}-F^{ P}_{i+\frac{1}{2}}\right). \tag{18}\]
Let us introduce the sets \(\mathcal{S}^{P}_{i\pm\frac{1}{2}}\) of values \(u^{n}_{i}\) on stencils centered around interface \(i\pm\frac{1}{2}\) of size \(2P\), that is
\[\mathcal{S}^{P}_{i+\frac{1}{2}}=\left\{u^{n}_{i-P+1},\ldots,u^{n}_{i},u^{n}_{i +1},\ldots u^{n}_{i+P}\right\}. \tag{19}\]
The flux functions \(F^{P}_{i\pm\frac{1}{2}}\) are then computed, respectively, on the sets \(\mathcal{S}^{P}_{i\pm\frac{1}{2}}\), as
\[F^{P}_{i+\frac{1}{2}}=\sum_{k=1}^{2P}\frac{\Delta t^{k-1}}{k!}f^{(k-1)}_{i+ \frac{1}{2}}, \tag{20}\]
and
\[f^{(k-1)}_{i+\frac{1}{2}}=\mathcal{A}^{0,\frac{1}{2}}_{P}\left(f^{(k-1)}_{i, \ast},\Delta x\right),\quad\text{with}\quad\mathcal{A}^{0,\frac{1}{2}}_{P} \left(f^{(k-1)}_{i,\ast},\Delta x\right)=\sum_{p=-P+1}^{P}\gamma^{0,\frac{1}{2 }}_{P,p}\,f^{(k-1)}_{i+p}, \tag{21}\]
where \(\mathcal{A}^{0,\frac{1}{2}}_{P}\) is an interpolation formula of order \(2P-1\) based on \(2P\)-point stencil. In the following we use the index \(i\) for the cell global index, \(j\) for the local position inside the stencil, \(r\) for the Taylor expansion in time, and, \(k/(k)\) to refer to the \(k\)-th time step/\((k)\)th time derivative. For the sake of clarity, we detail the description of the second order (\(P=1\)) CAT2, in the next sub-section, and CAT4 in the Appendix 7.1.
#### 3.2.1 Second order version - CAT2
Let \(p\in\mathbb{N}\) denote an integer such that \(0\leq p\leq P\). In the case \(P=1\), the stencil is simply \(\mathcal{S}^{1}_{i+\frac{1}{2}}=\{u^{n}_{i},u^{n}_{i+1}\}\) for interface \(i+1/2\), while the flux reconstructions are:
\[F^{1}_{i+\frac{1}{2}}=f^{(0)}_{i+\frac{1}{2}}+\frac{\Delta t}{2}f^{(1)}_{i+ \frac{1}{2}},\qquad F^{1}_{i-\frac{1}{2}}=f^{(0)}_{i-\frac{1}{2}}+\frac{ \Delta t}{2}f^{(1)}_{i-\frac{1}{2}}, \tag{22}\]
where \(f^{(0)}_{i+\frac{1}{2}}=\frac{1}{2}\left(f^{n}_{i}+f^{n}_{i+1}\right)\) is the interpolation of the flux at time \(t^{n}\), while \(f^{(1)}_{i+\frac{1}{2}}\) is the interpolation of the first time-derivative of the flux for any interface \(i+1/2\). This implies that \(\gamma^{0,\frac{1}{2}}_{1,p}=\frac{1}{2}\) in (21). These are computed over stencils \(\mathcal{S}^{1}_{i\pm\frac{1}{2}}\) for \(p\in\{0,1\}\) as
\[\mathcal{S}^{1}_{i-\frac{1}{2}}:\quad f^{(1)}_{i-\frac{1}{2}}=\frac{f^{(1)}_{i-1,0}+f^{(1)}_{i-1,1}}{2},\quad f^{(1)}_{i-1,p}=\frac{f\left(u^{n}_{i-1+p}+\Delta t\,u^{(1)}_{i-1,p}\right)-f^{n}_{i-1+p}}{\Delta t},\quad u^{(1)}_{i-1,p}=-\frac{f^{n}_{i}-f^{n}_{i-1}}{\Delta x},\]
\[\mathcal{S}^{1}_{i+\frac{1}{2}}:\quad f^{(1)}_{i+\frac{1}{2}}=\frac{f^{(1)}_{i,0}+f^{(1)}_{i,1}}{2},\quad f^{(1)}_{i,p}=\frac{f\left(u^{n}_{i+p}+\Delta t\,u^{(1)}_{i,p}\right)-f^{n}_{i+p}}{\Delta t},\quad u^{(1)}_{i,p}=-\frac{f^{n}_{i+1}-f^{n}_{i}}{\Delta x}.\]
Notice that the \(\mathcal{S}^{1}_{i+\frac{1}{2}}\) expressions can be obtained from \(\mathcal{S}^{1}_{i-\frac{1}{2}}\) by replacing \(i\) by \(i+1\).
Finally, the expanded form of the fluxes (22) is given by
\[F^{1}_{i-\frac{1}{2}} =\frac{1}{4}\left(f^{n}_{i-1}+f^{n}_{i}+f\left(u^{n}_{i-1}+\Delta t \,u^{(1)}_{i-1,0}\right)+f\left(u^{n}_{i}+\Delta t\,u^{(1)}_{i-1,1}\right) \right), \tag{23}\] \[F^{1}_{i+\frac{1}{2}} =\frac{1}{4}\left(f^{n}_{i}+f^{n}_{i+1}+f\left(u^{n}_{i}+\Delta t \,u^{(1)}_{i,0}\right)+f\left(u^{n}_{i+1}+\Delta t\,u^{(1)}_{i,1}\right) \right), \tag{24}\]
and the solution is obtained by substituting these fluxes in formula (18).
The computation of the interfacial flux \(F^{1}_{i+\frac{1}{2}}\) can be recast into the algorithm:
1. Compute \(f^{(0)}_{i+\frac{1}{2}}\) adopting an interpolation formula over stencil \(\mathcal{S}^{1}_{i+\frac{1}{2}}\) at time \(t^{n}\).
2. Compute the first derivatives \(u^{(1)}_{i,p}\) in time through the numerical compact Cauchy-Kovalesky procedure, using \(\partial_{t}u=-\partial_{x}f\), with data at time \(t^{n}\).
3. Compute the truncated Taylor expansions: \(u^{1,n+1}_{i,p}=u^{n}_{i+p}+\Delta t\,u^{(1)}_{i,p}\) for \(p=0\) and \(1\).
4. Compute the first time derivatives of the flux using the first difference formulas: \[f^{(1)}_{i,p}=\frac{f\left(u^{1,n+1}_{i,p}\right)-f^{n}_{i+p}}{\Delta t}.\]
5. Compute \(f^{(1)}_{i+\frac{1}{2}}\) from \(f^{(1)}_{i,p}\) adopting an interpolation formula on stencil \(\mathcal{S}^{1}_{i+\frac{1}{2}}\).
6. Compute \(F^{1}_{i+\frac{1}{2}}\) as a Taylor expansion: \(F^{1}_{i+\frac{1}{2}}=f^{(0)}_{i+\frac{1}{2}}+\frac{\Delta t}{2}f^{(1)}_{i+\frac{1}{2}}\) with (22).
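As an illustration of these steps, a minimal sketch of the CAT2 update (18) with the fluxes (24) for Burgers' equation on a periodic domain is given below; the CFL-based time-step choice in the usage example is an assumption.

```python
import numpy as np

def cat2_step_burgers(u, dt, dx):
    """One CAT2 step (18) with the flux (24) for f(u) = u^2/2, periodic BC."""
    f = lambda w: 0.5 * w**2
    fn = f(u)
    fp = np.roll(fn, -1)                        # f^n_{i+1}
    u1 = -(fp - fn) / dx                        # u^{(1)}_{i,p}, identical for p = 0, 1
    F = 0.25 * (fn + fp + f(u + dt * u1) + f(np.roll(u, -1) + dt * u1))  # Eq. (24)
    return u + dt / dx * (np.roll(F, 1) - F)    # update (18): F_{i-1/2} - F_{i+1/2}

# usage: smooth data on a periodic unit interval, CFL-limited steps (before shock formation)
N, cfl = 200, 0.5
x = (np.arange(N) + 0.5) / N
u = 0.5 + 0.25 * np.sin(2.0 * np.pi * x)
t, t_end = 0.0, 0.2
while t < t_end:
    dt = min(cfl * (1.0 / N) / np.abs(u).max(), t_end - t)
    u = cat2_step_burgers(u, dt, 1.0 / N)
    t += dt
print(u.min(), u.max())
```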
#### 3.2.2 CAT2p
Generically, the \(2P\) CAT scheme follows formulation (18) with the interface fluxes given by (20). The expression of the right numerical flux of order \(2P\) is obtained with formula:
\[F^{P}_{i+\frac{1}{2}}=\sum_{k=1}^{2P}\frac{\Delta t^{k-1}}{k!}\mathcal{A}^{0,\frac{1}{2}}_{P}\left(f^{(k-1)}_{i,*},\Delta x\right)=\sum_{k=1}^{2P}\frac{\Delta t^{k-1}}{k!}\sum_{j=-P+1}^{P}\gamma^{0,\frac{1}{2}}_{P,j}f^{(k-1)}_{i,j}, \tag{25}\]
where the high order time derivatives of the flux are computed following and extending the iterative algorithm presented for CAT2 in the previous section (see also [3; 2; 1] for more details):
1. Define \(f^{(0)}_{i,j}:=f(u^{n}_{i+j})\) for all \(j=-P+1,\ldots,P\);
2. For every \(k=1,\ldots,2P-1\): 1. Compute the \(k\)-th derivative of \(u\) at time step \(t^{n}\) for each position \(x_{i+j}\) with \(j=-P+1,\ldots,P\) through the numerical compact version of the Cauchy-Kovalesky identity (47) as: \[u^{(k)}_{i,j}=-\mathcal{A}^{1,j}_{P}\left(f^{(k-1)}_{i,*},\Delta x\right);\] 2. Compute the Taylor expansion of \(u\) in time truncated to term \(k\) for all positions \(x_{i+j}\) with \(j=-P+1,\ldots,P\) at time \(t^{n+r}\) with \(r=-P+1,\ldots,P\) as: \[u(x_{i+j},t^{n+r})\approx u^{k,n+r}_{i,j}=u^{n}_{i+j}+\sum_{m=1}^{k}\frac{(r \Delta t)^{m}}{m!}u^{(m)}_{i,j};\] 3. Compute the \(k-\)th time derivative of flux for each position \(x_{i+j}\) with \(j=-P+1,\ldots,P\) at time \(t^{n}\) as: \[f^{(k)}_{i,j}=\mathcal{A}^{k,j}_{P}\left(f^{k,*}_{i,j},\Delta t\right),\] where \(f^{k,*}_{i,j}\) means that we are applying the \(\mathcal{A}\) operator in time and in particular we apply the differentiation formula to the set of flux approximations \[f^{k,n-P+1}_{i,j},\ldots,f^{k,n+P}_{i,j},\] in which \(f^{k,n+r}_{i,j}=f\left(u^{k,n+r}_{i,j}\right)\) for all \(j,r=-P+1,\ldots,P\).
**Remark 1**.: _Observe that the computation of the numerical flux \(F^{P}_{i+1/2}\) requires the approximation of \(u\) at the nodes of a space-time grid of size \(2P\times 2P\), represented by \(u^{k,n+r}_{i,j}\), for \(-P+1\leq j,r\leq P\) (see Figure 17). The approximations of the solution \(u\) at successive times \((n-P+1)\Delta t\),..., \((n-1)\Delta t\) are different from the ones already computed in the previous time steps \(t^{n-P},\ldots,t^{n-1}\), which are \(u^{n-P}_{i+j}\),..., \(u^{n-1}_{i+j}\). In other words, the discretization in time is not based on a multi-step method but on a one-step one. In fact, it can be re-interpreted as a Runge-Kutta method whose stages are \(u^{n+r}_{i,j}\), \(r=-P+1,\ldots,P\), see [3; 2; 1]._
**Remark 2**.: _These approximations are local. Indeed, suppose that \(i_{1}+j_{1}=i_{2}+j_{2}=\ell\), i.e. \(x_{\ell}\) belongs to \(\mathcal{S}^{P}_{i_{1}+1/2}\) and \(\mathcal{S}^{P}_{i_{2}+1/2}\) with local coordinates \(j_{1}\) and \(j_{2}\) respectively. Then, \(f^{(k)}_{i_{1},j_{1}}\) and \(f^{(k)}_{i_{2},j_{2}}\) are, in general, two different approximations of \(\partial^{k}_{t}f(u)(x_{\ell},t^{n})\)._
#### 3.2.3 Computational complexity
In this paragraph we estimate the computational complexity for the CAT2\(P\) scheme. The details about the complexity of the algorithm can be found in Appendix 7.2. As a summary, the operation count per cell per time step for CAT2\(P\), \(P>1\), applied to the scalar case is \(3.5(2P)^{3}-1.5(2P)^{2}+(2P)\) flop plus \((2P)^{3}-2(2P)^{2}+(2P)+1\) function evaluations. For CAT2 this gives 24 flop and 3 function evaluations, while for CAT4 we get 204 flop and 37 function evaluations.
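These counts can be checked directly with a few lines of code (an illustrative snippet added here):

```python
def cat_cost(P):
    """Flop and flux-function evaluations per cell and per time step (scalar case)."""
    n = 2 * P
    flops = 3.5 * n**3 - 1.5 * n**2 + n
    fevals = n**3 - 2 * n**2 + n + 1
    return flops, fevals

print(cat_cost(1))   # (24.0, 3)   -> CAT2
print(cat_cost(2))   # (204.0, 37) -> CAT4
```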
### Adaptive limiter for CAT schemes - ACAT schemes
Although the Compact Approximate Taylor (CAT) schemes are linearly stable in the \(L^{2}\)-sense under the usual CFL-1 condition, they may produce bounded oscillations close to discontinuities of the solution. To avoid these spurious phenomena, an adaptive _a priori_ shock-capturing technique, called ACAT, has been developed in [2]. There, the order of the method is locally adapted to the smoothness of the numerical solution by means of indicators which check the regularity of the data at each time step. More specifically, once the approximations of the solution \(u\) at time \(t^{n}\) have been computed, the stencil of data actually used to compute the right flux \(F_{i+1/2}^{P}\) is selected among
\[\mathcal{S}_{p}=\{u_{i-p+1}^{n},\ldots,u_{i+p}^{n}\},\quad p=1,\ldots,P.\]
The selected stencil is the one with maximal length among those in which the solution at time \(t^{n}\) is 'smooth'. The smoothness is assessed according to some smoothness indicators: \(\psi_{i+1/2}^{p}\), for \(p=1,\ldots,P\), which are defined as:
\[\psi_{i+1/2}^{p}\approx\left\{\begin{array}{ll}1&\text{ if $u$ is 'smooth' in $\mathcal{S}_{p}$},\\ 0&\text{otherwise.}\end{array}\right. \tag{26}\]
For this strategy one needs to define a robust first-order flux reconstruction, for instance a Rusanov-, HLL- or HLLC-based flux reconstruction [27; 15]. Next, one can employ a TVD flux-limiter for the second-order flux reconstruction to be combined with CAT2, such as, for instance, minmod, van Albada or superbee [15].
_ACAT2._ The expression of the ACAT2 numerical method (for \(P=1\)) based on a flux limiter (see [15; 16; 27]) is given by
\[u_{i}^{n+1}=u_{i}^{n}+\frac{\Delta t}{\Delta x}\left(F_{i-1/2}^{*}-F_{i+1/2}^{ *}\right), \tag{27}\]
where the fluxes \(F_{i+1/2}^{*}\) are blended as
\[F_{i\pm 1/2}^{*} = \varphi_{i\pm 1/2}^{1}\,F_{i\pm 1/2}^{1}+(1-\varphi_{i\pm 1/2}^{1}) \,F_{i\pm 1/2}^{\text{low}}. \tag{28}\]
\(F_{i\pm 1/2}^{1}\) is the CAT2 flux given by (24)-(23), while \(F_{i\pm 1/2}^{\text{low}}\) is the first order flux reconstruction, and, \(\varphi_{i\pm 1/2}^{1}\) is a switch computed by a flux limiter which verifies
\[\varphi_{i-1/2}^{1}\approx\begin{cases}1&\text{if $\{u_{i-2}^{n},\ldots,u_{i+1}^{n}\}$ is 'smooth',}\\ 0&\text{otherwise,}\end{cases}\quad\begin{aligned}\varphi_{i+1/2}^{1}\approx\begin{cases}1&\text{if $\{u_{i-1}^{n},\ldots,u_{i+2}^{n}\}$ is 'smooth',}\\ 0&\text{otherwise.}\end{cases}\end{aligned}\]
For scalar problems, standard flux limiter functions, \(\varphi^{1}(r)\), such as minmod, superbee, van Leer [25; 11], may be used:
\[\varphi_{i+1/2}^{1}=\varphi^{1}(r_{i+1/2}), \tag{29}\]
where
\[r_{i+1/2}=\left\{\begin{array}{ll}r_{i+1/2}^{L}=\frac{u_{i}^{n}-u_{i-1}^{n}} {u_{i+1}^{n}-u_{i}^{n}}&\text{if $a_{i+1/2}>0$},\\ r_{i+1/2}^{R}=\frac{u_{i+2}^{n}-u_{i+1}^{n}}{u_{i+1}^{n}-u_{i}^{n}}&\text{if $a_{ i+1/2}\leq 0$},\end{array}\right.\quad a_{i+1/2}=\left\{\begin{aligned} & \frac{f(u_{i+1}^{n})-f(u_{i}^{n})}{u_{i+1}^{n}-u_{i}^{n}}& \text{if $|u_{i}^{n}-u_{i+1}^{n}|>\varepsilon$},\\ & f^{\prime}(u_{i}^{n})&\text{otherwise,}\end{aligned}\right. \tag{30}\]
where \(a_{i+1/2}\) is an approximation of the wave speed, such as Roe's intermediate speed, and \(\varepsilon\) is a small number (but not too small, in order to avoid numerical cancellation). An alternative procedure introduced in [27] avoids the computation of an intermediate speed by defining \(\varphi_{i+1/2}^{1}=\min(\varphi^{1}(r_{i+1/2}^{R}),\varphi^{1}(r_{i+1/2}^{L}))\). This strategy is easily extended to systems by computing the flux limiter component by component.
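A minimal Python/NumPy sketch of the limiter machinery (28)-(30) for a periodic scalar problem is given below (this illustration is ours; minmod is used, but any of the limiter functions mentioned above can be substituted, and `fprime` stands for the derivative \(f'\) used in the fallback of (30)).

```python
import numpy as np

def minmod(r):
    """Minmod limiter function phi^1(r)."""
    return np.maximum(0.0, np.minimum(1.0, r))

def acat2_switch(u, f, fprime, eps=1e-8):
    """Switch phi^1_{i+1/2} of (29)-(30), one value per interface (periodic indexing)."""
    uR = np.roll(u, -1)                            # u_{i+1}
    du = uR - u                                    # u_{i+1} - u_i
    safe = np.where(np.abs(du) > eps, du, 1.0)     # guard against division by ~0
    a = np.where(np.abs(du) > eps, (f(uR) - f(u)) / safe, fprime(u))   # wave speed
    rL = (u - np.roll(u, 1)) / safe                # r^L_{i+1/2}
    rR = (np.roll(u, -2) - uR) / safe              # r^R_{i+1/2}
    # in nearly flat regions the ratios are irrelevant: the blended flux barely differs
    return minmod(np.where(a > 0.0, rL, rR))

def blended_flux(F_cat2, F_low, phi):
    """Eq. (28): F* = phi * F^1 + (1 - phi) * F^low."""
    return phi * F_cat2 + (1.0 - phi) * F_low
```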
_Smoothness indicators._ The smoothness indicators used in this work have been introduced in [2]. Nonetheless we briefly recall their construction for the sake of completeness. Given the point values \(f_{j}\) of a function \(f\) at the nodes of the stencil \(\mathcal{S}_{i+1/2}^{p}\), \(p\geq 2\), we define \(\psi_{i+1/2}^{p}\) as follows. First define the lateral left (\({}^{L}\)) and right (\({}^{R}\)) weights \(w_{i+1/2}^{p,L/R}\) as
\[w_{i+1/2}^{p,L}:=\sum_{j=-p+1}^{-1}(f_{i+1+j}-f_{i+j})^{2}+\varepsilon,\quad w_{ i+1/2}^{p,R}:=\sum_{j=1}^{p-1}(f_{i+1+j}-f_{i+j})^{2}+\varepsilon, \tag{31}\]
where \(\varepsilon=10^{-8}\) is a small quantity only used to prevent the weights from vanishing. Next, using the half harmonic mean, \(w_{i+1/2}^{p}=\frac{w_{i+1/2}^{p,L}w_{i+1/2}^{p,R}}{w_{i+1/2}^{p,L}+w_{i+1/2}^{p,R}}\), one defines the high order smoothness indicator over stencil \(\mathcal{S}_{p}\) by
\[\psi_{i+1/2}^{p}:=\left(\frac{w_{i+1/2}^{p}}{w_{i+1/2}^{p}+\tau_{i+1/2}^{p}} \right),\quad\text{with}\quad\tau_{i+1/2}^{p}=\left((2p-1)!\sum_{j=-p+1}^{p} \gamma_{p,j}^{2p-1,1/2}\,f_{i+j}^{n}\right)^{2}. \tag{32}\]
These indicators are such that
\[\psi_{i+1/2}^{p}\approx\left\{\begin{array}{ll}1&\text{if $\{f_{j}\}$ are 'smooth' in $\mathcal{S}_{i}^{p}$},\\ 0&\text{otherwise},\end{array}\right. \tag{33}\]
see [2] for a precise statement of this property and its proof.
**Remark 3**.: _Observe that, if data in the stencil \(\mathcal{S}_{i}^{p}\) are smooth, then \(w_{i+1/2}^{p,L}=O(\Delta x^{2})\), \(w_{i+1/2}^{p,R}=O(\Delta x^{2})\), and \(\tau_{i+1/2}^{p}=O(\Delta x^{4p})\). Since we adopt the harmonic mean_
\[\frac{1}{w_{i+1/2}^{p}}=\frac{1}{w_{i+1/2}^{p,L}}+\frac{1}{w_{i+1/2}^{p,R}},\]
_then \(w_{i+1/2}^{p}=O(\Delta x^{2})\). As such_
\[\psi_{i+1/2}^{p}=\frac{w_{i+1/2}^{p}}{w_{i+1/2}^{p}+\tau_{i+1/2}^{p}}=\frac{O (\Delta x^{2})}{O(\Delta x^{2})+O(\Delta x^{4p})},\]
_so that \(\psi_{i+1/2}^{p}\) is expected to be close to \(1\). On the other hand, if there is an isolated discontinuity in the stencil then \(\tau_{i+1/2}^{p}=O(1),\) therefore one of the lateral weights is \(O(1)\) and the other \(O(\Delta x^{2})\) so that the harmonic mean implies that \(w_{i+1/2}^{p}=O(\Delta x^{2})\) and thus:_
\[\psi_{i+1/2}^{p}=\frac{w_{i+1/2}^{p}}{w_{i+1/2}^{p}+\tau_{i+1/2}^{p}}=\frac{O (\Delta x^{2})}{O(\Delta x^{2})+O(1)},\]
_so that \(\psi_{i+1/2}^{p}\) is expected to be close to 0._
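For illustration (a sketch of ours with ad hoc names), the indicator (31)-(32) can be coded as follows; on a uniform stencil the inner sum \(\sum_{j}\gamma^{2p-1,1/2}_{p,j}f^{n}_{i+j}\) reduces to the undivided difference of order \(2p-1\) of the data, so the \(\gamma\) coefficients need not be formed explicitly.

```python
import numpy as np
from math import factorial

def smoothness_indicator(fs, p, eps=1e-8):
    """psi^p_{i+1/2} of (31)-(32) from the 2p point values fs = (f_{i-p+1}, ..., f_{i+p})."""
    fs = np.asarray(fs, dtype=float)
    d = np.diff(fs) ** 2                       # (f_{i+1+j} - f_{i+j})^2
    wL = d[:p - 1].sum() + eps                 # left lateral weight,  j = -p+1, ..., -1
    wR = d[p:].sum() + eps                     # right lateral weight, j = 1, ..., p-1
    w = wL * wR / (wL + wR)                    # half harmonic mean
    # on a uniform stencil, sum_j gamma^{2p-1,1/2}_{p,j} f_j is the undivided
    # (2p-1)-th difference, so tau = ((2p-1)! * that difference)^2
    tau = (factorial(2 * p - 1) * np.diff(fs, n=2 * p - 1)[0]) ** 2
    return w / (w + tau)

xs = np.arange(-1.0, 3.0)                      # stencil nodes for p = 2
print(smoothness_indicator(np.sin(0.1 * xs), 2))                 # smooth data -> close to 1
print(smoothness_indicator(np.where(xs < 1.0, 0.0, 1.0), 2))     # jump        -> close to 0
```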
_ACAT2P schemes._ Using these ingredients, the final expression of the ACAT2\(P\) scheme for \(P>1\) is of the form
\[u_{i}^{n+1}=u_{i}^{n}+\frac{\Delta t}{\Delta x}\left(F_{i-\frac{1}{2}}^{ \mathcal{P}_{i}}-F_{i+\frac{1}{2}}^{\mathcal{P}_{i}}\right), \tag{34}\]
where
\[F_{i\pm 1/2}^{\mathcal{P}_{i}}=\begin{cases}F_{i\pm 1/2}^{*}&\text{if $\mathcal{P}_{i}=\emptyset$};\\ F_{i\pm 1/2}^{p_{\max}}&\text{otherwise}.\end{cases} \tag{35}\]
Here, \(\mathcal{P}_{i}\) is the set of consecutive indices
\[\mathcal{P}_{i}=\{p\in\{2,\ldots,P\}\text{ s.t. }\psi_{i+1/2}^{p}\approx 1\}, \quad\text{and}\quad p_{\max}=\max(\mathcal{P}_{i}). \tag{36}\]
Moreover \(F_{i+1/2}^{*}\) is the ACAT2 second-order numerical flux given by (28), and \(F_{i+1/2}^{p_{\max}}\) is the ACAT2\(p_{\max}\) numerical flux defined in (25). If \(\mathcal{P}_{i}\neq\emptyset\), the order of the flux \(F_{i+1/2}^{p_{\max}}\) can range from \(4\) to \(2P\). Throughout the paper we clip \(\psi\) to \(1\) if \(\psi\geq 0.95\).
**Remark 4**: _Notice that in (36) index \(p\) starts from \(2\) since it is not possible to determine the smoothness of the data in the two-point stencil \(\mathcal{S}_{i+1/2}^{1}\)._
### Extension to 2D
In this section we focus on the extension of CAT methods to non-linear two-dimensional systems of hyperbolic conservation laws
\[u_{t}+f(u)_{x}+g(u)_{y}=0. \tag{37}\]
The following multi-index notation will be used:
\[\mathbf{i}=(i_{1},i_{2})\in\mathbb{Z}\times\mathbb{Z},\]
and
\[\mathbf{0}=(0,0),\quad\mathbf{1}=(1,1),\quad\frac{\mathbf{1}}{\mathbf{2}}=( \frac{1}{2},\frac{1}{2}),\quad\mathbf{e}_{1}=(1,0),\quad\mathbf{e}_{2}=(0,1).\]
We consider Cartesian meshes with nodes
\[\mathbf{x_{i}}=(i_{1}\Delta x,i_{2}\Delta y).\]
Using this notation, the general form of the CAT2\(P\) method will be as follows:
\[u_{\mathbf{i}}^{n+1}=u_{\mathbf{i}}^{n}+\frac{\Delta t}{\Delta x}\left[F_{ \mathbf{i}-\frac{1}{2}\mathbf{e}_{1}}^{P}-F_{\mathbf{i}+\frac{1}{2}\mathbf{e }_{1}}^{P}\right]+\frac{\Delta t}{\Delta y}\left[G_{\mathbf{i}-\frac{1}{2} \mathbf{e}_{2}}^{P}-G_{\mathbf{i}+\frac{1}{2}\mathbf{e}_{2}}^{P}\right], \tag{38}\]
where the numerical fluxes \(F_{\mathbf{i}+\frac{1}{2}\mathbf{e}_{1}}^{P}\), \(G_{\mathbf{i}+\frac{1}{2}\mathbf{e}_{2}}^{P}\) will be computed using the values of the numerical solution \(U_{\mathbf{i}}^{n}\) in the \((2P)^{2}\)-point stencil centered at \(\mathbf{x_{i+\frac{1}{2}}}=((i_{1}+\frac{1}{2})\Delta x,(i_{2}+\frac{1}{2})\Delta y)\)
\[S_{\mathbf{i}+\frac{1}{2}}^{P}=\{\mathbf{x_{i+j}},\quad\mathbf{j}\in \mathcal{I}_{P}\},\]
where
\[\mathcal{I}_{P}=\{\mathbf{j}=(j_{1},j_{2})\in\mathbb{Z}\times\mathbb{Z},\quad- P+1\leq j_{k}\leq P,\quad k=1,2\}.\]
#### 3.4.1 2D CAT2
In order to show the extension of CAT2\(P\) procedure let us start with the expression of the CAT2. The numerical fluxes are constructed as follows:
\[F_{\mathbf{i}+\frac{1}{2}\mathbf{e}_{1}}^{1}= \frac{1}{4}\left(f_{\mathbf{i},\mathbf{0}}^{1,n+1}+f_{\mathbf{i}, \mathbf{e}_{1}}^{1,n+1}+f_{\mathbf{i}}^{n}+f_{\mathbf{i}+\mathbf{e}_{1}}^{n} \right), \tag{39}\] \[G_{\mathbf{i}+\frac{1}{2}\mathbf{e}_{2}}^{1}= \frac{1}{4}\left(g_{\mathbf{i},\mathbf{0}}^{1,n+1}+g_{\mathbf{i}, \mathbf{e}_{2}}^{1,n+1}+g_{\mathbf{i}}^{n}+g_{\mathbf{i}+\mathbf{e}_{2}}^{n} \right), \tag{40}\]
where
\[f_{\mathbf{i},\mathbf{j}}^{1,n+1}=f\left(u_{\mathbf{i}+\mathbf{j }}^{n}+\Delta t\,u_{\mathbf{i},\mathbf{j}}^{(1)}\right),\] \[g_{\mathbf{i},\mathbf{j}}^{1,n+1}=g\left(u_{\mathbf{i}+\mathbf{j }}^{n}+\Delta t\,u_{\mathbf{i},\mathbf{j}}^{(1)}\right),\]
for \(\mathbf{j}=\mathbf{0},\mathbf{e}_{1}\) in the \(x\) direction, and \(\mathbf{j}=\mathbf{0},\mathbf{e}_{2}\) in the \(y\) direction.
**Remark 5**.: _In contrast with the 1D reconstruction, the first time derivative of \(u\), \(u^{(1)}_{\mathbf{i},\mathbf{j}}\), does not coincide at the different 2D grid points. Indeed, observe that \(u^{(1)}_{\mathbf{i},\mathbf{0}}\neq u^{(1)}_{\mathbf{i},\mathbf{e}_{1}}\) and \(u^{(1)}_{\mathbf{i},\mathbf{0}}\neq u^{(1)}_{\mathbf{i},\mathbf{e}_{2}}\)._
_Note that, in the 1D case, \(u^{(1)}_{i,0}=u^{(1)}_{i,1}\)._
Hence, the first time derivatives \(u^{(1)}_{\mathbf{i,j}}\) are so defined:
\[u^{(1)}_{\mathbf{i,0}} =-\frac{1}{\Delta x}\left(f^{n}_{\mathbf{i+e_{1}}}-f^{n}_{\mathbf{ i}}\right)-\frac{1}{\Delta y}\left(g^{n}_{\mathbf{i+e_{2}}}-g^{n}_{\mathbf{i}} \right),\] \[u^{(1)}_{\mathbf{i,e_{1}}} =-\frac{1}{\Delta x}\left(f^{n}_{\mathbf{i+e_{1}}}-f^{n}_{\mathbf{ i}}\right)-\frac{1}{\Delta y}\left(g^{n}_{\mathbf{i+1}}-g^{n}_{\mathbf{i+e_{1}}} \right),\] \[u^{(1)}_{\mathbf{i,e_{2}}} =-\frac{1}{\Delta x}\left(f^{n}_{\mathbf{i+1}}-f^{n}_{\mathbf{i+ e_{2}}}\right)-\frac{1}{\Delta y}\left(g^{n}_{\mathbf{i+e_{2}}}-g^{n}_{\mathbf{i}} \right),\]
where
\[f^{n}_{\mathbf{i+j}}=f(u^{n}_{\mathbf{i+j}}),\quad g^{n}_{\mathbf{i+j}}=g(u^{ n}_{\mathbf{i+j}}),\quad\forall\mathbf{j}.\]
Finally, the 2D CAT2 method is so defined:
\[u^{n+1}_{\mathbf{i}}=u^{n}_{\mathbf{i}}+\frac{\Delta t}{\Delta x}\left[F^{1}_ {\mathbf{i-\frac{1}{2}}\mathbf{e_{1}}}-F^{1}_{\mathbf{i+\frac{1}{2}}\mathbf{ e_{1}}}\right]+\frac{\Delta t}{\Delta y}\left[G^{1}_{\mathbf{i-\frac{1}{2}}\mathbf{e_{2}}}-G^ {1}_{\mathbf{i+\frac{1}{2}}\mathbf{e_{2}}}\right], \tag{41}\]
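A compact Python/NumPy transcription of (39)-(41) on a doubly periodic grid is given below (an illustration of ours; axis 0 of the arrays corresponds to the \(x\)/\(i_{1}\) direction and axis 1 to the \(y\)/\(i_{2}\) direction).

```python
import numpy as np

def cat2_2d_step(u, dx, dy, dt, f, g):
    """One 2D CAT2 step (41) for u_t + f(u)_x + g(u)_y = 0 on a periodic grid."""
    fn, gn = f(u), g(u)
    fx = (np.roll(fn, -1, axis=0) - fn) / dx         # (f_{i+e1} - f_i)/dx
    gy = (np.roll(gn, -1, axis=1) - gn) / dy         # (g_{i+e2} - g_i)/dy
    u1_0  = -fx - gy                                  # u^(1)_{i,0}
    u1_e1 = -fx - np.roll(gy, -1, axis=0)             # u^(1)_{i,e1}
    u1_e2 = -np.roll(fx, -1, axis=1) - gy             # u^(1)_{i,e2}
    # interface fluxes (39)-(40)
    F = 0.25 * (f(u + dt * u1_0) + f(np.roll(u, -1, axis=0) + dt * u1_e1)
                + fn + np.roll(fn, -1, axis=0))
    G = 0.25 * (g(u + dt * u1_0) + g(np.roll(u, -1, axis=1) + dt * u1_e2)
                + gn + np.roll(gn, -1, axis=1))
    return (u + dt / dx * (np.roll(F, 1, axis=0) - F)
              + dt / dy * (np.roll(G, 1, axis=1) - G))
```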
#### 3.4.2 2D CAT2P
The high order CAT2\(P\) fluxes in 2D are computed by the following iterative procedure:
1. Define \[f^{(0)}_{\mathbf{i,j}}=f^{n}_{\mathbf{i+j}},\quad g^{(0)}_{\mathbf{i,j}}=g^{n} _{\mathbf{i+j}},\quad\mathbf{j}\in\mathcal{I}_{P}.\]
2. For \(k=2,\ldots,2P\): 1. Compute \[u^{(k-1)}_{\mathbf{i},\mathbf{j}}=-A^{1,j_{1}}_{P}(f^{(k-2)}_{\mathbf{i},(*,j_{2})},\Delta x)-A^{1,j_{2}}_{P}(g^{(k-2)}_{\mathbf{i},(j_{1},*)},\Delta y),\quad\mathbf{j}\in\mathcal{I}_{P}.\] 2. Compute \[f^{k-1,n+r}_{\mathbf{i},\mathbf{j}}=f\left(u^{n}_{\mathbf{i}+\mathbf{j}}+\sum_{l=1}^{k-1}\frac{(r\Delta t)^{l}}{l!}u^{(l)}_{\mathbf{i},\mathbf{j}}\right),\quad\mathbf{j}\in\mathcal{I}_{P},\;\;r=-P+1,\ldots,P,\] and analogously \(g^{k-1,n+r}_{\mathbf{i},\mathbf{j}}=g\left(u^{n}_{\mathbf{i}+\mathbf{j}}+\sum_{l=1}^{k-1}\frac{(r\Delta t)^{l}}{l!}u^{(l)}_{\mathbf{i},\mathbf{j}}\right)\). 3. Compute \[f^{(k-1)}_{\mathbf{i},\mathbf{j}}=A^{k-1,0}_{P}(f^{k-1,*}_{\mathbf{i},\mathbf{j}},\Delta t),\quad g^{(k-1)}_{\mathbf{i},\mathbf{j}}=A^{k-1,0}_{P}(g^{k-1,*}_{\mathbf{i},\mathbf{j}},\Delta t),\quad\mathbf{j}\in\mathcal{I}_{P}.\]
3. Compute \[F^{P}_{\mathbf{i}+\frac{1}{2}\mathbf{e}_{1}}=\sum_{k=1}^{2P}\frac{\Delta t^{k-1}}{k!}A^{0,1/2}_{P}(\tilde{f}^{(k-1)}_{\mathbf{i},(*,0)},\Delta x), \tag{42}\] \[G^{P}_{\mathbf{i}+\frac{1}{2}\mathbf{e}_{2}}=\sum_{k=1}^{2P}\frac{\Delta t^{k-1}}{k!}A^{0,1/2}_{P}(\tilde{g}^{(k-1)}_{\mathbf{i},(0,*)},\Delta y). \tag{43}\]
The notation used for the approximation of the spatial partial derivatives is the following:
\[A^{k,q}_{P}(f_{\mathbf{i},(*,j_{2})},\Delta x) = \frac{1}{\Delta x^{k}}\sum_{l=-P+1}^{P}\gamma^{k,q}_{P,l}f_{\mathbf{i},(l,j_{2})}\] \[A^{k,q}_{P}(g_{\mathbf{i},(j_{1},*)},\Delta y) = \frac{1}{\Delta y^{k}}\sum_{l=-P+1}^{P}\gamma^{k,q}_{P,l}g_{\mathbf{i},(j_{1},l)}\]
**Remark 6**.: _In the last step of the algorithm above the set \(\mathcal{I}_{P}\) can be replaced by its \((2P-1)\)-point subset_
\[\mathcal{I}_{P}^{0}=\{\mathbf{j}=(j_{1},j_{2})\ \ \text{such that}\ \ j_{1}=0\ \text{or}\ j_{2}=0\}\]
_since only the corresponding values of \(\tilde{f}_{\mathbf{i},\mathbf{j}}^{(k-1)}\) are used to compute the numerical fluxes (42) and (43)._
### Discussion
The ACAT numerical method presented in the previous sections is exhaustively described in [2; 1], where numerical results are provided. The extension to systems of conservation laws is done component-by-component, and the extension to multi-dimensions relies on a direction-by-direction splitting. However, several defects in the design of such an approach can be pointed out. First of all, the impact of the _a priori_ smoothness indicators is critical. Indeed, they must effectively detect the presence of existing discontinuities, but also the occurrence of new ones. While the former is achievable with _a priori_ limiters, the latter is usually more difficult. Moreover, the smoothness indicators can detect a discontinuity only if the mesh is fine enough; otherwise it is hard to distinguish between a discontinuity and a smooth region with a large gradient. Secondly, the cost of the smoothness indicators drastically increases with \(P\), and it becomes increasingly difficult to determine the smoothness of a numerical solution when higher and higher orders of accuracy are required. Thirdly, just like any _a priori_ limiter, the current one has a natural tendency to over-estimate and over-react to possible spurious troubles. As a consequence, the nominal order of accuracy on smooth solutions is not always achieved, unless the grid is very fine.
These facts have restricted the effective use of ACAT schemes to orders not greater than \(2P=6\). Moreover, the cost of the current version of ACAT2\(P\) constitutes a genuine limitation in 2D.
In this work we present an alternative way to couple CAT schemes with an _a posteriori_ limiting technique, called MOOD, which removes some of the previously described defects.
## 4 CAT-MOOD
The main objective of this work is to combine the _a posteriori_ shock capturing technique (MOOD) [5] with this family of one-step, high order space-time reconstructions, the CAT schemes [18].
The MOOD algorithm evaluates _a posteriori_ the solution of the high-order numerical method using a class of criteria that detect a variety of oscillations, even very small ones. It is therefore possible to ensure that the numerical solution preserves some properties of the exact solutions of the PDE system, such as, for example, positivity, monotonicity and increase of the physical entropy, even in complex cases. In addition, this procedure allows one to reduce or even eliminate the numerical oscillations introduced by high order methods in the presence of shocks or large gradients, as is the case for the CAT methods.
The basic idea of the MOOD procedure is to apply a high-order method over the entire domain for one time step, and then to check locally, for each cell \(i\), the behavior of the solution via admissibility criteria. If the solution computed in cell \(i\) at time \(t^{n+1}\) is in accordance with the criteria considered, it is kept; otherwise, it is recomputed with a numerical method of lower order than the previous one. This operation is repeated until the solution is accepted or the (last) first order scheme has been used, the latter case occurring when the admissibility criteria fail for all previous reconstructions.
Therefore, the object of this work is to design a cascade of CAT methods in which the order is locally adapted according to _a posteriori_ admissibility criteria, thus creating a new family of adaptive CAT methods called CAT-MOOD schemes.
### MOOD admissibility criteria
In this work we select 3 different admissibility criteria [6; 7] which are applied to the candidate numerical solution \(\left\{u_{i}^{n+1}\right\}_{1\leq i\leq I}\):
1. _Computer Admissible Detector (CAD)_: This criterion is responsible for detecting undefined or unrepresentable quantities, usually not-a-number (NaN) or infinity values due, for instance, to a division by zero.
2. _Physical Admissible Detector (PAD)_: The second detector is responsible for ensuring the physical validity of the candidate solution. The detector reacts to every negative pressure \(p\) or density \(\rho\) in the computational domain, in compliance with (5), since otherwise the solution would create non-physical sound speeds, imaginary time steps and so on. The physicality here is assessed from the point of view of a fluid flow, which limits the generality of the criteria as far as predicted pressures are concerned. Said differently, this physical admissibility criterion must be adapted to the model of PDEs being solved.
3. _Numerical Admissible Detector (NAD)_: This criterion corresponds to a relaxed variant of the Discrete Maximum Principle (see [4; 5]) \[\min_{c\in\mathcal{C}_{i}^{P}}(w_{c}^{n})-\delta_{i}\leq w_{i}^{*}\leq\max_{c\in\mathcal{C}_{i}^{P}}(w_{c}^{n})+\delta_{i},\] where \(\mathcal{C}_{i}^{P}=\{-P,\ldots,P\}\) is the local centered stencil of order \(2P\) and \(\delta_{i}\) is a relaxation term to avoid problems in flat regions. \(w_{i}^{*}\) is the numerical solution obtained with the scheme of order \(2P\) and \(\delta_{i}\) is set as: \[\delta_{i}=\max\Big{(}\varepsilon_{1},\varepsilon_{2}\left[\max_{c\in\mathcal{C}_{i}^{P}}w_{c}^{n}-\min_{c\in\mathcal{C}_{i}^{P}}w_{c}^{n}\right]\Big{)}. \tag{44}\] Here \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are small dimensional constants. This criterion is responsible for guaranteeing the essentially non oscillatory (ENO) character of the solution; that is, no large spurious minima or maxima are introduced locally in the solution3. Footnote 3: In the numerical tests of Sec. 5 we compute the relaxed discrete maximum principle only for the density \(\rho\) and the pressure \(p\). No limiting on the velocity components \(u\), \(v\) has been considered.
If a NaN is detected by CAD in the candidate solution \(w_{i}^{*}\), the cell is sent back for recomputation right away. Next, if the candidate solution has some positivity issues, then the PAD test is not passed and the cell is also invalid. Finally, the candidate solution is tested against the NAD criterion to detect possible numerical oscillations: if spurious oscillations have contaminated the candidate solution \(w_{i}^{*}\), it fails NAD. The candidate solution is thus recomputed if at least one of the previous criteria, ordered into a chain (see figure 2-left), has failed. As a consequence, the MOOD loop drives the code to locally downgrade the order of accuracy by using an auxiliary scheme of lower accuracy.
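The chain of detectors can be coded compactly, as in the following Python/NumPy sketch (ours; the candidate and previous solutions are assumed to be given as dictionaries of 1D periodic arrays of the density and pressure, in line with the footnote above, and the same logic extends verbatim to 2D). The default values of \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are those used in Section 5.

```python
import numpy as np

def mood_valid(cand, prev, P, eps1=1e-4, eps2=1e-3):
    """Boolean mask of cells whose candidate solution passes CAD, PAD and NAD."""
    valid = np.ones_like(prev["rho"], dtype=bool)
    for key in ("rho", "p"):
        w_new, w_old = cand[key], prev[key]
        # CAD: no NaN / infinity
        valid &= np.isfinite(w_new)
        # PAD: density and pressure must stay positive
        valid &= w_new > 0.0
        # NAD: relaxed discrete maximum principle on the centered stencil {-P, ..., P}
        stacked = np.stack([np.roll(w_old, -s) for s in range(-P, P + 1)])
        lo, hi = stacked.min(axis=0), stacked.max(axis=0)
        delta = np.maximum(eps1, eps2 * (hi - lo))
        valid &= (w_new >= lo - delta) & (w_new <= hi + delta)
    return valid
```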
### CAT scheme with MOOD limiting
In this work we target a maximal 6th order of accuracy on the part of the domain where the solution is smooth. This can be achieved with the CAT6 scheme, which we would like to employ as often as possible. On the contrary, for cells presenting a discontinuous solution we rely on a 1st order, low accuracy but robust scheme, for instance using Rusanov or HLL fluxes, that we employ only when and where necessary. In between these two extremes, the CAT4 scheme of 4th order of accuracy is inserted and tried when CAT6 fails. If the CAT4 scheme also fails,
then the CAT2 scheme (2nd order of accuracy) is further tried. As such we build several cascades of schemes of decreasing order but possibly increasing robustness, see figure 2-right. Notice that, apart from CAT6 and the 1st order parachute scheme, the user can decide whether the intermediate schemes (CAT4 and CAT2 here) should be included or not. In the numerical results of section 5 only the CAT2 intermediate scheme is employed, to spare computing resources. The resulting scheme is referred to as CATMOOD6.
### CATMOOD algorithm
Practically, for the time-step \([t^{n},t^{n+1}]\) and for each cell \(i\), we define a 'mask', \(M_{i}^{n}\in\{-1,0,1\}\), such that
\[M_{i}^{n}=\begin{cases}1&\text{if $u_{i}^{*}$ fails at least one criterion,}\\ -1&\text{if $\exists j\in\mathcal{N}_{i}$ s.t $M_{j}^{n}=1$,}\\ 0&\text{otherwise.}\end{cases} \tag{45}\]
where \(\mathcal{N}_{i}\) is the set of direct neighbor cells of cell \(\omega_{i}\). \(M_{i}^{n}=-1\) means that cell \(i\) is the neighbor of an invalid cell.
The algorithm designed for CATMOOD6 scheme is:
1. Let \(\{u_{i}^{n}\}_{1\leq i\leq N_{c}}\) be the numerical solution at \(t=t^{n}\) over the whole domain \(\Omega\).
2. Let \(\{u_{i}^{*}\}_{1\leq i\leq N_{c}}\) be the order-6 candidate solution at \(t=t^{n+1}\) obtained with the CAT6 scheme. Set \(M_{i}^{n}=0\). For each cell \(i\), check whether \(u_{i}^{*}\) satisfies all detection criteria. If so, set \(u_{i}^{n+1}=u_{i}^{*}\) and keep \(M_{i}^{n}=0\). Otherwise set \(M_{i}^{n}=1\) and, if \(M_{j}^{n}=0\), set \(M_{j}^{n}=-1\) for all \(j\in\mathcal{N}_{i}\).
3. Only for the troubled cells \(i\), i.e. \(M_{i}^{n}\neq 0\), recompute the candidate solution \(u_{i}^{*}\) at time \(t=t^{n+1}\) with the 4th-order CAT4 scheme. Check whether \(u_{i}^{*}\) satisfies all detection criteria. If so, set \(u_{i}^{n+1}=u_{i}^{*}\) and \(M_{i}^{n}=0\). Otherwise set \(M_{i}^{n}=1\) and, if \(M_{j}^{n}=0\), set \(M_{j}^{n}=-1\) for all \(j\in\mathcal{N}_{i}\).
4. Only for the troubled cells \(i\), i.e. \(M_{i}^{n}\neq 0\), recompute the candidate solution \(u_{i}^{*}\) at time \(t=t^{n+1}\) with the 2nd-order CAT2 scheme. Check whether \(u_{i}^{*}\) satisfies all detection criteria. If so, set \(u_{i}^{n+1}=u_{i}^{*}\) and \(M_{i}^{n}=0\). Otherwise set \(M_{i}^{n}=1\) and, if \(M_{j}^{n}=0\), set \(M_{j}^{n}=-1\) for all \(j\in\mathcal{N}_{i}\).
5. Only for the remaining troubled cells \(i\), i.e. \(M_{i}^{n}\neq 0\), recompute the candidate solution \(u_{i}^{*}\) at time \(t=t^{n+1}\) with a robust first order scheme, and set \(u_{i}^{n+1}=u_{i}^{*}\).
For efficiency purposes we may remove the CAT4 step in the cascade.
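A simplified Python sketch of the resulting time step is given below (ours; for brevity each member of the cascade recomputes a full candidate solution and the neighbor flag \(M^{n}_{i}=-1\) is not reproduced, whereas an actual implementation only recomputes the troubled cells and their neighbors).

```python
import numpy as np

def catmood_step(u, cascade, is_valid):
    """One CATMOOD step.

    `cascade` is the list of update functions, e.g. [cat6_step, cat2_step, first_order_step],
    each mapping the solution u^n (array of shape (ncells, nvars)) to a candidate u^{n+1};
    `is_valid(cand, u)` returns one boolean per cell (CAD + PAD + NAD).
    """
    troubled = np.ones(u.shape[0], dtype=bool)      # every cell still to be accepted
    u_new = np.empty_like(u)
    for k, step in enumerate(cascade):
        cand = step(u)
        ok = is_valid(cand, u)
        if k == len(cascade) - 1:                   # parachute scheme: always accepted
            ok = np.ones_like(ok, dtype=bool)
        accept = troubled & ok
        u_new[accept] = cand[accept]
        troubled &= ~ok
        if not troubled.any():
            break
    return u_new
```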
Figure 2: Left: Detection criteria of the MOOD procedure for a candidate solution \(u_{i}^{*}\): _Computer Admissible Detector (CAD)_, _Physical Admissible Detector (PAD)_ and _Numerical Admissible Detector (NAD)_ — Right: Cascades of CAT schemes used in the MOOD procedure, starting from the most accurate one, CAT6, downgrading to lower order schemes, and, at last, to a 1st order accurate scheme employed to ensure robustness.
### Complexity, cost, convergence, implementation
The unavoidable extra-cost when using a MOOD procedure is the detection of troubled cells with the admissible criteria from section 4.1. However the detection criteria are particularly inexpensive to compute compared to smoothness indicators or sensors used for instance in ACAT scheme. Usually these admissibility criteria have a negligible cost.
The MOOD limiting procedure always converges because the number of cells and schemes are finite. In the worst case scenario the entire solution is computed successively with all CAT2P schemes up to the 1st order accurate solution. In the best case scenario the solution from the unlimited CAT6 scheme is accepted without any correction. Any situation in-between is possible, and generally only few cells need to be recomputed.
The detection criteria are fundamental, they must be designed to ensure that, if the mesh is fine enough, a smooth solution computed by CAT6 scheme does not produce any troubled cell. They must also ensure that, in the vicinity of strong discontinuity, the robust 1st order scheme is regularly employed to avoid spurious oscillations.
Concerning the complexity of CATMOOD, it becomes impossible to estimate the cost _a priori_ because the limiting adapts to the underlying flow and to the computed numerical solution. In the best case scenario mentioned above, the cost of CATMOOD6 is that of CAT6; in the worst, the cost of CATMOOD6 is the sum of the costs of all schemes in the cascade. Generally one observes that the number of troubled cells is of the order of \(0-20\%\) of the total number of cells, which makes the MOOD procedure genuinely competitive compared to existing _a priori_ limiters, see [6; 17] and the numerical section in this paper.
### CATMOOD vs LATMOOD
In the context of the LAT methods proposed by D. Zorio et al. [30], it is noteworthy that the number of operations per time step is significantly reduced compared to CAT2\(P\) methods. This reduction is achieved by performing the Taylor expansions only once per point, while CAT2\(P\) requires \(2P\) expansions per point. However, it is important to acknowledge that CAT2\(P\) exhibits superior stability properties compared to LAT.
Consequently, an intriguing question arises: can LATMOOD yield a more efficient approach than CATMOOD? To explore this question, it is important to consider two crucial points:
1. The local computation of numerical fluxes makes CAT more suitable for implementing MOOD. Indeed, unlike CAT schemes, LAT methods compute temporal derivatives in a global manner, which contradicts the local approach of MOOD. Specifically, in CAT methods, once the solution \(u^{*}\) has been computed, if any of the detectors fail, it is sufficient to recompute all local time derivatives of the fluxes with a lower order of accuracy, without causing any issues in the MOOD approach. Contrarily in the case of LAT, since all the approximations are computed in a non local sense (see Remark 2) and since LAT does not use a compact stencil, \(4P\) computations have to be updated for each bad cell and approximation \(u_{i+p}^{(1)}\), \(u_{i+p}^{1,n+1}\), etc., making the scheme computationally inefficient.
2. From our preliminary tests we notice that the proportion of low order cells substantially increases when LAT is employed instead of CAT.
## 5 Numerical test cases
In this paper only numerical tests related to the two-dimensional Euler equations are considered. Obviously, CATMOOD schemes could be applied to any systems of conservation laws without any restriction. Our methodology of testing relies on several classical and demanding test cases:
1. _Isentropic vortex in motion_. This test measures the ability of the MOOD procedure combined with CAT schemes to achieve the optimal high order for a smooth solution. We also compared CATMOOD6 with unlimited CAT schemes, limited ACAT ones and some first order scheme using Rusanov, HLL and HLLC fluxes.
2. _Sedov Blast wave._ This problem has an exact solution presenting a single cylindrical shock wave followed by an exponential decay [23]. This test is used to check the behaviour of the CATMOOD6 scheme against shocks and to compare CATMOOD6 versus ACAT6.
3. _2D Riemann problems_. We simulate four versions of the four-state 2D Riemann problems [22]. These problems present large smooth regions, unstable shear layers, unaligned contact discontinuities and shock waves, along with complex interaction patches for which no exact solution has yet been derived.
4. _Astrophysical jet_. This intense and demanding test case challenges the positivity and robustness of the CATMOOD6 scheme as it presents an extremely violent jet at Mach 2000 generating a bow shock and unstable shear layers.
The time step is chosen according to
\[\Delta t^{n}=\text{CFL min}\left(\frac{\Delta x}{\lambda_{\max_{x}}^{n}},\frac {\Delta y}{\lambda_{\max_{y}}^{n}}\right),\]
where \(\lambda_{\max_{x}}^{n}\) and \(\lambda_{\max_{y}}^{n}\) represent the maximum, over the cells, of the spectral radius of the Jacobian matrices \(\partial\mathbf{F}/\partial u\) and \(\partial\mathbf{G}/\partial u\). Empirically we found that the CFL number should be less than or equal to \(0.5\). In our calculations we choose CFL\(=0.4\). In all the numerical examples below (except the one in Section 5.4), the units are chosen in such a way that the order of magnitude of all field quantities is one; therefore we always use the same values for the constants appearing in (44), i.e. \(\varepsilon_{1}=10^{-4}\) and \(\varepsilon_{2}=10^{-3}\), while the value \(\varepsilon=10^{-8}\) has been used in (30).
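For the 2D Euler equations the spectral radii are \(|u|+c\) and \(|v|+c\), with \(c=\sqrt{\gamma p/\rho}\) the sound speed, so the time step can be evaluated as in the short sketch below (an illustration of ours).

```python
import numpy as np

def euler_time_step(rho, u, v, p, dx, dy, gamma=1.4, cfl=0.4):
    """dt = CFL * min(dx / max(|u|+c), dy / max(|v|+c)) over all cells."""
    c = np.sqrt(gamma * p / rho)          # sound speed
    lam_x = np.max(np.abs(u) + c)         # spectral radius of dF/du
    lam_y = np.max(np.abs(v) + c)         # spectral radius of dG/du
    return cfl * min(dx / lam_x, dy / lam_y)
```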
### Isentropic vortex in motion
The isentropic vortex problem [24] challenges the accuracy of numerical methods since an exact, smooth and analytic solution exists. The computational domain is set to \(\Omega=[-10,10]\times[-10,10]\). The ambient flow is characterized by \(\rho_{\infty}=1.0\), \(u_{\infty}=1.0\), \(v_{\infty}=1.0\) and \(p_{\infty}=1.0\), with a normalized ambient temperature \(T_{\infty}^{*}=1.0\). At the initial time, \(t=0\), onto this ambient flow is superimposed a vortex centered at \((0,0)\) with the following state: \(u=u_{\infty}+\delta u\), \(v=v_{\infty}+\delta v\), \(T^{*}=T_{\infty}^{*}+\delta T^{*}\), where the increments are given by
\[\delta u=-y^{\prime}\frac{\beta}{2\pi}\exp\left(\frac{1-r^{2}}{2}\right),\quad \delta v=x^{\prime}\frac{\beta}{2\pi}\exp\left(\frac{1-r^{2}}{2}\right),\quad \delta T=-\frac{(\gamma-1)\beta^{2}}{8\gamma\pi^{2}}\exp\left(1-r^{2}\right),\]
with \(r=\sqrt{x^{2}+y^{2}}\). The so-called strength of the vortex is set to \(\beta=5.0\) and the initial density is given by \(\rho=\rho_{\infty}\left(T/T_{\infty}\right)^{\frac{1}{\gamma-1}}\). Periodic boundary conditions are prescribed. At final time \(t=t_{\text{final}}=20\) the vortex is back to its original position, and, the final exact solution matches the initial one. Since the solution is smooth, it should be simulated with optimal high accuracy, in other words, the limiting/stabilization procedure employed in the scheme should not have any effect.
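The initial state can be generated as in the following Python/NumPy sketch (ours; the cell-centered sampling of the domain is an assumption of this illustration, and the pressure follows from the normalization \(p_{\infty}=\rho_{\infty}T_{\infty}^{*}=1\) stated above, i.e. \(p=\rho T\)).

```python
import numpy as np

def isentropic_vortex(N, beta=5.0, gamma=1.4):
    """Initial (rho, u, v, p) of Section 5.1 on an N x N grid of [-10, 10]^2."""
    h = 20.0 / N
    s = -10.0 + h * (np.arange(N) + 0.5)           # cell centers
    x, y = np.meshgrid(s, s, indexing="ij")
    r2 = x**2 + y**2
    amp = beta / (2.0 * np.pi) * np.exp(0.5 * (1.0 - r2))
    du, dv = -y * amp, x * amp
    dT = -(gamma - 1.0) * beta**2 / (8.0 * gamma * np.pi**2) * np.exp(1.0 - r2)
    T = 1.0 + dT                                    # T_inf = 1
    rho = T ** (1.0 / (gamma - 1.0))                # rho_inf = 1
    p = rho * T                                     # p_inf = rho_inf * T_inf = 1
    return rho, 1.0 + du, 1.0 + dv, p
```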
We run the isentropic vortex test case with the first order Rusanov, HLL and HLLC schemes and the sixth order CATMOOD method on successively refined Cartesian meshes going from \(50\times 50\) up to \(400\times 400\) cells. The density at final time is plotted in figure 3-left. The results of the convergence analysis for all the schemes are displayed in table 1 and in figure 3-right. The expected rate of convergence is reached for the CAT schemes, while for the first-order schemes the convergence is below the expected order. Ideally the limited 6th-order ACAT6 scheme should produce the same errors as the ones
given by the CAT6 scheme, but the ACAT6 errors are two orders of magnitude greater. On the contrary, starting from the \(200\times 200\) mesh, the CATMOOD6 errors match the optimal ones from CAT6 and the nominal 6th order is retrieved. This proves that the _a posteriori_ MOOD limiting avoids spurious intervention
\begin{table}
\begin{tabular}{|c||c c|c c|c c|c c|} \hline \multicolumn{8}{|c|}{**2D Isentropic Vortex in motion - Rate of convergence**} \\ \hline \hline & \multicolumn{2}{c|}{**Rusanov-flux**} & \multicolumn{2}{c|}{**HLL**} & \multicolumn{2}{c|}{**HLLC**} & \multicolumn{2}{c|}{**CATMOOD6**} \\ \(N\) & \(L^{1}\) error & order & \(L^{1}\) error & order & \(L^{1}\) error & order & \(L^{1}\) error & order \\ \hline
50 \(\times\) 50 & 8.44\(\times 10^{-3}\) & — & 8.44\(\times 10^{-3}\) & — & 7.91\(\times 10^{-3}\) & - & 8.48\(\times 10^{-3}\) & — \\
100 \(\times\) 100 & 8.04\(\times 10^{-3}\) & 0.07 & 8.04\(\times 10^{-3}\) & 0.07 & 6.86\(\times 10^{-3}\) & 0.21 & 3.77\(\times 10^{-3}\) & 1.17 \\
200 \(\times\) 200 & 6.68\(\times 10^{-3}\) & 0.27 & 6.67\(\times 10^{-3}\) & 0.27 & 5.31\(\times 10^{-3}\) & 0.37 & 2.40\(\times 10^{-7}\) & 13.94 \\
300 \(\times\) 300 & 5.71\(\times 10^{-3}\) & 0.36 & 5.71\(\times 10^{-3}\) & 0.36 & 4.53\(\times 10^{-3}\) & 0.39 & 2.06\(\times 10^{-8}\) & 6.05 \\
400 \(\times\) 400 & 4.98\(\times 10^{-3}\) & 0.47 & 4.98\(\times 10^{-3}\) & 0.47 & 3.86\(\times 10^{-3}\) & 0.55 & 3.52\(\times 10^{-9}\) & 6.14 \\ & Expected & 1 & Expected & 1 & Expected & 1 & Expected & 6 \\ \hline \hline & \multicolumn{2}{c|}{**CAT2**} & \multicolumn{2}{c|}{**CAT4**} & \multicolumn{2}{c|}{**CAT6**} & \multicolumn{2}{c|}{**ACAT6**} \\ \(N\) & \(L^{1}\) error & order & \(L^{1}\) error & order & \(L^{1}\) error & order & \(L^{1}\) error & order \\ \hline
50 \(\times\) 50 & 7.94\(\times 10^{-3}\) & — & 2.03\(\times 10^{-3}\) & — & 8.46\(\times 10^{-4}\) & - & 8.95\(\times 10^{-3}\) & — \\
100 \(\times\) 100 & 2.55\(\times 10^{-3}\) & 1.64 & 1.42\(\times 10^{-4}\) & 3.83 & 1.56\(\times 10^{-5}\) & 5.76 & 8.28\(\times 10^{-3}\) & 0.11 \\
200 \(\times\) 200 & 6.12\(\times 10^{-4}\) & 2.06 & 8.34\(\times 10^{-6}\) & 4.09 & 2.41\(\times 10^{-7}\) & 6.02 & 8.34\(\times 10^{-5}\) & 9.95 \\
300 \(\times\) 300 & 2.69\(\times 10^{-4}\) & 2.02 & 1.64\(\times 10^{-6}\) & 4.02 & 2.09\(\times 10^{-8}\) & 6.03 & 1.05\(\times 10^{-5}\) & 5.14 \\
400 \(\times\) 400 & 1.52\(\times 10^{-4}\) & 1.99 & 5.16\(\times 10^{-7}\) & 4.01 & 3.68\(\times 10^{-9}\) & 6.03 & 2.48\(\times 10^{-6}\) & 4.93 \\ & Expected & 2 & Expected & 4 & Expected & 6 & Expected & 6 \\ \hline \end{tabular}
\end{table}
Table 1: Isentropic vortex in motion \(L^{1}-\)norm errors on density \(\rho\) between the numerical solution and the exact solution of the isentropic vortex in motion problem at \(t_{\rm final}=20\) on uniform Cartesian mesh.
Figure 3: 2D isentropic vortex in motion — Left: Numerical solution at final time for density with CATMOOD6 with HLLC flux scheme used for the first order method on \(100\times 100\) uniform mesh and CFL= 0.4 — Right: Errors in \(L_{1}\) norms given by all schemes.
if the mesh is fine enough. Table 2 presents the computational costs of various methods for the isentropic vortex test case on a uniform grid with \(300\times 300\) cells, CFL\(=0.4\), final time \(t_{\mathrm{final}}=20\), and periodic boundary conditions. The first row displays the CPU computational costs in seconds for the eight methods, while the second row shows the ratio between the computational cost of each scheme and that of the Rusanov method. This table emphasizes the superiority of the _a posteriori_ approach over the _a priori_ one: indeed the ACAT6 scheme appears to be not only less accurate than CATMOOD6 but also more expensive in execution time.
### Sedov blast wave
The 2D Sedov problem [23] is a cylindrically symmetric explosion. The domain is given by \((x,y)\in[-1.2,1.2]^{2}\), initially filled with a perfect gas at rest such that \((\rho^{0},u^{0},v^{0},p^{0},\gamma)=(1,0,0,10^{-13},1.4)\). A total energy of \(E_{total}=0.244816\) is concentrated at the origin [19]. This configuration corresponds to a point-wise symmetric explosion, for which a cylindrical shock front reaches the radius \(r=\sqrt{x^{2}+y^{2}}=1\) at \(t_{\mathrm{final}}=1\) with a density peak of \(\rho=6\), see figure 3(b) for an example of a quasi symmetric numerical solution.
Figure 5 presents scatter plots of the numerical density as a function of the cell radius using \(100\times 100\) (first column) and \(200\times 200\) (second column) uniform cells. The tested schemes are the 1st order scheme, ACAT6 and CATMOOD6, all using the HLL numerical flux. Notice that the unlimited CAT schemes fail to produce a numerical solution due to the presence of a strong shock wave. Because the exact solution has a cylindrical symmetry (see the black line in figure 5), all cells at the same radius should share the same numerical density if the scheme exactly preserves this symmetry. The width of the spread/variance of the numerical data thus measures how well the scheme preserves the symmetry. As is visible in the figures, refining the grid reduces the variance for all methods. Moreover, the 6th order schemes (ACAT6 and CATMOOD6) seem to produce a sharper shock wave and more accurate results.
Figure 4: Sedov blast wave 5.2. Numerical density obtained with CATMOOD6 with HLLC as first order scheme on the interval \([-1.2,1.2]\times[-1.2,1.2]\) adopting a \(200\times 200\) mesh and CFL\(=0.4\) at time \(t=1\). Projection on the plane O-\(xy\) (left); zoom on the interval \([0,1.2]\times[0,1.2]\) (right).
Importantly CATMOOD6 has a better cylindrical symmetry than ACAT6. Of course no positivity issue is reported for any of these schemes.
### 2D Riemann problems
Here, we consider 2D Riemann problems on the square \([-1,1]\times[-1,1]\), where zero Neumann conditions are applied, and four sets of initial conditions given in Table 3 [22]. A \(400\times 400\) mesh is adopted for all schemes and configurations. The final time is set to \(t_{\mathrm{final}}=0.3\) and the CFL to \(0.4\). We run four simulations using CATMOOD6 and the three first order schemes with Rusanov, HLL and HLLC flux functions respectively. The 1st order scheme with Rusanov flux has been adopted as the parachute scheme in the CATMOOD6 cascade. In the 2D figures the schemes are ordered from top-left to bottom-right. In addition, in a different set of figures for the CATMOOD6 scheme, we plot the percentage of cells updated with the CAT6 (top), CAT2 (center) or 1st order Rusanov (bottom) schemes as a function of the time-step.
_Configuration 3_. Figure 6 shows a zoom of the numerical densities for configuration 3 (Table 3). All the schemes capture the same global solution. The first order methods are diffusive, even if the HLLC scheme performs better. On the contrary, the high order CATMOOD6 clearly improves the sharpness of the shear layers and contacts. Figure 7 exhibits the percentage of cells using the CAT6 (top), CAT2 (center) and Rusanov (bottom) schemes. We can observe that on average \(95\%\) of the cells are updated with \(6\)th order of accuracy, and about \(3-4\%\) with \(2\)nd or \(1\)st order. Only few cells demand some limiting and are sent back to \(t^{n}\) for re-computation by the MOOD approach. Consequently, the cost of CATMOOD6 is essentially that of CAT6.
_Configuration 6_. Likewise for configuration 3, we plot the results in Figures 8 and 9. We again observe that the CATMOOD6 results are far sharper than those of the 1st order schemes, with HLLC being the least dissipative of the first-order ones. Notice that the shear layers along the curves
Figure 5: Sedov blast wave from section 5.2 — Scatter plots of the numerical density obtained with a 1st order scheme HLL, and 6th order ACAT6 and CATMOOD6 with HLL flux on the interval \([-1.2,1.2]\times[-1.2,1.2]\). Top: results for \(100\times 100\) uniform cells. Bottom: results for \(200\times 200\) uniform cells. In black is the exact solution.
are Kelvin-Helmholtz unstable, so that the occurrence of small vortices is expected. This is numerical evidence of the low dissipation of CATMOOD6. The percentage of troubled cells is of the order of \(3-4\%\) on average, again showing that the cost of CATMOOD6 is globally the same as that of CAT6 plus \(10\%\) of the cost of a first order scheme.
_Configuration 11_. The numerical solutions for configuration 11 are gathered in Figure 10 and the percentages of troubled cells in Figure 11. It is important to notice that the waves are visibly sharper with the high-order CATMOOD6, as expected. The complex internal pattern is also better captured, avoiding the excessive diffusion observed with any of the 1st order schemes.
\begin{table}
\begin{tabular}{|l l|l l|} \hline \multicolumn{4}{|c|}{**Configuration 3**} \\ \hline \(\rho_{2}=0.5323\) & \(u_{2}=1.206\) & \(\rho_{1}=1.5\) & \(u_{1}=0\) \\ \(v_{2}=0\) & \(p_{2}=0.3\) & \(v_{1}=0\) & \(p_{1}=1.5\) \\ \hline \(\rho_{3}=0.138\) & \(u_{3}=1.206\) & \(\rho_{4}=0.5323\) & \(u_{4}=0\) \\ \(v_{3}=1.206\) & \(p_{3}=0.029\) & \(v_{4}=1.206\) & \(p_{4}=0.3\) \\ \hline \hline \multicolumn{4}{|c|}{**Configuration 6**} \\ \hline \(\rho_{2}=2\) & \(u_{2}=0.75\) & \(\rho_{1}=1.5\) & \(u_{1}=0.75\) \\ \(v_{2}=0.5\) & \(p_{2}=1\) & \(v_{1}=-0.5\) & \(p_{1}=1\) \\ \hline \(\rho_{3}=1\) & \(u_{3}=-0.75\) & \(\rho_{4}=3\) & \(u_{4}=-0.75\) \\ \(v_{3}=0.5\) & \(p_{3}=1\) & \(v_{4}=-0.5\) & \(p_{4}=1\) \\ \hline \hline \multicolumn{4}{|c|}{**Configuration 11**} \\ \hline \(\rho_{2}=0.5313\) & \(u_{2}=0.8276\) & \(\rho_{1}=1\) & \(u_{1}=0.1\) \\ \(v_{2}=0\) & \(p_{2}=0.4\) & \(v_{1}=0\) & \(p_{1}=1\) \\ \hline \(\rho_{3}=0.8\) & \(u_{3}=0.1\) & \(\rho_{4}=0.5313\) & \(u_{4}=0.1\) \\ \(v_{3}=0\) & \(p_{3}=0.4\) & \(v_{4}=0\) & \(p_{4}=0.4\) \\ \hline \hline \multicolumn{4}{|c|}{**Configuration 17**} \\ \hline \(\rho_{2}=2\) & \(u_{2}=0\) & \(\rho_{1}=1\) & \(u_{1}=0\) \\ \(v_{2}=-0.3\) & \(p_{2}=1\) & \(v_{1}=-0.4\) & \(p_{1}=1\) \\ \hline \(\rho_{3}=1.0625\) & \(u_{3}=0\) & \(\rho_{4}=0.5197\) & \(u_{4}=0\) \\ \(v_{3}=0.2145\) & \(p_{3}=0.4\) & \(v_{4}=-1.1259\) & \(p_{4}=0.4\) \\ \hline \end{tabular}
\end{table}
Table 3: 2D Riemann problem initial conditions.
Figure 6: 2D Riemann problem from section 5.3 with initial condition configuration 3. Zoom of the numerical solution for density on the interval \([-1,1]\times[-1,1]\) adopting a mesh of \(400\times 400-\)cells and CFL\(=0.4\). Rusanov-flux (a); HLLC (b); and CATMOOD6 with Rusanov flux for the first order method (c).
_Configuration 17_. Figures 12 and 13 present the numerical results. This test exhibits a small vortex-type structure along with primary and secondary waves emanating from the quadruple point. We can easily observe that CATMOOD6 captures the vortex and the waves, whereas a diffusive scheme would require many more cells to reach this accuracy. Here the HLLC flux is adopted in the first order scheme of the CATMOOD6 cascade. The percentage of troubled cells is of the same order as previously: for CATMOOD6, \(97-98\%\) of the cells are updated with 6th order of accuracy and about \(2\%\) and \(1\%\) with 2nd or 1st order, respectively. The extra cost compared to an unlimited CAT6 scheme is therefore acceptable. Table 4 displays, in its first row, the computational cost expressed in seconds for Configuration 17 for the Rusanov, HLLC, ACAT6, and CATMOOD6 schemes; the second row presents the ratio of the computational costs with respect to the Rusanov scheme. Figure 12 clearly demonstrates the superiority of the adaptive _a posteriori_ approach CAT+MOOD over the _a priori_ technique ACAT. Despite exhibiting
Figure 8: 2D Riemann problem from section 5.3 with initial condition configuration 6. Zoom of the numerical solution for density on the interval \([-1,1]\times[-1,1]\) adopting a mesh of \(400\times 400-\)cells and CFL= 0.4. Rusanov-flux (a); HLLC (b); and CATMOOD6 with HLLC for the first order method (c).
Figure 7: 2D Riemann problem from section 5.3 with initial condition configuration 3. Percentage of cells updated by CAT6 (top), by CAT2 (center), and by Rusanov (bottom).
similar computational costs, the CATMOOD6 numerical solution significantly outperforms the ACAT6 one in terms of accuracy.
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline & **Rusanov** & **HLLC** & **ACAT6** & **CATMOOD6** \\ \hline
**CPU (s)** & 25.28 & 34.11 & 14563.40 & 12871.25 \\ \hline
**Ratio** & 1 & 1.35 & 576.08 & 509.15 \\ \hline \end{tabular}
\end{table}
Table 4: 2D Riemann problem configuration 17. First row: CPU computational costs expressed in seconds. Second row: ratio between the computational costs with respect to the Rusanov scheme.
Figure 10: 2D Riemann problem from section 5.3 with initial condition configuration 11. Zoom of the numerical solution for density on the interval \([-1,1]\times[-1,1]\) adopting a mesh of \(400\times 400-\)cells and CFL= 0.4. Rusanov-flux (a); HLLC (b); and CATMOOD6 with HLLC for the first order method (c).
Figure 9: 2D Riemann problem from section 5.3 with initial condition configuration 6. Percentage of cells updated by CAT6 (top), by CAT2 (center), and by HLLC (bottom).
Figure 11: 2D Riemann problem from section 5.3 with initial condition configuration 11. Percentage of cells updated by CAT6 (top), by CAT2 (center), and by HLLC (bottom).
Figure 12: 2D Riemann problem from section 5.3 with initial condition configuration 17. Zoom of the numerical solution for density on the interval \([-1,1]\times[-1,1]\) adopting a mesh of \(400\times 400-\)cells and CFL= 0.4. Rusanov-flux (a); HLLC (b); ACAT6 (c); and CATMOOD6 (d).
Figure 13: 2D Riemann problem from section 5.3 with initial condition configuration 17. Percentage of cells updated by CAT6 (top), by CAT2 (center), and by HLLC (bottom).
### Mach 2000 astrophysical jet
Let us consider the high Mach number astrophysical jet problem [8; 29]. For this test, posed on the rectangle \([0,1]\times[-0.25,0.25]\), we consider the following initial conditions:
\[(\rho^{0},u^{0},v^{0},p^{0})=\left\{\begin{array}{ll}(5,800,0,0.4127)&\quad \text{if }x=0\,\text{and}\,y\in[-0.05,0.05],\\ (0.5,0,0,0.4127)&\quad\text{otherwise,}\end{array}\right. \tag{46}\]
where \(\gamma=5/3\).
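An illustrative initialization of (46), together with one possible ghost-cell realization of the inflow/outflow boundary conditions described in the next paragraph, is sketched below (ours; the ghost-cell treatment is an implementation assumption, not a prescription of the original test).

```python
import numpy as np

def jet_initial_state(nx=300, ny=150):
    """Ambient state of (46): at t = 0 the jet only touches the left boundary."""
    rho = np.full((nx, ny), 0.5)
    u = np.zeros((nx, ny))
    v = np.zeros((nx, ny))
    p = np.full((nx, ny), 0.4127)
    return rho, u, v, p

def fill_left_ghosts(rho_g, u_g, v_g, p_g, y_ghost):
    """Left ghost column: jet inflow for |y| <= 0.05, outflow (copied state) elsewhere.

    The arrays are assumed to already contain the first interior column
    (zero-gradient outflow default); y_ghost holds the cell-center ordinates.
    """
    jet = np.abs(y_ghost) <= 0.05
    rho_g[jet], u_g[jet], v_g[jet], p_g[jet] = 5.0, 800.0, 0.0, 0.4127
    return rho_g, u_g, v_g, p_g
```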
Initially we have a constant ambient gas at rest, except for a segment of the left boundary of the domain on which the gas is 10 times denser and has a large \(x\)-component of velocity, corresponding to a Mach number of 2000. We then impose inflow boundary conditions on this left segment, and outflow conditions otherwise. As such this simulates the penetration of a dense jet at hyper-velocity from a portion of the left boundary. This jet generates a bow shock ahead of it and a complex structure at its tip. Some reference solutions are reported, for instance, in [8; 29; 26]. Since the Mach number of the jet is extremely high, negative numerical pressures or densities could easily appear during the computation, leading to the crash of the program. The simulations are run with CATMOOD6 and some 1st order schemes. The mesh is made of \(300\times 150\) uniform quadrangular cells, and a CFL equal to 0.4 is adopted for a final time \(t_{\text{final}}=0.001\). Figures 14-15 present the
Figure 14: Mach 2000 from section 5.4. Numerical solution for density on the interval \([0,1]\times[-0.25,0.25]\) adopting a mesh of \(300\times 150-\) quadrangular cells and a CFL= 0.4. The Rusanov flux (left); the HLLC (middle); and the CATMOOD6 with HLLC for the first order method (right).
Figure 15: Mach 2000 from section 5.4. Numerical solution in the logarithm scale for density on the interval \([0,1]\times[-0.25,0.25]\) adopting a mesh of \(300\times 150-\)cells and CFL= 0.4. The Rusanov flux (Left); the HLLC (middle); and the CATMOOD6 with HLLC for the first order method (right).
numerical densities, with a logarithmic scale in the latter. The simulations are performed with CATMOOD6 and with the first order Rusanov and HLLC schemes. The first order scheme for the MOOD cascade uses the HLLC numerical flux. Notice that the ACAT6 scheme fails for this test due to the generation of nonphysical states (negative pressure). Obviously all unlimited CAT2P schemes for \(P\geq 1\) also fail.
The 1st order schemes can capture the bow shock position but the tip and body of the jet are strongly diffused, especially if the HLLC flux is not employed; a better shape is obtained with the HLLC type of flux. On the contrary CATMOOD6 is able to capture the complexity of the jet motion, its tip and the unstable lateral shear layers producing secondary waves and patterns in the post-shock region. These phenomena would be totally absent from low-accuracy simulations at this mesh resolution, hence the need for truly accurate numerical methods. (They are indeed absent from the first order scheme results.) This is even more visible with the logarithmic scale in Figure 15. We would like to emphasize that CATMOOD6 does not have any issue related to positivity, because the parachute scheme is one of the 1st order schemes, which is robust enough. As such the CAT+MOOD coupling is an almost 'fail-safe' strategy. This is not an obvious property of high-order methods using classical _a priori_ limiters.
Figure 16 presents the percentage of troubled cells in CATMOOD6 as a function of the time-step. We plot the cells updated by the schemes in the MOOD cascade, that is with the unlimited CAT6 (top), CAT2 (center) and 1st order HLLC-based (bottom) schemes. This is a more advanced unsteady test case compared to the 2D Riemann problems of section 5.3, for which the solutions were self-similar. Here, the troubled cell evolution presents two phases: first, for about 500 time-steps the number of untroubled cells decreases linearly down to about 85%; secondly, this number stagnates for about 250 time-steps. Interestingly, the number of cells updated with the CAT2 scheme is of the order of 10%, while the truly demanding cells updated with the 1st order scheme represent about 6%. Both fallback schemes are therefore useful within CATMOOD6.
This test case is a single example which validates the CATMOOD6 scheme for extremely demanding simulations. Here, both accuracy and robustness are required.
Figure 16: Mach 2000 from section 5.4 with initial condition (46). Percentage of cells updated by CAT6 (top), by CAT2 (center), and by HLLC (bottom).
## 6 Conclusion and Perspectives
In this paper we have presented an _a posteriori_ way of limiting finite difference CAT2P schemes using the MOOD paradigm. We have focused on CAT2P schemes of even order devoted to solving the Euler equations on 2D Cartesian meshes with a maximal order of accuracy 6. CAT2P schemes are nominally of order \(2P\) on smooth solutions. As for any high order scheme, some extra dissipative mechanism must be supplemented to deal with steep gradients or discontinuous solutions. Originally, CAT2P schemes were coupled with an automatic _a priori_ limiter which blends high- and low-order fluxes as in [2]. However the difficulty with _a priori_ limiters is that they must (i) anticipate the occurrence of possible spurious oscillations from the data at time \(t^{n}\), (ii) tailor the appropriate amount of dissipation to stabilize the scheme, and (iii) ensure that a physically admissible numerical solution is always produced. For 2nd order schemes such _a priori_ limiters are available, but for higher orders they do not always perform well, either on smooth solutions (lack of accuracy) or in ensuring physical admissibility (lack of robustness). In this work we rely on an _a posteriori_ MOOD paradigm which computes an unlimited high-order candidate solution at time \(t^{n+1}\) and then detects troubled cells, which are recomputed with a lower-order accurate scheme [6]. The detection procedure marks troubled cells according to Physical, Numerical and Computer admissibility criteria, which are at the core of our definition of an acceptable numerical solution. As a proof of concept, in this work we have tested the so-called CATMOOD6 scheme based on the cascade of schemes CAT6\(\rightarrow\)CAT2\(\rightarrow\)1st, where the last scheme is a robust first order scheme.
We have tested this scheme on a suite of smooth solutions (isentropic vortex), simple shock waves (cylindrical Sedov blast wave), complex self-similar solutions involving interacting contact, shock and rarefaction waves (four-state 2D Riemann problems) and, at last, on an extreme Mach 2000 astrophysical-like jet. CATMOOD6 has passed these tests. It has preserved the optimal accuracy on smooth parts of the solutions, an essentially non-oscillatory behavior close to steep gradients, and a physically valid solution. CATMOOD6 has been compared to some 1st order schemes to challenge its robustness and to unlimited CAT2P schemes to challenge its accuracy and cost. We have observed that the percentage of cells updated with the 6th order accurate scheme is in the range \(85-100\%\), leaving only a few percent to be re-computed by the low order schemes. As such CATMOOD6 has a total cost about \(20\%\) higher than the unlimited CAT6 scheme on smooth solutions, and about \(10\%\) lower than the limited ACAT6 scheme on discontinuous solutions (for which ACAT6 does not fail). From our test campaign we have observed that CATMOOD6 presents the robustness of its 1st order scheme and the accuracy of its 6th order one where appropriate, the detection procedure being able to sort out troubled cells from valid ones.
Concerning the perspectives, the extension to 3D is solely a question of implementation and testing in a parallel environment. Finally, further extensions would be to consider different systems of PDEs, with possibly stiff source terms and the well-balanced property, or more complex models such as the Navier-Stokes equations.
## Acknowledgments
We would like to thank Simone Chiochetti for providing some reference for Sedov problem.
This research has received funding from the European Union's NextGenerationEU - Project: Centro Nazionale HPC, Big Data e Quantum Computing, "Spoke 1" (No. CUP E63C22001000006). E. Macca was partially supported by GNCS No. CUP E53C22001930001 Research Project "Metodi numerici per problemi differenziali multiscala: schemi di alto ordine, ottimizzazione, controllo". E. Macca and G. Russo are members of the INdAM Research group GNCS. The research of C. Pares is part of the research project PDC2022-133663-C21 funded by MCIN/AEI/10.13039/501100011033 and the European Union 'NextGenerationEU/PRTR', and it was partially supported by the Regional Government of Andalusia through the research group FQM-216.
| This paper combines high-order Compact Approximate Taylor (CAT) numerical methods with the a posteriori Multi-dimensional Optimal Order Detection (MOOD) paradigm to solve hyperbolic systems of conservation laws in two dimensions. The resulting schemes attain high accuracy on smooth solutions, behave in an essentially non-oscillatory way on irregular ones, and possess an almost fail-safe property with respect to positivity issues. Numerical results for these CAT-MOOD schemes are presented on a set of sanity and demanding test cases to assess their behavior. |
2310.18941 | The intrinsic geometry determined by the Cauchy problems of the
Camassa-Holm equation | Pseudospherical surfaces determined by Cauchy problems involving the
Camassa-Holm equation are considered herein. We study how global solutions
influence the corresponding surface, as well as we investigate two sorts of
singularities of the metric: the first one is just when the co-frame of dual
form is not linearly independent. The second sort of singularity is that
arising from solutions blowing up. In particular, it is shown that the metric
blows up if and only if the solution breaks in finite time. | Igor Leite Freire | 2023-10-29T09:05:39 | http://arxiv.org/abs/2310.18941v1 | # The intrinsic geometry determined by the Cauchy problems of the Camassa-Holm equation
###### Abstract
Pseudospherical surfaces determined by Cauchy problems involving the Camassa-Holm equation are considered herein. We study how global solutions influence the corresponding surface, and we investigate two sorts of singularities of the metric: the first one occurs simply when the co-frame of dual forms is not linearly independent, while the second arises from solutions blowing up. In particular, it is shown that the metric blows up if and only if the solution breaks in finite time.
**MSC classification 2020:** 35A01, 74G25, 37K40, 35Q51.
**Keywords** Equations describing pseudospherical surfaces \(\cdot\) Geometric analysis \(\cdot\) Existence of metrics \(\cdot\) Blow up of metrics
###### Contents
* 1 Introduction
* 1.1 Novelty of the manuscript
* 1.2 Outline of the manuscript
* 2 Few facts about the CH equation and the geometry determined by its solutions
* 2.1 Wave breaking of solutions
* 2.2 Geometric aspects of the CH equation
* 3 Notation, notions and main results
* 3.1 Sobolev spaces
* 3.2 Intrinsic geometry and PSS
* 3.3 Main results
* 4 Preliminaries
* 4.1 Conserved quantities
* 4.2 Auxiliary and technical results
* 5 Proof of the main results
* 5.1 Proof of theorem 3.1
* 5.2 Proof of theorem 3.2
* 5.3 Proof of Theorem 3.3
* 5.4 Proof of theorem 3.4
* 5.5 Proof of theorem 3.5
* 5.6 Proof of theorem 3.6
* 6 Finite height vs finite time of existence
* 7 Examples
* 8 Discussion
* 9 Conclusion
## 1 Introduction
Chern and Tenenblat introduced the notion of pseudospherical equations [10], connecting certain special partial differential equations (PDEs) with infinitely differentiable two-dimensional Riemannian manifolds. Roughly speaking, an equation is said to describe a pseudospherical surface (PSS equation) if it is a necessary and sufficient condition for the validity of the structure equations determining a surface of Gaussian curvature \(\mathcal{K}=-1\). As a result, solutions of such an equation determine a co-frame for a pseudospherical metric of a pseudospherical surface (PSS) with \(\mathcal{K}=-1\). This concept will be revisited in due course in the present work.
One of the most well-known equations of this type is the third order equation
\[u_{t}-u_{txx}+3uu_{x}=2u_{x}u_{xx}+uu_{xxx}, \tag{1.0.1}\]
that was deduced by Camassa and Holm [6] as a shallow water model, named after them, and shown to be a PSS equation by Reyes [56]. Amongst the many features the Camassa-Holm (CH) equation has, we would like to highlight the existence of differentiable, but not smooth, solutions breaking at finite time [13]. Throughout this paper, by smooth we mean \(C^{\infty}\).
Subsequent developments and applications of Chern and Tenenblat's ideas have considered neither solutions emanating from initial data nor solutions of finite regularity. As a result, the solutions examined in [13] are, at first sight, somewhat incompatible with the theory developed in [10]. This incongruity might, and probably does, explain the lack of studies of PSS determined by solutions of the Camassa-Holm equation with finite regularity. Very likely, this is the root of a more general fact, namely the absence of works considering surfaces determined by Cauchy problems involving PSS equations.
Only very recently has some light been shed on problems of the nature mentioned above. In [62] qualitative properties of PSS determined by a certain equation reported in [28] (but discovered in a different context in [52]) were studied in conjunction with Cauchy problems. Despite its innovative approach, such a connection was made by considering smooth solutions.
A step forward was made soon after, in [22], where PSS determined by periodic Cauchy problems involving the equation studied in [28, 62] were considered. This led the authors to prove the existence of periodic PSS and, more importantly, for the first time a finite regularity geometric problem was considered.
Papers [62, 22] are pioneering studies of PSS in connection with Cauchy problems. However, in view of the qualitative nature of the solutions of the Cauchy problems involved, no solutions developing any sort of singularity were considered.
A significant leap has been made in [31], where blowing up solutions of the CH equation were shown to determine PSS. It was proved that the corresponding metric of the surface can only exist within a strip of finite height in the \((x,t)\) plane and also experiences a blow up. Indeed, perhaps the most remarkable result proved in [31] is that any non-trivial initial datum in a certain Sobolev class will necessarily define a co-frame for a pseudospherical metric for a PSS.
The progress made in [31] did not come without a cost: a _sine qua non_ ingredient that enabled the advances carried out in [31] was the reformulation of the notions of PSS determined by an equation and of generic solutions.
Despite the significant findings reported in [31], some relevant points remain unclear: in the literature of the CH equation there are blow up scenarios other than those considered in [31], but we had little more than clues as to whether they could always be transferred to the metric and, if so, how this happens. From a completely different, not to say opposite, perspective, the results reported in [31] showed that any non-trivial initial datum defines a strip contained in the upper half-plane where a co-frame can be defined everywhere, but it remains unclear if or when we can extend the strip to the entire upper half-plane.
### Novelty of the manuscript
While the results reported in [31] showed that any non-trivial initial datum gives rise to a PSS whose dual co-frame is defined on a strip of height \(T>0\), they left many open questions. For example, the blow up mechanisms explored in that reference are not the only ones leading to a breakdown of the solutions. In addition, no attempt has been made to consider the possibility of having \(T=\infty\), nor problems concerning how persistence properties of the solutions may affect the corresponding PSS.
This paper shows that we _may_ have PSS defined on subsets of arbitrary height. Moreover, we additionally study PSS determined by compactly supported initial conditions, precisely describing the asymptotic behaviour of the metric. Furthermore, other blow up conditions for the metrics are also predicted. More importantly, we prove that the metric blows up if and only if the solution develops wave breaking.
### Outline of the manuscript
In section 2 we revisit some basic and relevant aspects of the CH equation, with main focus on the two-dimensional Riemannian geometry determined by its solutions and open problems regarding its geometric analysis. Next, in section 3 we fix the notation used throughout the manuscript, recall basic notions and state our main results. In section 4 we revisit some useful facts, such as conserved quantities and qualitative results regarding the CH equation, that are widely employed in section 5, where our main results are proved. In section 6 we show that the metric of the surface blows up if and only if the solution breaks in finite time. Some examples illustrating our main results are discussed in section 7, while our discussions and conclusions are presented in sections 8 and 9, respectively.
## 2 Few facts about the CH equation and the geometry determined by its solutions
Despite being primarily deduced as an approximation for the description of waves propagating in shallow water regimes, the equation proved to have several interesting properties related to integrability [6]. If we denote
\[m(x,t):=u(x,t)-u_{xx}(x,t),\]
which is known as momentum [6], then (1.0.1) can be rewritten as an evolution equation for \(m\), namely,
\[m_{t}+2u_{x}m+um_{x}=0. \tag{2.0.1}\]
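The equivalence between (1.0.1) and (2.0.1) is a purely algebraic substitution; a minimal sympy sketch of this check, assuming nothing beyond \(m=u-u_{xx}\), reads:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
u = sp.Function('u')(x, t)
m = u - sp.diff(u, x, 2)                      # momentum m = u - u_xx

evolution_form = sp.diff(m, t) + 2*sp.diff(u, x)*m + u*sp.diff(m, x)        # Eq. (2.0.1)
original_form  = (sp.diff(u, t) - sp.diff(u, t, x, x) + 3*u*sp.diff(u, x)
                  - 2*sp.diff(u, x)*sp.diff(u, x, 2) - u*sp.diff(u, x, 3))  # Eq. (1.0.1)

print(sp.simplify(evolution_form - original_form))   # 0: the two forms coincide
```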
It was shown in [6] that (2.0.1) has a bi-Hamiltonian structure, having the representations
\[m_{t}=-\mathcal{B}_{1}\frac{\delta\mathcal{H}_{2}}{\delta m}=-\mathcal{B}_{2} \frac{\delta\mathcal{H}_{1}}{\delta m},\]
where
\[\mathcal{B}_{1}(\cdot)=\partial_{x}(1-\partial_{x}^{2})(\cdot),\quad\mathcal{ B}_{2}(\cdot)=\partial_{x}(m\cdot)+m\partial_{x}(\cdot)\]
are the Hamiltonian operators, and the functionals \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) are
\[\mathcal{H}_{1}=\frac{1}{2}\int_{\mathbb{R}}(u^{2}+u_{x}^{2})dx,\quad\mathcal{H}_ {2}=\frac{1}{2}\int_{\mathbb{R}}(u^{3}+uu_{x}^{2})dx. \tag{2.0.2}\]
As a consequence of its bi-Hamiltonian structure, (2.0.1) also has a recursion operator \(\mathcal{R}=\mathcal{B}_{2}\mathcal{B}_{1}^{-1}\) and infinitely many symmetries, being integrable in this sense as well. The reader is referred to [54, Chapter 7] or [53] for further details about recursion operators and integrability.
It is also worth mentioning that Camassa and Holm exhibited a Lax formulation [6] for (1.0.1)
\[\psi_{xx}=\Big{(}\frac{1}{4}-\frac{m}{2\lambda}\Big{)}\psi,\ \ \psi_{t}=-(\lambda+u)\psi_{x}+\frac{1}{2}u_{x}\psi \tag{2.0.3}\]
as well as continuous, piecewise smooth, soliton-like solutions, called peakons. For a review on the Camassa-Holm and related equations, see [25].
### Wave breaking of solutions
Soon after the seminal work [6], interest in the equation spread beyond the field of integrable equations and reached the realm of applied analysis. Solutions emanating from Cauchy problems involving initial data in certain Banach spaces were proved to be locally well-posed, being global under additional conditions [12]. Even more interestingly, depending on the slope of the initial datum there exists a finite value \(T>0\) (the lifespan of the solution), such that
\[\liminf_{t\to T}\big{(}\inf_{x\in\mathbb{R}}u_{x}(x,t)\big{)}=-\infty. \tag{2.1.1}\]
This fact was first observed in [6] and its rigorous demonstration and dependence on the initial datum was shown by Constantin and Escher [13], see also the review [23].
The Hamiltonian \(\mathcal{H}_{1}\) in (2.0.2) is equivalent to the square of the Sobolev \(H^{1}(\mathbb{R})-\)norm of the solution, meaning that solutions with sufficient decay at infinity (such as those emanating from initial data \(u_{0}\in H^{s}(\mathbb{R})\), \(s>3/2\)) remain uniformly bounded by the norm of the initial datum as long as they exist [12, 23, 60].
On the other hand, (2.1.1) says that the first singularity (blow up) of a solution, if it occurs, is manifested by the non-existence of any lower bound of \(u_{x}\) as \(t\) approaches a finite time \(T\), at least for some point \(x\in\mathbb{R}\)[13, Theorem 4.2]. This sudden steepening shown in (2.1.1), while the solution itself remains bounded, is better known as wave breaking (of \(u\)).
### Geometric aspects of the CH equation
The Camassa-Holm (CH) equation, or its solutions, can also be studied from geometric perspectives [14, 15, 16, 56]. We shall briefly discuss [14, 56], which are the main inspirations for this paper, the first being concerned with infinite dimensional Riemannian geometry, whereas the latter is concerned with an abstract two-dimensional Riemannian manifold, whose importance for this paper is crucial.
Equation (1.0.1) can be associated with a geometric flow in an infinite dimensional manifold \(\mathcal{D}^{3}(\mathbb{R})\) modelled by a Hilbert space, which can be endowed with a (weak) Riemannian metric [14]. The geodesics in \(\mathcal{D}^{3}(\mathbb{R})\) can either exist globally [14, Theorem 6.1] or break down in finite time [14, Theorems 6.3 and 6.4] and, in particular, geodesics starting, at the identity, with initial velocity corresponding to
initial datum leading to breaking solutions will also develop singularities at finite time [14, Theorem 6.3].
A different geometric perspective for the CH equation was given by Reyes [56], who showed that it describes pseudospherical surfaces [56, Theorem 1] _a la_ Chern and Tenenblat [10], e.g. see [7, Definition 2.1].
**Definition 2.1**.: _A pseudospherical surface \((PSS)\) is a two-dimensional Riemannian manifold whose Gaussian curvature is constant and negative._
For now it suffices saying that an equation describes pseudospherical surfaces, or is of the pseudospherical type, henceforth referred as PSS equation, when the equation is the compatibility condition of the structure equations
\[d\omega_{1}=\omega_{3}\wedge\omega_{2},\quad d\omega_{2}=\omega_{1}\wedge \omega_{3},\quad d\omega_{3}=-\mathcal{K}\omega_{1}\wedge\omega_{2}, \tag{2.2.1}\]
for a PSS.
That said, in section 2 we show how from a given solution of the CH equation we can construct intrinsically a two-dimensional manifold having Gaussian curvature \(\mathcal{K}=-1\).
The impossibility of a complete realisation of these surfaces in three-dimensional space is a consequence of Hilbert's theorem, which states that one cannot isometrically immerse a complete surface of constant negative curvature into \(\mathbb{R}^{3}\)[51, page 439], [46]. See also [21, section 5-11] for a proof of the Hilbert theorem. This explains the adjective _abstract_ often used to qualify a PSS.
In his work Reyes showed that if \(u\) is a solution of the CH equation, \(m\) is its corresponding momentum, then the one-forms
\[\omega_{1} = \Big{(}\frac{\lambda}{2}+\frac{1}{2\lambda}-m\Big{)}dx+\Big{(}um +\frac{\lambda}{2}u-\frac{u}{2\lambda}-\frac{1}{2}-\frac{\lambda^{2}}{2}\Big{)}dt,\] \[\omega_{2} = -u_{x}dt, \tag{2.2.2}\] \[\omega_{3} = \Big{(}m+\frac{1}{2\lambda}-\frac{\lambda}{2}\Big{)}dx+\Big{(} \frac{\lambda^{2}}{2}-\frac{1}{2}-\frac{u}{2\lambda}-\frac{\lambda}{2}u-um \Big{)}dt,\]
satisfy (2.2.1), for any \(\lambda\in\mathbb{R}\setminus\{0\}\) and \(\mathcal{K}=-1\). This implies that the domain of the solution \(u\), under certain circumstances, can be endowed with a Riemannian metric \(g=\omega_{1}^{2}+\omega_{2}^{2}\) of a PSS, also known as first fundamental form of the surface. From (2.2.2), the corresponding metric is
\[g = \Big{(}\frac{\lambda}{2}+\frac{1}{2\lambda}-m\Big{)}^{2}dx^{2}+2 \Big{(}\frac{\lambda}{2}+\frac{1}{2\lambda}-m\Big{)}\Big{(}um+\frac{\lambda} {2}u-\frac{u}{2\lambda}-\frac{1}{2}-\frac{\lambda^{2}}{2}\Big{)}dxdt\] \[+ \Big{[}u_{x}^{2}+\Big{(}um+\frac{\lambda}{2}u-\frac{u}{2\lambda} -\frac{1}{2}-\frac{\lambda^{2}}{2}\Big{)}^{2}\Big{]}dt^{2}=:g_{11}dx^{2}+2g_{12 }dxdt+g_{22}dt^{2}.\]
More precisely, the work of Reyes showed that, in fact, the Camassa-Holm equation is geometrically integrable, in the sense that its solutions may describe a one-parameter family of non-trivial pseudospherical surfaces [56, Corollary 1]. This reflects the fact that the parameter \(\lambda\) in (2.2.2) cannot be removed by a gauge transformation.
While in [14] the influence of solutions emanating from Cauchy problems is crucial in the study of the existence and formation of singularities of geodesics, this is a point usually not considered in the literature of PSS equations, see [8, 9, 10, 17, 18, 20, 22, 23, 41, 55, 56, 40, 41, 57, 58, 59, 63] and references therein. Moreover, in the study of PSS and PDEs the solutions are assumed to be smooth, very often implicitly, but sometimes clearly mentioned [8, page 2] and [41, page 2].
A smooth solution of a PSS equation leads to smooth one-forms \(\omega_{1},\omega_{2},\omega_{3}\) and then the corresponding first fundamental form will inherit the same regularity. The solutions considered by Constantin [14], on the contrary, are not necessarily \(C^{\infty}\), showing an enormous difference among [14] and [7, 8, 55, 56, 57, 58, 59, 63] in terms of the regularity of the objects considered.
Additionally, in the context of the literature of PDEs and PSS, the problem of uniqueness of solutions is not usually discussed, and thus the question of whether a given first fundamental form could be associated with one or more solutions of the CH equation has not yet been considered. Therefore, a situation like the one shown in Figure 2 is very likely to happen: how can one study the intrinsic geometry associated to the solutions of the CH equation when our only information is a known curve on the boundary of its graph?
## 3 Notation, notions and main results
Throughout this paper \(u=u(x,t)\) denotes a function depending on the variables \(x\) and \(t\), whose physical meanings, when considering the model (1.0.1), are the height of the free surface of water above a flat bottom, space, and time, respectively. From a geometric point of view, \(x\) and \(t\) are coordinates of a domain in \(\mathbb{R}^{2}\) in which the function \(u\) is defined. We denote by \(u(x,\cdot)\) and \(u(\cdot,t)\) the functions \(t\mapsto u(x,t)\), for fixed \(x\), and \(x\mapsto u(x,t)\), for fixed \(t\), respectively.
Given two non-empty and connected subsets \(I,J\subseteq\mathbb{R}\), the notation \(u\in C^{0}(I\times J)\) means that \(u=u(x,t)\) is continuous with respect to both variables in \(I\times J\). By \(u_{x}\) or \(\partial_{x}u\) we denote the partial derivative of \(u\) with respect to its first argument, while \(u_{t}\) or \(\partial_{t}u\) denotes the partial derivative with respect to the second argument. Higher order derivatives are denoted using a similar convention.
Figure 1: Graphs of solutions \(u(x,t)=e^{x-ct}\) of the CH equation for different values of \(c\). The curve \(x\mapsto(x,0,e^{x})\), highlighted in red, belongs to the graph of this function for any value of \(c\).
The set of ordered \(n-th\) derivatives of \(u\), \(n\in\mathbb{N}\), is denoted by \(u_{(n)}\). By convention, \(u_{(0)}=u\). Whenever \(u\) and all its derivatives up to order \(k\in\mathbb{N}\cup\{0\}\) are continuous on the domain of \(u\), we then write \(u\in C^{k}\). The set of smooth functions defined on a domain \(\Omega\subseteq\mathbb{R}^{2}\) is denoted by \(C^{\infty}(\Omega)\).
Given \(n\in\mathbb{N}\), a non-empty set \(I\subseteq\mathbb{R}\) and a Banach space \(X\), we say that \(u\in C^{n}(X,I)\) whenever \(\partial_{t}^{k}u(\cdot,t)\in C^{0}(X,I)\), \(0\leq k\leq n\). Moreover, \(u\in C^{0}(X,I)\) means \(u(\cdot,t)\in X\) and \(\|u\|_{C^{0}}=\sup_{t\in I}\|u(\cdot,t)\|_{X}\).
### Sobolev spaces
The Sobolev spaces \(H^{s}(\mathbb{R})\) and the \(L^{p}(\mathbb{R})\) spaces, \(s\in\mathbb{R}\) and \(1\leq p\leq\infty\), are the most relevant Banach spaces used throughout this work. Familiarity with \(L^{p}(\mathbb{R})\) spaces is presupposed, whereas we opt to revisit some basic facts about Fourier analysis and Sobolev spaces due to their importance for our developments. For further details, see [64, Chapter 4].
The set of smooth rapidly decaying functions (Schwartz space) is denoted by \(\mathcal{S}(\mathbb{R})\), whereas its dual space is denoted by \(\mathcal{S}^{\prime}(\mathbb{R})\). Elements of \(\mathcal{S}(\mathbb{R})\) are called _test functions_, while those lying in its dual are known as _tempered distributions_. The Fourier transform of a test function \(\phi\) is denoted and given, respectively, by \(\hat{\phi}\) and
\[\mathcal{F}(\phi)(\xi)=\hat{\phi}(\xi):=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R} }\phi(x)e^{-ix\xi}dx,\]
whose inverse is
\[\phi(x)=\mathcal{F}^{-1}(\hat{\phi})(x)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R }}\hat{\phi}(\xi)e^{ix\xi}d\xi.\]
The Fourier transform of a tempered distribution \(\psi\), denoted by \(\hat{\psi}\), can be defined through the relation \(\langle\phi,\mathcal{F}(\psi)\rangle=\langle\mathcal{F}(\phi),\psi\rangle\).
The _Sobolev space_ of order \(s\in\mathbb{R}\), denoted by \(H^{s}(\mathbb{R})\), is the set of tempered distributions \(f\in\mathcal{S}^{\prime}(\mathbb{R})\) such that \((1+|\xi|^{2})^{s/2}\hat{f}(\xi)\in L^{2}(\mathbb{R})\), that has a natural inner product induced by \(L^{2}(\mathbb{R})\)
\[\langle f,g\rangle_{s}=\int_{\mathbb{R}}(1+\xi^{2})^{s}\hat{f}(\xi)\overline{ \hat{g}(\xi)}d\xi. \tag{3.1.1}\]
We denote by \(\langle\cdot,\cdot\rangle_{s}\) and \(\|\cdot\|_{s}\), \(s\in\mathbb{R}\), the inner product in \(H^{s}(\mathbb{R})\) and its induced norm, respectively, whereas by \(\|\cdot\|_{L^{p}(\mathbb{R})}\) we denote the norm in the \(L^{p}(\mathbb{R})\) space, for finite \(p\), and \(\|\cdot\|_{\infty}\) otherwise. In particular, \(\mathcal{S}(\mathbb{R})\subset H^{s}(\mathbb{R})\subset H^{t}(\mathbb{R}) \subset\mathcal{S}^{\prime}(\mathbb{R})\), for any \(s\geq t\).
The following is a cornerstone result for our developments.
**Lemma 3.1**.: (Sobolev Embedding Theorem, [64, Proposition 1.2, page 317]) _If \(s>1/2\), then each \(u\in H^{s}(\mathbb{R})\) is bounded and continuous. In addition, if \(s>1/2+k\), \(k\in\mathbb{N}\), then \(H^{s}(\mathbb{R})\subseteq C^{k}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\)._
As we will soon see, the natural Sobolev space for our purposes is precisely \(H^{4}(\mathbb{R})\), which, in view of the precedent result, is embedded into \(C^{3}(\mathbb{R})\cap L^{\infty}(\mathbb{R})\).
Let us recall the isomorphism \(\Lambda^{s}:H^{\sigma}\to H^{\sigma-s}\), with \(s,\;\sigma\in\mathbb{R}\), given by \(\Lambda^{s}f:=\mathcal{F}^{-1}((1+\xi^{2})^{s/2}\hat{f})\). For us, the most relevant members of this family are just \(s=2\) and its inverse. For this reason we
shall pay closer attention to the operators \(\Lambda^{2}:H^{4}(\mathbb{R})\to H^{2}(\mathbb{R})\) and \(\Lambda^{-2}:H^{2}(\mathbb{R})\to H^{4}(\mathbb{R})\).
It is a well known property of the Fourier transform that \(\mathcal{F}(f^{(n)})(\xi)=(i\xi)^{n}\hat{f}(\xi)\), where we assume \(f\in C^{n}(\mathbb{R})\). Moreover, in view of linearity, we have
\[(\Lambda^{2}(f))(x)=(\mathcal{F}^{-1}((1+\xi^{2})\hat{f}))(x)=(\mathcal{F}^{-1}(\hat{f}))(x)-(\mathcal{F}^{-1}(-\xi^{2}\hat{f}))(x)=((1-\partial_{x}^{2})f)(x),\]
and then, \(\Lambda^{2}=1-\partial_{x}^{2}\).
On the other hand, let us define \(h\) by \(\hat{h}(\xi)=\hat{g}(\xi)\hat{f}(\xi)\). Then \((\mathcal{F}(fg))(\xi)=\sqrt{2\pi}\hat{h}(\xi)\) and \(h(x)=(g*f)(x)\), where \(*\) denotes the usual convolution between two functions. In particular, if we consider
\[g(x)=\frac{e^{-|x|}}{2}, \tag{3.1.2}\]
we have
\[(\mathcal{F}(g))(\xi)=\frac{1}{\sqrt{2\pi}}\frac{1}{1+\xi^{2}},\]
and then,
\[(\Lambda^{-2}(f))(x)=\mathcal{F}^{-1}((1+\xi^{2})^{-1}\hat{f})(x)=\sqrt{2\pi} (\mathcal{F}^{-1}(\hat{g}\hat{f}))(x)=(g*f)(x).\]
In view of the comments above, given a function \(u_{0}\in H^{s}(\mathbb{R})\), then it uniquely defines another function \(m_{0}(x)=\Lambda^{2}(u_{0})=u_{0}(x)-u_{0}^{\prime\prime}(x)\) and vice-versa
\[u_{0}(x)=(\Lambda^{-2}m_{0})(y)=\frac{1}{2}\int_{\mathbb{R}}e^{-|x-y|}m_{0}(y )dy.\]
Another frequent operator seen in this paper is
\[\partial_{x}\Lambda^{-2}=(\partial_{x}g)(x)=-\frac{\text{sgn}\,(x)}{2}e^{-|x|}, \tag{3.1.3}\]
that acts on \(f\) through the formula \((\partial_{x}\Lambda^{-2}(f))(x)=-\frac{1}{2}(\text{sgn}\,(\cdot)e^{-|\cdot|}* f(\cdot))(x)\).
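As a numerical illustration of the convolution representation of \(\Lambda^{-2}\), the sketch below reconstructs a hypothetical Gaussian datum \(u_{0}(x)=e^{-x^{2}}\) from its momentum \(m_{0}=u_{0}-u_{0}^{\prime\prime}\) by discrete convolution with the kernel (3.1.2):

```python
import numpy as np

# grid wide enough that u0 and the kernel have decayed at the boundary
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

u0 = np.exp(-x**2)                      # hypothetical datum u0(x) = exp(-x^2)
m0 = (3 - 4*x**2) * np.exp(-x**2)       # m0 = u0 - u0'' computed analytically
g  = 0.5 * np.exp(-np.abs(x))           # kernel of Lambda^{-2}, Eq. (3.1.2)

u_rec = np.convolve(g, m0, mode='same') * dx   # (g * m0)(x) on the grid
print(np.max(np.abs(u_rec - u0)))              # close to zero (quadrature error only)
```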
### Intrinsic geometry and PSS
Let \(\mathbb{E}\) be the usual three-dimensional Euclidean space, with canonical inner product \(\langle\cdot,\cdot\rangle\), and let \(\mathcal{M}\subseteq\mathbb{E}\) be an open, non-empty set, which we shall henceforth identify with a surface. A one-form \(\omega=f(x,t)dx+g(x,t)dt\) defined on \(\mathcal{M}\) is said to be of class \(C^{k}\) if and only if its coefficients \(f\) and \(g\) are \(C^{k}\) functions.
We say that a triad of \(C^{k}\) one-forms \(\{\omega_{1},\omega_{2},\omega_{3}\}\) endows \(\mathcal{M}\) with a PSS structure with Gaussian curvature \(\mathcal{K}=-1\) if \(\{\omega_{1},\omega_{2}\}\) is linearly independent, which is expressed through the condition \(\omega_{1}\wedge\omega_{2}\big{|}_{\mathcal{M}}\neq 0\), and the following equations
\[d\omega_{1}=\omega_{3}\wedge\omega_{2},\quad d\omega_{2}=\omega_{1}\wedge \omega_{3},\quad d\omega_{3}=\omega_{1}\wedge\omega_{2} \tag{3.2.1}\]
are satisfied.
The form \(\omega_{3}\) is called _Levi-Civita connection_ and it is completely determined by the other two one-forms [51, Lemma 5.1, page 289], as well as the Gaussian curvature of \(\mathcal{M}\)[51, Theorem 2.1, page 329]. Since the forms \(\omega_{1},\omega_{2}\), for each point \(p\in\mathcal{M}\), are dual elements of the basis of the corresponding tangent space, then they are intrinsic objects associated to the surface, as well as any other geometry object described only by them.
**Definition 3.1**.: _Let \(\omega_{1}\) and \(\omega_{2}\) be given one-forms on a surface \(\mathcal{M}\) in \(\mathbb{E}\), such that \(\{\omega_{1},\omega_{2}\}\) is LI, and \(p\in\mathcal{M}\). The first fundamental form of \(\mathcal{M}\) is defined, on each tangent space \(T_{p}\mathcal{M}\) and for any \(v\in T_{p}\mathcal{M}\), by \(I(v)=\omega_{1}(v)^{2}+\omega_{2}(v)^{2}\)._
Using the convention \(\alpha\beta=\alpha\otimes\beta\) and \(\alpha^{2}=\alpha\alpha\), for any one-forms \(\alpha\) and \(\beta\), we can rewrite the first fundamental form as
\[I=\omega_{1}^{2}+\omega_{2}^{2}. \tag{3.2.2}\]
### Main results
Let us now introduce important and sensitive notions for our main purposes.
**Definition 3.2**.: _Let \(u=u(x,t)\); \(\Omega\subseteq\mathbb{R}^{2}\) a non-empty, open and simply connected set, and consider a differential equation for \(u\). A function \(v:\Omega\to\mathbb{R}\) is said to be a classical, or strong, solution for an equation_
\[\mathcal{E}(x,t,u,u_{(1)},\cdots,u_{(n)})=0, \tag{3.3.1}\]
_if:_
* \(v\) _possesses as many continuous derivatives (pure or mixed) as needed to make the equation well defined;_
* \(v\) _satisfies the equation pointwise, that is_ \[\mathcal{E}(x,t,u,u_{(1)},\cdots,u_{(n)})\Big{|}_{u=v}\equiv 0.\]
_In addition, we say that \(u\) is a strong solution of (3.3.1) subject to an initial condition \(u\big{|}_{X}=u_{0}\) if \(u\) is a solution as previously described and \(u\in C^{0}(X\cup\Omega)\)._
**Example 3.1**.: _Since the CH equation (1.0.1) has the terms \(u_{t}\), \(u_{x}\), \(u_{xx}\), \(u_{txx}\) and \(u_{xxx}\), then any strong solution defined on an open set \(\Omega\subseteq\mathbb{R}^{2}\) has to have these derivatives continuous._
**Definition 3.3**.: _In view of the Example 3.1, the set of functions defined on a set \(\Omega\subseteq\mathbb{R}^{2}\) for which \(u\), \(u_{t}\), \(u_{x}\), \(u_{xx}\), \(u_{txx}\) and \(u_{xxx}\) are all continuous is denoted by \(C^{3,1}(\Omega)\)._
In the class of Sobolev spaces (with a suitable order) we can see the CH equation as a non-local evolution equation, or a dynamical system, [12, 13, 14, 60], and a straightforward calculation shows that if \(v\in C^{0}(H^{4}(\mathbb{R}),[0,T))\cap C^{1}(H^{3}(\mathbb{R}),[0,T))\), then \(v\in C^{3,1}(\mathbb{R}\times(0,T))\subseteq C^{1}(\mathbb{R}\times(0,T))\), and
\[v_{t}-v_{txx}+3vv_{x}-2v_{x}v_{xx}-vv_{xxx}=(1-\partial_{x}^{2})\Big{(}v_{t}+ vv_{x}+\partial_{x}\Lambda^{-2}\Big{(}v^{2}+\frac{v_{x}^{2}}{2}\Big{)}\Big{)}. \tag{3.3.2}\]
Suppose that \(v\) is a solution of the CH equation (1.0.1). Then \(v\) is a solution of the non-local (first order) evolution equation
\[u_{t}+uu_{x}+\partial_{x}\Lambda^{-2}\Big{(}u^{2}+\frac{u_{x}^{2}}{2}\Big{)}=0. \tag{3.3.3}\]
Conversely, assuming that \(v\in C^{0}(H^{4}(\mathbb{R}),[0,T))\cap C^{1}(H^{3}(\mathbb{R}),[0,T))\) is a solution of (3.3.3), then (3.3.2) tells us that \(v\) is a solution of (1.0.1).
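Identity (3.3.2) can be checked symbolically once the nonlocal term is handled through \((1-\partial_{x}^{2})\partial_{x}\Lambda^{-2}=\partial_{x}\); a minimal sympy sketch:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
v = sp.Function('v')(x, t)

# (1 - d_x^2) applied to the local part v_t + v v_x
local = v.diff(t) + v*v.diff(x)
rhs = local - local.diff(x, 2)
# nonlocal part: (1 - d_x^2) d_x Lambda^{-2} w = d_x w, with w = v^2 + v_x^2/2
rhs += sp.diff(v**2 + v.diff(x)**2/2, x)

# left-hand side of (3.3.2), i.e. the CH equation (1.0.1) with all terms on one side
lhs = (v.diff(t) - v.diff(t, x, x) + 3*v*v.diff(x)
       - 2*v.diff(x)*v.diff(x, 2) - v*v.diff(x, 3))

print(sp.simplify(lhs - rhs))   # 0
```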
**Example 3.2**.: _If we consider the non-local form of the CH equation (3.3.3), then any strong solution \(v:\Omega\to\mathbb{R}\) belongs to \(C^{1}(\Omega)\)._
**Example 3.3**.: _As previously mentioned, the sets of solutions for (3.3.3) and (1.0.1) are not the same: any \(C^{3,1}\) solution of the former is a solution of the latter. The converse, however, is not true._
_Consider the function \(u(x,t)=e^{x+t}\). A straightforward inspection shows that it is a solution of (1.0.1) in the sense of definition 3.2, but \(u(\cdot,t)\notin L^{2}(\mathbb{R})\), for any \(t\), meaning that the convolutions involving \(u^{2}\) and \(u_{x}^{2}\) and (3.1.3) are not defined._
The above examples show that a solution of (1.0.1) is not necessarily a solution of (3.3.3), although the two notions agree for solutions belonging to \(H^{s}(\mathbb{R})\) for \(s\) sufficiently large.
The observations made above are well known facts in the literature of the CH equation, but in view of their importance in the development of this manuscript, we want to give them the needed attention.
**Proposition 3.1**.: _Let \(u\in C^{0}(H^{4}(\mathbb{R}),[0,T))\cap C^{1}(H^{3}(\mathbb{R}),[0,T))\). Then \(u\) is a classical solution of the CH equation (1.0.1) if and only if \(u\) is a classical solution of the non-local equation (3.3.3). Moreover, in such a class, the Cauchy problem_
\[\left\{\begin{array}{l}m_{t}+2u_{x}m+um_{x}=0,\\ \\ u(x,0)=u_{0}(x)\end{array}\right. \tag{3.3.4}\]
_is equivalent to_
\[\left\{\begin{array}{l}u_{t}+uu_{x}+\partial_{x}\Lambda^{-2}\Big{(}u^{2}+ \frac{u_{x}^{2}}{2}\Big{)}=0,\\ \\ u(x,0)=u_{0}(x).\end{array}\right. \tag{3.3.5}\]
In other words, proposition 3.1 says that (1.0.1) and (3.3.3) are the same object in the class \(C^{0}(H^{4}(\mathbb{R}),[0,T))\cap C^{1}(H^{3}(\mathbb{R}),[0,T))\).
The Cauchy problem (3.3.5) is more convenient for addressing the questions raised in the Introduction. In fact, in view of the tools developed by Kato [43], we can establish the existence and uniqueness of a solution \(u\in\mathcal{B}:=C^{0}(H^{s}(\mathbb{R}),[0,T))\cap C^{1}(H^{s-1}(\mathbb{R} ),[0,T))\), \(s>3/2\), for (3.3.5) emanating from an initial datum \(u_{0}\in H^{s}(\mathbb{R})\)[60, Theorem 3.2]. While any function in \(\mathcal{B}\) is \(C^{1}\) with respect to \(t\), its regularity regarding \(x\) is controlled by \(s\). Therefore, taking \(s\) sufficiently large we can reach a higher regularity of the solution with respect to \(x\), making it also a solution for (3.3.4). See also [30].
It is time to return to PSS equations. As we have already pointed out, several notions in this field were introduced, and have been used, assuming, implicitly or explicitly, smooth solutions; see [59, Definition 2.4], [7, page 89], [42, page 89], [8, page 2] and [41, page 2], respectively. On the other hand, our paper aims at seeing (3.3.3) as a PSS equation and thus we need to look for notions that do not require \(C^{\infty}\) regularity of the studied objects.
**Definition 3.4**.: (\(C^{k}\) PSS modelled by \(\mathcal{B}\) and B-PSS equation, [31, Definition 2.1]) _Let \(\mathcal{B}\) be a function space. A differential equation (3.3.1), for a dependent variable \(u\in\mathcal{B}\), is said to describe a pseudospherical surface of class \(C^{k}\) modelled by \(\mathcal{B}\), \(k\in\mathbb{N}\), or it is said to be of pseudospherical type modelled by \(\mathcal{B}\), if it is a necessary and sufficient condition for the existence of functions \(f_{ij}=f_{ij}(x,t,u,u_{(1)},\cdots,u_{(\ell)})\), \(1\leq i\leq 3,\,1\leq j\leq 2\), depending on \(u\) and its derivatives up to a finite order \(\ell\), such that:_
* \(\mathcal{B}\subseteq C^{k}\)
* _the functions_ \(f_{ij}\) _are_ \(C^{k}\) _with respect their arguments;_
* _the forms_ \[\omega_{i}=f_{i1}dx+f_{i2}dt,\quad 1\leq i\leq 3,\] (3.3.6) _satisfy the structure equations of a pseudospherical surface (_3.2.1_);_
* _the condition_ \(\omega_{1}\wedge\omega_{2}\not\equiv 0\) _is satisfied._
If the function space is clear from the context and no confusion is possible, we maintain the original terminology introduced in the works by Tenenblat and co-authors and simply say PSS equation in place of \(\mathcal{B}-\)PSS equation.
Whatever the function space \(\mathcal{B}\) is, the first condition asks it to be a subset of \(C^{k}\), which is the space that ultimately controls the regularity of the surface.
It is possible to find books in differential geometry requiring \(C^{2}\) metrics for a surface, which would force the one-forms to be \(C^{2}\)[44, Theorem 4.24, page 153]. However, [34, Theorems 10-19 and 10-19, page 232] and [34, Theorem 10-18, page 232] require \(C^{1}\) regularity of the one-forms defining a surface (and thus, a \(C^{1}\) metric). It is worth noticing that this is the same regularity required by Hartman and Wintner [35, page 760], who proved a sort of Bonnet theorem requiring a \(C^{1}\) metric of a surface defined on a domain in \(\mathbb{R}^{2}\).
**Remark 3.1**.: _The third condition in definition 3.4 is satisfied if we are able to find functions \(\mu_{1}\), \(\mu_{2}\) and \(\mu_{3}\), depending on \(u\) and its derivatives up to a finite order, vanishing identically on the solutions of the equation, that is,_
\[d\omega_{1}-\omega_{3}\wedge\omega_{2}=\mu_{1}dx\wedge dt,\ d\omega_{2}-\omega_ {1}\wedge\omega_{3}=\mu_{2}dx\wedge dt,\ d\omega_{3}-\omega_{1}\wedge\omega_{2 }=\mu_{3}dx\wedge dt,\]
_and_
\[\mu_{1}\big{|}_{(3.3.1)}\equiv 0,\ \ \mu_{2}\big{|}_{(3.3.1)}\equiv 0,\ \ \mu_{3}\big{|}_{(3.3.1)}\equiv 0.\]
**Remark 3.2**.: _In practical terms, the components of the functions \(f_{ij}\), jointly with the conditions in Definition 3.4, tell us the regularity we have to ask from the solution of the Cauchy problem in order to define a PSS. The final regularity that can be achieved is dictated by these coefficients and by the regularity required to grant the existence of solutions from the available tools for proving their well-posedness._
**Remark 3.3**.: _The fourth condition is present for technical reasons, to avoid the situation \(d\omega_{3}=0\), which would imply that \(\omega_{1}=\alpha\omega_{2}\), for some \(\alpha\in\mathbb{R}\). In practical aspects, this condition has to be verified case by case, depending on the solution. Despite being technical, this requirement truly ensures a surface structure in definition 3.4._
While definition 3.4 of \(\mathcal{B}-\)PSS equation has made only a minor modification in the previous one (that by Chern and Tenenblat), the same cannot be said about our proposed notion for a generic solution.
**Definition 3.5**.: (Generic solution, [31, Definition 2.2]) _A classical solution \(u:U\to\mathbb{R}\) of (3.3.1) is called generic solution for a \(C^{k}\) PSS equation (3.3.1) if:_
* \(u\in\mathcal{B}\)_;_
* _It is a strong solution in the sense of definition_ 3.2_;_
* _The one-forms (_3.3.6_) are_ \(C^{k}\) _on_ \(U\)
_d) There exists at least a simply connected open set \(\Omega\subseteq U\) such that \(\omega_{1}\wedge\omega_{2}\big{|}_{p}\neq 0\), for any \(p\in\Omega\)._
_Otherwise, \(u\) is said to be non-generic._
Let us show that the CH equation (1.0.1) is a \(C^{0}(H^{s}(\mathbb{R}),[0,T))\cap C^{1}(H^{s-1}(\mathbb{R}),[0,T))-\)PSS equation.
**Example 3.4**.: _Let \(\lambda\in\mathbb{R}\setminus\{0\}\); \(\Omega\subseteq\mathbb{R}\times[0,T)=:U\) be an open and simply connected set, \(u\) be a solution of the CH equation defined on \(\Omega\), with either \(u_{x}\big{|}_{\Omega}>0\) or \(u_{x}\big{|}_{\Omega}<0\); and suppose that \(u\) satisfies the CH equation on \(\mathring{U}=\mathbb{R}\times(0,T)\). Consider the triad of one-forms (2.2.2). A straightforward calculation shows that_
\[d\omega_{1}-\omega_{3}\wedge\omega_{2} = \Big{(}m_{t}+2u_{x}m+um_{x}\Big{)}dx\wedge dt,\] \[d\omega_{2}-\omega_{1}\wedge\omega_{3} = 0, \tag{3.3.7}\] \[d\omega_{3}-\omega_{1}\wedge\omega_{2} = -\Big{(}m_{t}+2u_{x}m+um_{x}\Big{)}dx\wedge dt,\]
_and_
\[\omega_{1}\wedge\omega_{2}=-\Big{(}\frac{\lambda}{2}+\frac{1}{2\lambda}-m \Big{)}u_{x}dx\wedge dt. \tag{3.3.8}\]
_Moreover, if \(u\) is a solution of the CH equation, we conclude that \(\omega_{1}\wedge\omega_{2}=0\) if and only if_
\[m=\frac{\lambda}{2}+\frac{1}{2\lambda}\quad\text{or}\quad u_{x}=0,\]
_that, substituted into (2.0.1), implies_
\[u(x,t)=c, \tag{3.3.9}\]
_for some constant \(c\)._
_The minimum regularity we can require to define a surface is \(C^{1}\), see [34, Theorems 10-19 and 10-19, page 232]. Therefore, the component functions of the one-forms (2.2.2) have to be of this order, which, in particular, implies \(m\in C^{1}\). As such, \(u\) has to be at least \(C^{3}\) with respect to \(x\) and \(C^{1}\) with respect to \(t\), with continuous mixed derivatives. As a result, the CH equation is a PSS equation modelled by the function space \(\mathcal{B}:=C^{3,1}(U)\) and \(u\) is a generic solution for the equation, bringing to \(\Omega\) the structure of a PSS._
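The computation reported in (3.3.7)-(3.3.8) can be reproduced symbolically from the forms (2.2.2) alone; in the sketch below the exterior derivative and the wedge product are reduced to their \(dx\wedge dt\) coefficients:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
lam = sp.symbols('lam', real=True, nonzero=True)
u = sp.Function('u')(x, t)
m = u - sp.diff(u, x, 2)                         # momentum m = u - u_xx

# coefficients f_ij of omega_i = f_i1 dx + f_i2 dt, Eq. (2.2.2)
f11 = lam/2 + 1/(2*lam) - m
f12 = u*m + lam*u/2 - u/(2*lam) - sp.Rational(1, 2) - lam**2/2
f21, f22 = sp.Integer(0), -sp.diff(u, x)
f31 = m + 1/(2*lam) - lam/2
f32 = lam**2/2 - sp.Rational(1, 2) - u/(2*lam) - lam*u/2 - u*m

d = lambda a, b: sp.diff(b, x) - sp.diff(a, t)   # dx^dt coefficient of d(a dx + b dt)
wedge = lambda a1, a2, b1, b2: a1*b2 - a2*b1     # dx^dt coefficient of the wedge product

CH = sp.diff(m, t) + 2*sp.diff(u, x)*m + u*sp.diff(m, x)   # Eq. (2.0.1)

E1 = d(f11, f12) - wedge(f31, f32, f21, f22)     # should equal  CH
E2 = d(f21, f22) - wedge(f11, f12, f31, f32)     # should equal  0
E3 = d(f31, f32) - wedge(f11, f12, f21, f22)     # should equal -CH

print(sp.simplify(E1 - CH), sp.simplify(E2), sp.simplify(E3 + CH))   # 0 0 0
```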
Example 3.4 does not necessarily show that (3.3.3) can be seen as a PSS equation. However, if we restrict the solutions of the CH equation (1.0.1) to the class \(\mathcal{B}=C^{0}(H^{4}(\mathbb{R}),[0,T))\cap C^{1}(H^{3}(\mathbb{R}),[0,T)) \subseteq C^{3,1}(\mathbb{R}\times[0,T))\) as in proposition 3.1, then the _same_ one-forms (2.2.2) give
\[d\omega_{1}-\omega_{3}\wedge\omega_{2} = (1-\partial_{x}^{2})\Big{(}u_{t}+uu_{x}+\partial_{x}\Lambda^{-2} \Big{(}u^{2}+\frac{u_{x}^{2}}{2}\Big{)}\Big{)}dx\wedge dt,\] \[d\omega_{2}-\omega_{1}\wedge\omega_{3} = 0, \tag{3.3.10}\] \[d\omega_{3}-\omega_{1}\wedge\omega_{2} = -(1-\partial_{x}^{2})\Big{(}u_{t}+uu_{x}+\partial_{x}\Lambda^{-2} \Big{(}u^{2}+\frac{u_{x}^{2}}{2}\Big{)}\Big{)}dx\wedge dt,\]
and thus (3.3.3) is a PSS equation in the sense of definition 3.4.
In fact, we have the following result.
**Theorem 3.1**.: _Let \(T>0\) and consider the function space \(\mathcal{B}=C^{0}(H^{4}(\mathbb{R}),[0,T))\cap C^{1}(H^{3}(\mathbb{R}),[0,T)) \subseteq C^{3,1}(\mathbb{R}\times[0,T))\). Then the CH equation (1.0.1) is a PSS equation modelled by \(\mathcal{B}\) if and only if the non-local evolution equation (3.3.3) is a PSS equation modelled by \(\mathcal{B}\). Moreover, they describe exactly the same PSS, in the sense that \(u\in\mathcal{B}\) is a generic solution of (1.0.1) if and only if it is a generic solution of (3.3.3)._
While theorem 3.1 tells us that the geometric object described by (3.3.7) is identical to that given by (3.3.10), it does not say when or how we can determine whether we really have a PSS from a solution. Moreover, finding a solution of a highly non-linear equation like (1.0.1) is a rather non-trivial task.
One of the advantages of the modern methods for studying evolution PDEs is the fact that we can extract much information about properties of solutions, that we do not necessarily know explicitly, from the knowledge of an initial datum. The equivalence between Cauchy problems given by proposition 3.1 and theorem 3.1 suggest that we could have qualitative information from the surface provided that we know an initial datum.
In geometric terms, an initial datum uniquely defines a curve. The tools from analysis tell us that this curve, which we know, uniquely determines a solution. Ultimately, the curve then provides in a unique way a surface determined by the graph of the solution (in the sense that the one-forms (3.2.1) are uniquely described by \(u\)). Our goal now is to study qualitatively this (unique) graph from the point of view of PSS framework.
**Theorem 3.2**.: _Let \(u_{0}\in H^{4}(\mathbb{R})\) be a non-trivial initial datum, and consider the Cauchy problem (3.3.4). Then there exists a value \(T>0\), uniquely determined by \(u_{0}\), and an open strip of height \(T\), \(\mathcal{S}=\mathbb{R}\times(0,T)\), such that the forms (2.2.2) are uniquely determined by \(u_{0}\), defined on \(\mathcal{S}\), and of class \(C^{1}\). Moreover, the Hamiltonian \(\mathcal{H}_{1}\), given in (2.0.2), provides a conserved quantity on the solutions of problem (3.3.4)._
By a non-trivial function we mean one that is not identically zero.
The geometric meaning of theorem 3.2 is the following: given a regular curve
\[\gamma(x)=(x,0,u_{0}(x)),\quad u_{0}\in H^{4}(\mathbb{R}), \tag{3.3.11}\]
let \(\Gamma:=\{\gamma(x),\,x\in\mathbb{R}\}\). Then we can uniquely determine a solution \(u(x,t)\) of the CH equation such that \(\Gamma\subseteq\overline{\text{Gr}(u)}\), where
\[\text{Gr}(u)=\{(x,t,u(x,t)),\,x\in\mathbb{R},\,t>0\}\]
and \(\overline{\text{Gr}(u)}\) denotes the closure of \(\text{Gr}(u)\).
Even though the existence of the forms (2.2.2) over a domain \(\mathcal{S}\neq\emptyset\) is a necessary condition for endowing \(\mathcal{S}\) with the structure of a PSS, it is not sufficient, since the condition \(\omega_{1}\wedge\omega_{2}\neq 0\) is fundamental for such, and theorem 3.2 says nothing about it.
It is worth mentioning that a solution \(u\) of the CH equation subject to an initial datum in \(H^{4}(\mathbb{R})\) is unique and its domain is determined by the initial datum [12, Proposition 2.7] and it has to be considered intrinsically with its domain. Moreover, the invariance of the conserved quantity \(\mathcal{H}_{1}\) in (2.0.2) implies \(u_{x}(\cdot,t)\in L^{2}(\mathbb{R})\), for each \(t\) for which the solution exists. Let us fix \(t_{0}\in(0,T)\). Then \(u_{x}(x,t_{0})\to 0\) as \(|x|\to\infty\). Since \(\mathcal{H}_{1}(0)>0\), then \(u(\cdot,t_{0})\not\equiv 0\) and cannot be constant. Therefore, \(u_{x}(\cdot,t_{0})\) cannot be constant either. As a result, we conclude the existence of two points \(x_{0}\) and \(x_{1}\) such that the mean value theorem implies \(u_{x}(x_{0},t_{0})=0\), whereas for the other we have
\(u_{x}(x_{1},t_{0})\neq 0\), say \(u_{x}(x_{1},t_{0})>0\). The continuity of \(u_{x}\) then implies the existence of an open and simply connected set \(\Omega\) such that \(u_{x}(\cdot,\cdot)\big{|}_{\Omega}>0\).
These comments prove the following result.
**Corollary 3.1**.: _Assume that \(u_{0}\) is an initial datum satisfying the conditions in theorem 3.2 and let \(u\) be the unique solution of (3.3.5). Then \(u_{x}(\cdot,\cdot)\) vanishes at a non-countable number of points of \(\mathcal{S}\). Moreover, there exist open and simply connected subsets \(\Omega\subseteq U\) such that \(u_{x}(x,t)\) does not vanish for any \((x,t)\in\Omega\)._
We have an even stronger result coming from the precedent lines.
**Corollary 3.2**.: _Any solution of (3.3.5), emanating from a non-trivial initial datum \(u_{0}\in H^{4}(\mathbb{R})\), is a generic solution in the sense of definition 3.5._
Theorem 3.2 and its corollaries show that any non-trivial initial datum determines a PSS, compare with [31, Theorem 2.2], and their proofs are given in subsection 5.2. Due to [31, Theorem 2.2], these results are somewhat expected. The same, however, cannot be said about our next result.
**Theorem 3.3**.: _Assume that \(u_{0}\in H^{4}(\mathbb{R})\) is a non-trivial, compactly supported initial datum, with \([a,b]=\text{supp}(u_{0})\), and let \(u\) be the corresponding solution of (3.3.4). Then there exist two \(C^{1}\) curves \(\gamma_{+},\gamma_{-}:[0,T)\to\overline{\mathcal{S}}\), and two \(C^{1}\) functions \(E_{+},\,E_{-}:[0,T)\to\mathbb{R}\), where \(T\in\mathbb{R}\) and \(\mathcal{S}\subseteq\mathbb{R}^{2}\) are given in Theorem 3.2, such that:_
* \(\pi_{1}(\gamma_{-}(t))<\pi_{1}(\gamma_{+}(t))\)_, for any_ \(t\in[0,T)\)_, where_ \(\pi_{1}:\mathbb{R}^{2}\to\mathbb{R}\) _is the canonical projection_ \(\pi_{1}(x,t)=x\)_;_
* \(\gamma_{\pm}^{\prime}(t)\neq 0\)_, for any_ \(t\in(0,T)\)_;_
* _On the left of_ \(\gamma_{-}\)_, the first fundamental form is given by_ \[\begin{array}{rcl}g&=&\frac{1}{4}\Big{(}\lambda+\frac{1}{\lambda}\Big{)}^{2}dx^{2}+2\Big{(}\frac{\lambda}{2}+\frac{1}{2\lambda}\Big{)}\Big{[}\Big{(}\frac{\lambda}{2}-\frac{1}{2\lambda}\Big{)}E_{-}(t)e^{x}-\frac{1}{2}-\frac{\lambda^{2}}{2}\Big{]}dxdt\\ &&+\Big{[}E_{-}(t)^{2}e^{2x}+\Big{(}\Big{(}\frac{\lambda}{2}-\frac{1}{2\lambda}\Big{)}E_{-}(t)e^{x}-\frac{1}{2}-\frac{\lambda^{2}}{2}\Big{)}^{2}\Big{]}dt^{2},\end{array}\] (3.3.12)
* _On the right of_ \(\gamma_{+}\)_, the first fundamental form is given by_ \[\begin{array}{rcl}g&=&\frac{1}{4}\Big{(}\lambda+\frac{1}{\lambda}\Big{)}^{2}dx^{2}+2\Big{(}\frac{\lambda}{2}+\frac{1}{2\lambda}\Big{)}\Big{[}\Big{(}\frac{\lambda}{2}-\frac{1}{2\lambda}\Big{)}E_{+}(t)e^{-x}-\frac{1}{2}-\frac{\lambda^{2}}{2}\Big{]}dxdt\\ &&+\Big{[}E_{+}(t)^{2}e^{-2x}+\Big{(}\Big{(}\frac{\lambda}{2}-\frac{1}{2\lambda}\Big{)}E_{+}(t)e^{-x}-\frac{1}{2}-\frac{\lambda^{2}}{2}\Big{)}^{2}\Big{]}dt^{2}.\end{array}\] (3.3.13)
If we denote by \((g)\) the matrix of the first fundamental form and fix \(t\in(0,T)\), then the metrics (3.3.12) and (3.3.13) can be written in a unified way, that is,
\[(g)=\begin{pmatrix}\frac{1}{4}\Big{(}\lambda+\frac{1}{\lambda}\Big{)}^{2}&-\frac{1}{4}(1+\lambda^{2})\Big{(}\lambda+\frac{1}{\lambda}\Big{)}\\ -\frac{1}{4}(1+\lambda^{2})\Big{(}\lambda+\frac{1}{\lambda}\Big{)}&\frac{1}{4}(1+\lambda^{2})^{2}\end{pmatrix}+O(e^{-|x|})=:(g_{0})+O(e^{-|x|}),\]
as \(|x|\to\infty\), meaning that the matrix \((g)\) is an \(O(e^{-|x|})\) perturbation of the singular matrix \((g_{0})\) as \(|x|\to+\infty\). Therefore, the metric determined by a compactly supported initial datum becomes asymptotically singular, for each fixed \(t\in(0,T)\). Hence, for \(|x|\gg 1\) and \(t\) fixed, the components of the metric behave like the famous peakon solutions of the CH equation.
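A sympy sketch of this asymptotic behaviour on the left of \(\gamma_{-}\), assuming only the tail \(u=E_{-}(t)e^{x}\) from Lemma 4.6 and the coefficients of (2.2.3):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
lam = sp.symbols('lam', real=True, nonzero=True)
E = sp.Function('E')(t)

u = E*sp.exp(x)                           # tail behaviour on the left of gamma_-
m = sp.expand(u - sp.diff(u, x, 2))       # the momentum vanishes there
print(m)                                  # 0

A = lam/2 + 1/(2*lam)
F = u*m + lam*u/2 - u/(2*lam) - sp.Rational(1, 2) - lam**2/2
g11, g12, g22 = (A - m)**2, (A - m)*F, sp.diff(u, x)**2 + F**2

# candidate limit matrix (g0): every u-dependent term above decays like e^{x} as x -> -oo
g0 = sp.Matrix([[sp.Rational(1, 4)*(lam + 1/lam)**2,
                 -sp.Rational(1, 4)*(1 + lam**2)*(lam + 1/lam)],
                [-sp.Rational(1, 4)*(1 + lam**2)*(lam + 1/lam),
                 sp.Rational(1, 4)*(1 + lam**2)**2]])

print(sp.simplify(g11 - g0[0, 0]))        # 0, since m = 0 on this region
print(sp.simplify(g0.det()))              # 0: the limiting matrix (g0) is singular
```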
**Theorem 3.4**.: _If \(u_{0}\in H^{4}(\mathbb{R})\) and for some \(x_{0}\in\mathbb{R}\), we have_
\[u_{0}^{\prime}(x_{0})<-\frac{\|u_{0}\|_{1}}{\sqrt{2}}, \tag{3.3.14}\]
_then there exists \(0<T_{m}<\infty\) such that the metric (2.2.3), determined by the solution of (3.3.4), blows up as \(t\to T_{m}\). More precisely, the coefficients \(g_{11}\) and \(g_{12}\) are uniformly bounded whereas_
\[\liminf_{t\to T_{m}}\Big{(}\sup_{x\in\mathbb{R}}g_{22}(x,t)\Big{)}=+\infty. \tag{3.3.15}\]
Expression (3.3.15) says that the metric blows up for a finite value of \(t\) and then, the surface can only be defined on a proper subset of \(\mathbb{R}^{2}\).
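Initial data satisfying (3.3.14) are easy to exhibit; the sketch below checks the condition numerically for the hypothetical profile \(u_{0}(x)=-xe^{-x^{2}/(2\sigma^{2})}\) with \(\sigma=0.8\):

```python
import numpy as np

x = np.linspace(-30, 30, 60001)
dx = x[1] - x[0]

sigma = 0.8
u0 = -x * np.exp(-x**2 / (2 * sigma**2))     # hypothetical initial datum
du0 = np.gradient(u0, dx)                    # u0'

h1 = np.sqrt(np.sum(u0**2 + du0**2) * dx)    # ||u0||_1 = (int u0^2 + (u0')^2 dx)^(1/2)
print(du0.min(), -h1 / np.sqrt(2))           # about -1.0 < -0.87, so (3.3.14) holds
```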
While Theorem 3.3 tells us that the metric determined by an initial datum becomes asymptotically singular for each fixed \(t\) as long as the solution exists, theorem 3.4 shows us a different sort of singularity, in which the metric blows up over a strip of finite height. Our next result, however, informs us that a compactly supported initial datum actually leads to a singularity of the metric similar to that established in Theorem 3.4.
**Theorem 3.5**.: _If \(u_{0}\in H^{4}(\mathbb{R})\) is a non-trivial, compactly supported initial datum, then the metric (2.2.3), determined by the solution of (3.3.4), blows up within a strip of finite height._
Theorems 3.4 and 3.5 tell us the existence of a height for which the co-frame of dual forms \(\omega_{1}\) and \(\omega_{2}\) are well defined, but their corresponding metric becomes unbounded near some finite height, meaning that the metric, and the forms as well, are only well defined on a certain strip with infinite length, but finite height.
A completely different scenario is given by our next result.
**Theorem 3.6**.: _Let \(m_{0}\in H^{2}(\mathbb{R})\cap L^{1}(\mathbb{R})\) and \(u\) be the corresponding solution of (3.3.4). If \(m_{0}(x)\geq 0\) or \(m_{0}(x)\leq 0\), then (2.2.2) are \(C^{1}\) one-forms defined on \(\mathcal{S}=\mathbb{R}\times(0,\infty)\). Moreover, for any \(R>0\), there exists a simply connected set \(\mathcal{R}\subseteq\mathbb{R}^{2}\) such that \(\sqrt{x^{2}+t^{2}}>R\), for any \((x,t)\in\mathcal{R}\), and \(u_{x}\big{|}_{\mathcal{R}}>0\) or \(u_{x}\big{|}_{\mathcal{R}}<0\)._
Theorem 3.6 says that subsets of the domain of the solution of the CH equation that can be endowed with a PSS structure cannot be contained in any compact set. In view of this result, regions arbitrarily far away from the origin may be endowed with the structure of a PSS.
## 4 Preliminaries
In this section we present auxiliary results that will help us to prove technical theorems and will be of vital importance in order to establish our main results.
### Conserved quantities
The topics we discuss make implicit or explicit use of certain quantities that are conserved for solutions with sufficient decay at infinity. For this reason we recall them from a geometric perspective.
A differential form \(\alpha\) is said to be _closed_ if \(d\alpha=0\), whereas it is called _exact_ when \(\alpha=d\beta\). Two closed forms are said to be equivalent if their difference is an exact one.
Given a differential equation (3.3.1), we say that a one-form \(\alpha=C^{0}dx+C^{1}dt\), whose coefficients depend on \((x,t,u)\) and derivatives of \(u\) up to a certain order, is a _conserved current_ if it is closed on the solutions \(u\) of the equation. In particular, note that
\[d\alpha=(\partial_{x}C^{1}-\partial_{t}C^{0})dx\wedge dt\]
and if \(\alpha\) is closed on the solution of the equation, then
\[(\partial_{x}C^{1}-\partial_{t}C^{0})\Big{|}_{\mathcal{E}=0}=0,\]
which is a conservation law for the equation. That said, it is a structural property of the equation, e.g. see [62, section 3].
Conserved currents differing from another one by an exact differential form determine exactly the same conservation law.
**Example 4.1**.: _Consider the following one-forms_
\[\alpha=u\,dx+\Big{(}\frac{3}{2}u^{2}-u_{tx}-uu_{xx}-\frac{1}{2}u_{x}^{2} \Big{)}\,dt\]
_and_
\[\beta=\frac{u^{2}+u_{x}^{2}}{2}\,dx+\Big{(}u^{3}-u^{2}u_{xx}-uu_{tx}\Big{)}\,dt.\]
_A straightforward calculation shows that_
\[d\alpha=(m_{t}+2u_{x}m+um_{x})dx\wedge dt\ \ \text{and}\ \ d\beta=u(m_{t}+2u_{x}m+um_{x})dx \wedge dt.\]
_It is easy to see that on the solutions of the CH equation (2.0.1) these two one-forms are closed. Finally, observe that_
\[\tilde{\alpha}=(u-u_{xx})\,dx+\Big{(}\frac{3}{2}u^{2}-uu_{xx}-\frac{1}{2}u_{x} ^{2}\Big{)}\,dt\]
_is equivalent to the one-form \(\alpha\), since \(\tilde{\alpha}=\alpha-d(u_{x})\)._
Integrating the conservation law, we obtain
\[\frac{d}{dt}\int_{\mathbb{R}}C^{0}dx=\int_{\mathbb{R}}\Big{(}\frac{d}{dx}C^{1} \Big{)}dx=C^{1}\Big{|}_{-\infty}^{+\infty}.\]
If the quantity \(C^{1}\) has enough decaying at infinity and the integral in the left hand side of the equation above converges, then we have
\[\frac{d}{dt}\int_{\mathbb{R}}C^{0}dx=0.\]
Let
\[Q(t):=\int_{\mathbb{R}}C^{0}dx.\]
Assuming that \(Q\) is defined for \(t\in I\), where \(I\) is a connected set, then it is called _conserved quantity_. In particular, if \(0\in I\), we have \(Q(t)=Q(0)\) for any other \(t\in I\).
Returning to our example 4.1, from the forms \(\alpha\) and \(\tilde{\alpha}\) we conclude that
\[\mathcal{H}_{0}(t)=\int_{\mathbb{R}}u(x,t)dx=\int_{\mathbb{R}}m(x,t)dx \tag{4.1.1}\]
is a conserved quantity, whereas the first Hamiltonian in (2.0.2) is the conserved quantity emanating from the conserved current \(\beta\). Note that these quantities are only conserved for solutions decaying sufficiently fast as \(|x|\to\infty\).
While a conservation law is a structural property of the equation, the same cannot be said for the conserved quantity, e.g., see [62] for a more detailed discussion. However, given a solution of the CH equation for which either (4.1.1) or (2.0.2) is conserved, if the corresponding functional exists for the initial datum, then this property persists and, even stronger, it remains invariant as long as the solution exists.
### Auxiliary and technical results
**Lemma 4.1**.: ([12, Proposition 2.7]) _If \(u_{0}\in H^{4}(\mathbb{R})\), then there exists a maximal time \(T=T(u_{0})>0\) and a unique solution \(u\) to the Cauchy problem (3.3.5) such that \(u=u(\cdot,u_{0})\in C^{0}(H^{4}(\mathbb{R}),[0,T))\cap C^{1}(H^{3}(\mathbb{R}),[0,T))\). Moreover, the solution depends continuously on the initial data, in the sense that the mapping \(u_{0}\mapsto u(\cdot,u_{0}):H^{4}(\mathbb{R})\to C^{0}(H^{4}(\mathbb{R}),[0,T) )\cap C^{1}(H^{3}(\mathbb{R}),[0,T))\) is continuous._
**Remark 4.1**.: _We observe that if, instead of \(u_{0}\in H^{4}(\mathbb{R})\), we assume \(u_{0}\in H^{s}(\mathbb{R})\), \(s>3/2\), we would then conclude that \(u\in C^{0}(H^{s}(\mathbb{R}),[0,T))\cap C^{1}(H^{s-1}(\mathbb{R}),[0,T))\), for the same \(T\), see [60, Theorem 3.2]._
**Lemma 4.2**.: ([30, Theorem 1.1]) _Assume that \(m_{0}\in H^{2}(\mathbb{R})\cap L^{1}(\mathbb{R})\). If \(m_{0}(x)\geq 0\) or \(m_{0}(x)\leq 0\), for any \(x\in\mathbb{R}\), then the corresponding solution \(u\) of the CH equation exists globally. In other words, the solution \(u\) of the CH equation belongs to the class \(C^{0}(H^{4}(\mathbb{R}),[0,\infty))\cap C^{1}(H^{3}(\mathbb{R}),[0,\infty))\)._
**Lemma 4.3**.: ([14, Theorem 3.1]) _Let \(u_{0}\in H^{3}(\mathbb{R})\) and \([0,T)\) be the maximal interval of existence of the corresponding solution of (3.3.5). Then_
\[\left\{\begin{array}{rcl}q_{t}(x,t)&=&u(q,t),\\ \\ q(x,0)&=&x,\end{array}\right. \tag{4.2.1}\]
_has a unique solution \(q\in C^{1}(\mathbb{R}\times[0,T),\mathbb{R})\). Moreover, for every fixed \(t\in[0,T)\), the function \(q(\cdot,t)\) is an increasing diffeomorphism of the line._
**Lemma 4.4**.: ([13, Theorem 4.2]) _Given an initial datum \(u_{0}\in H^{3}(\mathbb{R})\) satisfying (3.3.14), then the corresponding solution \(u\) of the CH equation subject to \(u(x,0)=u_{0}(x)\) breaks at finite time, that is, there exists a finite time \(T_{m}>0\) such that_
\[\liminf_{t\to T_{m}}\Big{(}\inf_{x\in\mathbb{R}}u_{x}(x,t)\Big{)}=-\infty. \tag{4.2.2}\]
**Lemma 4.5**.: ([13, Theorem 2.1]) _Let \(T>0\) and \(v\in C^{1}(H^{2}(\mathbb{R}),[0,T))\) be a given function. Then, for any \(t\in[0,T)\), there exists at least one point \(\xi(t)\in\mathbb{R}\) such that_
\[y(t):=\inf_{x\in\mathbb{R}}v_{x}(x,t)=v_{x}(\xi(t),t) \tag{4.2.3}\]
_and the function \(y\) is almost everywhere differentiable in \((0,T)\), with \(y^{\prime}(t)=v_{tx}(\xi(t),t)\) almost everywhere in \((0,T)\)._
**Lemma 4.6**.: ([37, Theorem 1.4]) _If \(u_{0}\in H^{4}(\mathbb{R})\), is compactly supported, then there exist \(C^{1}\) real valued functions \(E_{\pm}\) such that_
\[u(x,t)=\left\{\begin{array}{ll}E_{+}(t)e^{-x},&\mbox{for}\quad x>q(b,t),\\ \\ E_{-}(t)e^{x},&\mbox{for}\quad x<q(a,t),\end{array}\right.\]
_where \(q(\cdot,\cdot)\) is the function given in Lemma 4.3, for any \(t>0\) such that the solution exists._
The original statement of Lemma 4.6 assumes \(s>5/2\) and asserts that the functions \(E_{\pm}\) are continuous. Its validity for \(s=4\), which is our case, is then immediate, and a careful analysis of the proof of [37, Theorem 1.4] reveals that the functions are continuously differentiable.
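The mechanism behind Lemma 4.6 is that \(u=\Lambda^{-2}m=g*m\) inherits exponential tails whenever the momentum is compactly supported (for \(t>0\) the momentum of the solution is supported in \([q(a,t),q(b,t)]\)). A small numpy sketch with a hypothetical compactly supported momentum:

```python
import numpy as np

x = np.linspace(-15, 15, 3001)
dx = x[1] - x[0]

# hypothetical momentum supported on [-1, 1]
m = np.where(np.abs(x) <= 1, (1 - x**2)**2, 0.0)
g = 0.5 * np.exp(-np.abs(x))                   # kernel of Lambda^{-2}, Eq. (3.1.2)
u = np.convolve(g, m, mode='same') * dx        # u = g * m

band = (x > 2) & (x < 10)                      # strictly to the right of supp(m)
print(np.ptp(u[band] * np.exp(x[band])))       # ~0: u(x) = E_+ e^{-x} on this band
```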
## 5 Proof of the main results
### Proof of theorem 3.1
From (3.3.2), \(u\in\mathcal{B}\) is a solution of (1.0.1) in the sense of definition 3.2 if and only if it is a solution of (3.3.3) in the same sense. Let \(w_{0}\in H^{4}(\mathbb{R})\), \(u_{1}\) and \(u_{2}\) be the corresponding solutions of (1.0.1) and (3.3.3), respectively, subject to the same initial condition \(u_{1}(x,0)=u_{2}(x,0)=w_{0}(x)\). Proposition 3.1 combined with lemma 4.1 inform us that \(u_{1}=u_{2}\) and this is the only solution for both equations satisfying the given initial condition. As a result, they determine the same forms \(\omega_{1},\omega_{2},\omega_{3}\), and the same PSS as well.
### Proof of theorem 3.2
Lemma 4.1, jointly with remark 4.1 and Theorem 3.1, assures that (3.3.5) has a unique solution \(u\in C^{0}(H^{4}(\mathbb{R}),[0,T))\cap C^{1}(H^{3}(\mathbb{R}),[0,T))\subseteq C ^{3,1}(\mathbb{R}\times[0,T))\), for a \(T\) uniquely determined by \(u_{0}\). We then conclude that the one-forms (2.2.2) are \(C^{1}\) and defined on the open and connected set \(\mathcal{S}=\mathbb{R}\times(0,T)\).
Since \(u_{0}\in H^{4}(\mathbb{R})\), we have \(\|u_{0}\|_{1}<\infty\). Moreover, the functional \(\mathcal{H}_{1}(t)\), given in (2.0.2), is constant, that is, \(\mathcal{H}_{1}(t)=\mathcal{H}_{1}(0)\), \(t\in(0,T)\). Given that \(t\mapsto\mathcal{H}_{1}(t)=\|u(\cdot,t)\|_{1}^{2}/2\) is invariant, we conclude \(\|u(\cdot,t)\|_{1}=\|u_{0}\|_{1}\).
### Proof of Theorem 3.3
Let \(u\) be the corresponding solution of the CH equation subject to \(u(x,0)=u_{0}(x)\) and \(q\) be the function given by Lemma 4.3.
Define \(\varphi(x,t):\mathbb{R}\times[0,T)\rightarrow\mathbb{R}\times[0,T)\) by \(\varphi(x,t)=(q(x,t),t)\). Then \(\varphi\) is a bijection fixing \(\mathbb{R}\times\{0\}\) and \(\varphi\big{|}_{\mathbb{R}\times(0,T)}\) is a \(C^{1}\) diffeomorphism, see [31, Theorem 3.1].
Let \(\gamma_{\pm}:[0,T)\rightarrow\overline{\mathcal{S}}\) be given by \(\gamma_{-}(t)=\varphi(a,t)\) and \(\gamma_{+}(t)=\varphi(b,t)\). Then \(\gamma_{-}^{\prime}(t)=(u(\varphi(a,t)),1)\) and \(\gamma_{+}^{\prime}(t)=(u(\varphi(b,t)),1)\). Again, by Lemma 4.3 we have
\[\pi_{1}(\gamma_{-}(t))=q(a,t)<q(b,t)=\pi_{1}(\gamma_{+}(t)),\]
for each \(t\in(0,T)\).
Let \(p\in\mathcal{S}\) be a point on the left of \(\gamma_{-}\). This then implies that
\[x:=\pi_{1}(p)<\pi_{1}(\gamma_{-}(t))=q(a,t).\]
By Lemma 4.6 we have \(u(x,t)=E_{-}(t)e^{x}\), which, substituted into (2.2.3), gives (3.3.12). To get (3.3.13) we proceed analogously, and for this reason the details are omitted.
### Proof of theorem 3.4
Let us define
\[y(t)=\inf_{x\in\mathbb{R}}u_{x}(x,t). \tag{5.4.1}\]
By lemma 4.5 we can find \(\xi(t)\) (despite the notation, it is not a function, see [13, Theorem 2.1]) such that \(y(t)=u_{x}(\xi(t),t)\), and \(y\) is almost everywhere differentiable. Moreover, [13, Theorem 4.2] shows in its demonstration that \(y\) is Lipschitz and \(y(0)\leq u_{0}^{\prime}(x_{0})<0\).
Differentiating (3.3.3) with respect to \(x\) and using \(y(t)\) above, we obtain
\[y^{\prime}(t)+\frac{y(t)^{2}}{2}=u(\xi(t),t)^{2}-\Big{(}\Lambda^{-2}\Big{(}u^{2}+\frac{u_{x}^{2}}{2}\Big{)}\Big{)}(\xi(t),t).\]
In [12, page 240] it was proved that \(y(t)\) satisfies the differential inequality
\[y^{\prime}(t)\leq-\frac{\epsilon}{4}y(t)^{2},\]
for some \(\epsilon\in(0,1)\), implying that it is a negative and non-increasing function satisfying the inequality
\[\frac{\epsilon}{4}t+\frac{1}{y(0)}\leq\frac{1}{y(t)}. \tag{5.4.2}\]
Since \(y(t)<y(0)<0\), then (5.4.2) is only valid for a finite range of values for \(t\). As a result, we conclude the existence of \(T_{m}\) such that (5.4.2) holds for \(t\in(0,T_{m})\), and then, the solution \(u\), as a function of \(t\), is only defined on \((0,T_{m})\).
On the other hand, (5.4.2) can be seen in a slightly different way, since it implies
\[0\leq\frac{\epsilon}{4}t-\frac{1}{y(t)}\leq-\frac{1}{y(0)},\]
which tells us that \(y(t)\to-\infty\) before \(t\) reaches \(-4/(\epsilon y(0))\) (which gives an upper bound to \(T_{m}\)). As a result, if \((t_{k})_{k}\subseteq(0,T_{m})\) is a convergent sequence to \(T_{m}\), we then have \(y(t_{k})\to-\infty\) as \(k\to\infty\). This, in particular, is nothing but (4.2.2).
Let us evaluate the coefficients \(g_{ij}\) of the metric (2.2.3) at \(x=\xi(t)\). The Sobolev Embedding Theorem (see lemma 3.1) implies that \(u\) is uniformly bounded in \((0,T_{m})\) by \(\|u_{0}\|_{1}\). Since \(x=\xi(t)\) is a point of minima of the function \(u_{x}(\cdot,t)\), we conclude that \(u_{xx}(\xi(t),t)=0\) and thus, \(m(\xi(t),t)=u(\xi(t),t)\) is bounded as well. As a result, we conclude that both \(g_{11}(\xi(t),t)\) and \(g_{12}(\xi(t),t)\) are uniformly bounded for \(t\in(0,T_{m})\).
A different situation occurs with \(g_{22}\). The previous arguments show that \(g_{22}(\xi(t),t)=u_{x}(\xi(t),t)^{2}+B(u(\xi(t),t))\), where \(B(u(\xi(t),t))\) are the uniformly bounded remaining terms of the metric in \((0,T_{m})\).
For any sequence \((t_{k})_{k}\subseteq(0,T_{m})\) convergent to \(T_{m}\), we have
\[\sup_{x\in\mathbb{R}}g(x,t_{k})\geq g_{22}(\xi(t_{k}),t_{k})=u_{x}(\xi(t_{k} ),t_{k})^{2}+B(u(\xi(t_{k}),t_{k}))\to+\infty\]
as \(k\to\infty\), showing that
\[\sup_{(x,t)\in\mathbb{R}\times[0,T_{m})}g_{22}(x,t)=\lim_{t\to T_{m}}\inf_{\tau\geq t }\Big{(}\sup_{x\in\mathbb{R}}g_{22}(x,\tau)\Big{)}=+\infty.\]
### Proof of theorem 3.5
From (2.2.2) we have
\[f_{32}(x,t)=-u_{x}(x,t),\]
and, as a result,
\[\|f_{32}(\cdot,t)\|_{\infty}=\|u_{x}(\cdot,t)\|_{\infty}. \tag{5.5.1}\]
Therefrom, for each \(t\) such that the solution exists, we have
\[\int_{0}^{t}\|f_{32}(\cdot,\tau)\|_{\infty}\,d\tau=\int_{0}^{t}\|u_{x}(\cdot, \tau)\|_{\infty}\,d\tau. \tag{5.5.2}\]
By Theorem 3.1 and the conditions on the initial datum, we conclude that the function defined in (5.5.2) is continuous. Let us prove the existence of a height \(T_{m}<\infty\) such that \(\|f_{32}(\cdot,t)\|_{\infty}\to\infty\) as \(t\to T_{m}\).
The maximal height \(T_{m}\) corresponds to the maximal time of existence of the solution. Following [37, Corollary 1.1] or [3, Theorem 6.1], the conditions on the initial datum in Theorem 3.5 imply that the solution \(u\) can only exist for a finite time \(T_{m}\), which gives the existence of a maximal height \(T_{m}\) for the strip in Theorem 3.2.
By [37, Corollary 1.1, Eq. (1.20)] we then have
\[\int_{0}^{T_{m}}\|u_{x}(\cdot,\tau)\|_{\infty}\,d\tau=\infty.\]
On the other hand, the singularities of the solution arise only in the form of wave breaking. Moreover, we have the equivalence (e.g., see [50, page 525, Eq. (3.7)])
\[\int_{0}^{T_{m}}\|u_{x}(\cdot,\tau)\|_{\infty}\,d\tau=\infty\Longleftrightarrow \int_{0}^{T_{m}}\|y(\tau)\|_{\infty}\,d\tau=\infty, \tag{5.5.3}\]
where \(y(\cdot)\) is given by (5.4.1). Let \((t_{k})_{k\in\mathbb{N}}\subseteq(0,T_{m})\) be any sequence convergent to \(T_{m}\). By (5.5.3), (5.5.2), (5.4.1) and Lemma 4.5, we have that \(y(t_{k})=u_{x}(\xi(t_{k}),t_{k})\) is finite and
\[\int_{0}^{t_{k}}\|f_{32}(\cdot,\tau)\|_{\infty}\,d\tau<\infty,\]
for any \(k\in\mathbb{N}\), but
\[\lim_{k\to\infty}\int_{0}^{t_{k}}\|f_{32}(\cdot,\tau)\|_{\infty}\,d\tau=\infty,\]
meaning that \(|f_{32}(x,t)|\) becomes unbounded near some point of the line \(\mathbb{R}\times\{T_{m}\}\). Since \(g_{22}(x,t)\geq f_{32}(x,t)^{2}\), we have
\[\sup_{x\in\mathbb{R}}g(x,t_{k})\geq f_{32}(\xi(t_{k}),t_{k})\to\infty\]
as \(k\to\infty\), and we then get again
\[\sup_{(x,t)\in\mathbb{R}\times[0,T_{m})}g_{22}(x,t)=\lim_{t\to T_{m}}\inf_{\tau\geq t }\Big{(}\sup_{x\in\mathbb{R}}g_{22}(x,\tau)\Big{)}=+\infty, \tag{5.5.4}\]
which proves the result.
We can give a slightly different proof starting from (5.5.3). In fact, that condition implies wave breaking of the solution. According to McKean [48, 49], this happens if and only if the points at which \(m_{0}(x)\) is positive lie to the left of those at which \(m_{0}(x)\) is negative, see also [38, Theorem 1.1]. In other words, for some \(x_{0}\in\mathbb{R}\), we have \(m_{0}(x)\geq 0\) for \(x\leq x_{0}\), whereas \(m_{0}(x)\leq 0\) for \(x\geq x_{0}\). By [31, Theorem 3.3], we get back to (5.5.4).
### Proof of theorem 3.6
By lemma 4.2, \(u\) is a global solution in the class \(C^{0}(H^{4}(\mathbb{R}),[0,\infty))\cap C^{1}(H^{3}(\mathbb{R}),[0,\infty))\). In particular, it is defined on \(\mathcal{S}=\mathbb{R}\times(0,\infty)\) and, therefore, the coefficients \(f_{ij}\), \(1\leq i\leq 3\), \(1\leq j\leq 2\), of the one-forms (2.2.2) belong to the class \(C^{3,1}(\mathbb{R}\times(0,\infty))\subseteq C^{1}(\mathbb{R}\times(0,\infty))\), and then, \(g_{kl}\in C^{1}(\mathbb{R}\times(0,\infty))\), \(1\leq k,l\leq 2\).
By corollary 3.1 we know that \(\{\omega_{1},\omega_{2}\}\) cannot be linearly independent everywhere. Let \(R>0\), \(\overline{B}_{R}(0):=\{(x,t)\in U;\ x^{2}+t^{2}\leq R^{2}\}\), and \(W_{R}:=U\setminus\overline{B}_{R}(0)\).
Suppose that for some \(R>0\) we had \(u_{x}\big{|}_{W_{R}}=0\). Then \(u\big{|}_{W_{R}}=c\), for some \(c\in\mathbb{R}\), and since \(u\in L^{2}(\mathbb{R})\), we would conclude that \(c=0\), resulting in \(u\big{|}_{\mathcal{R}}=0\), for any open set \(\mathcal{R}\subseteq W_{R}\). Therefore, we can find numbers \(t_{0}>R\) and \(b>a>R\) such that \([a,b]\times\{t_{0}\}\subseteq\mathcal{R}\), \(u(x,t_{0})=u_{t}(x,t_{0})=0\), \(a\leq x\leq b\). From (3.3.3) we obtain
\[\partial_{x}\Lambda^{-2}\Big{(}u^{2}+\frac{u_{x}^{2}}{2}\Big{)}(x,t)=-\Big{(} u_{t}+uu_{x}\Big{)}(x,t).\]
Evaluating at \(t=t_{0}\) and letting \(x\in(a,b)\), we conclude that
\[F(x):=\partial_{x}\Lambda^{-2}\Big{(}u^{2}+\frac{u_{x}^{2}}{2}\Big{)}(x,t_{0}) =-\Big{(}u_{t}+uu_{x}\Big{)}(x,t_{0})\equiv 0,\]
implying \(F^{\prime}(x)=0\), \(x\in(a,b)\). Since \(\partial_{x}^{2}\Lambda^{-2}=\Lambda^{-2}-1\), we get
\[0=F^{\prime}(x)=\Lambda^{-2}\Big{(}u^{2}+\frac{u_{x}^{2}}{2}\Big{)}(x,t_{0})=\frac{1}{2}\int_{\mathbb{R}}e^{-|x-y|}\Big{(}u^{2}+\frac{u_{x}^{2}}{2}\Big{)}(y,t_{0})dy,\quad x\in(a,b),\]
wherefrom we arrive at the conclusion that \(u(x,t_{0})\equiv 0\), \(x\in\mathbb{R}\). This would then imply \(\|u\|_{1}=0\) at \(t=t_{0}\). The invariance of \(\|u\|_{1}\) then implies \(u\equiv 0\), which conflicts with \(u_{0}\) being a non-trivial initial datum.
The contradiction above forces us to conclude that, for any \(R>0\), we can find \((x_{R},t_{R})\in W_{R}\) such that \(u_{x}(x_{R},t_{R})\neq 0\), meaning that we either have \(u_{x}(x_{R},t_{R})>0\) or \(u_{x}(x_{R},t_{R})<0\). Since \(u_{x}\) is continuous, we can find a neighborhood \(V_{R}\) of \((x_{R},t_{R})\) such that \(u_{x}\big{|}_{V_{R}}\) has the same sign.
## 6 Finite height vs finite time of existence
The results proved in [31] and those in theorems 3.4 and 3.5 suggest that the metric blows up precisely when the solution develops wave breaking. This is, indeed, the case.
**Theorem 6.1**.: _Let \(u\in C^{0}(H^{4}(\mathbb{R}),[0,T))\cap C^{1}(H^{3}(\mathbb{R}),[0,T))\) be a solution of the CH equation and \(g_{22}\) be the corresponding component of the metric tensor given in (2.2.3). Then \(g_{22}\) blows up within a strip of finite height if and only if \(u\) breaks in finite time._
Proof.: Let \(q\) be the function given in Lemma 4.3 and \(\varphi(x,t)=(q(x,t),t)\) be the bijection given in the proof of Theorem 3.3 (see subsection 5.3). As long as the solution exists for \(t>0\) and taking (2.0.1) into account, we have
\[\frac{d}{dt}m(\varphi(x,t))=(m_{t}+um_{x})(\varphi(x,t))=-2(u_{x}m)(\varphi(x, t)),\]
that is,
\[m(\varphi(x,t))=m_{0}(x)e^{-2\int_{0}^{t}u_{x}(\varphi(x,\tau))d\tau}. \tag{6.0.1}\]
From (2.2.3) we obtain
\[u_{x}(x,t)^{2}\leq g(x,t)\leq u_{x}(x,t)^{2}+(um)(x,t)^{2}+\text{lower order terms},\]
that, after neglecting the lower order terms, implies
\[|u_{x}(x,t)|\leq\sqrt{g(x,t)}\leq|u_{x}(x,t)|+|(um)(x,t)|. \tag{6.0.2}\]
Since \(u\in C^{0}(H^{4}(\mathbb{R}),[0,T))\cap C^{1}(H^{3}(\mathbb{R}),[0,T))\) and \(m_{0}(x)=m(x,0)=u(x,0)-u_{xx}(x,0)\), Lemma 3.1 implies that \(\|m_{0}\|_{L^{\infty}}<\infty\). Moreover, we also have
\[\|u(\cdot,t)\|_{L^{\infty}}<\|u\|_{1}=\sqrt{2\mathcal{H}_{1}(0)},\]
where \(\mathcal{H}_{1}(t)\) is given by (2.0.2). These two estimates, put together into (6.0.2), imply
\[|u_{x}(x,t)|\leq\sqrt{g(x,t)}\leq|u_{x}(x,t)|+\sqrt{2\mathcal{H}_{1}(0)}\|m_{ 0}\|_{L^{\infty}}e^{2\int_{0}^{t}\|u_{x}(\cdot,\tau)\|_{L^{\infty}}d\tau}, \tag{6.0.3}\]
where we used that \(\|u_{x}(\cdot,\tau)\|_{L^{\infty}}=\|u_{x}(\varphi(x,\tau))\|_{L^{\infty}}\).
Inequalities (6.0.3) combined with (5.5.3) show that \(g_{22}\) blows up in a strip of finite height if and only if \(u_{x}\) blows up in finite time. Hence, we have
\[\sup_{(x,t)\in\mathbb{R}\times[0,T)}g_{22}(x,t)=\infty\Longleftrightarrow\liminf _{t\to T}\big{(}\inf_{x\in\mathbb{R}}u_{x}(x,t)\big{)}=-\infty.\]
In particular, the maximal height of the strip coincides with the maximal time of existence of the solutions.
## 7 Examples
We give two examples illustrating qualitative aspects of the surfaces determined by solutions of the CH equation once an initial datum is known.
**Example 7.1**.: _Let us consider \(m_{0}(x)=e^{-x^{2}}\). As a consequence of (6.0.1), \(m>0\) and so is the corresponding solution \(u\). As a result of theorem 3.1 and its corollaries, \(u\) is a generic solution of the CH equation in the sense of definition 3.4._
_By theorem 3.6, the one-forms (2.2.2) are defined on \(\mathcal{S}=\mathbb{R}\times(0,T)\) and they endow an infinite number of simply connected open sets \(\Omega\subseteq U\) with the structure of a PSS._
**Example 7.2**.: _Let us now consider the family of functions \(\phi_{n}(x)=e^{-nx^{2}}\), \(n\in\mathbb{N}\) and \(x\in\mathbb{R}\). As pointed out in [13, Example 4.3], for \(n\) sufficiently large, we have_
\[\phi_{n}^{\prime}(x)<-\frac{\|\phi_{n}\|_{1}}{\sqrt{2}}. \tag{7.0.1}\]
_Fix \(n\) large enough so that (7.0.1) holds and choose \(u_{0}=\phi_{n}\). As a consequence of theorem 3.4, we know that \(g_{22}\) blows up for some \(x\in\mathbb{R}\) as \(t\) approaches some value \(T_{m}\) determined by the initial datum._
We close this section with some words about the maximal time \(T_{m}\) of existence (lifespan) of a solution of the CH equation emanating from an initial datum in Sobolev spaces. From theorem 3.2 we know that \(u\), and the metric as well, will become unbounded before reaching a certain value determined by the initial datum. The question is: do we have any sort of information about how it is determined? An answer for this question is provided by [19, Theorem 0.1], which shows a lower bound for it:
\[T_{m}\geq T(u_{0}):=-\frac{2}{\|u_{0}\|_{1}}\arctan\Big{(}\frac{\|u_{0}\|_{1} }{\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)}\Big{)}.\]
For the initial datum \(u_{0}(x)=e^{-nx^{2}}\) considered in example 7.2, we have
\[T(u_{0})=2\sqrt[4]{\frac{2n}{\pi(n+1)^{2}}}\arctan\Big{(}\sqrt[4]{\frac{\pi e^ {2}(n+1)^{2}}{8n^{3}}}\Big{)}.\]
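For completeness, we sketch the two quantities entering the formula above, assuming the convention \(\|u\|_{1}^{2}=\int_{\mathbb{R}}(u^{2}+u_{x}^{2})\,dx\) (compare (2.0.2)). For \(u_{0}(x)=e^{-nx^{2}}\) a direct computation with Gaussian integrals gives

\[\|u_{0}\|_{1}^{2}=(n+1)\sqrt{\frac{\pi}{2n}},\qquad\inf_{x\in\mathbb{R}}u_{0}^{\prime}(x)=u_{0}^{\prime}\Big{(}\frac{1}{\sqrt{2n}}\Big{)}=-\sqrt{\frac{2n}{e}},\]

and substituting these values into the lower bound \(T(u_{0})\) gives the expression displayed above.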
In particular, for \(n\gg 1\), we have
\[T(u_{0})=\sqrt{\frac{2e}{n}}+O(n^{-1}).\]
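A sketch of this expansion: the argument of the arctangent above is of order \(n^{-1/4}\), so that \(\arctan(z)=z+O(z^{3})\) together with \((n+1)^{2}/n^{2}=1+O(n^{-1})\) gives

\[T(u_{0})=2\Big{(}\frac{2}{\pi n}\Big{)}^{1/4}\Big{(}\frac{\pi e^{2}}{8n}\Big{)}^{1/4}\big{(}1+O(n^{-1/2})\big{)}=\sqrt{\frac{2e}{n}}+O(n^{-1}).\]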
As a consequence of the quantities shown above, for the initial datum given in example 7.2 we can guarantee that only certain open and simply connected sets properly contained in
\[\mathcal{S}=\mathbb{R}\times\Big{(}0,2\sqrt[4]{\frac{2n}{\pi(n+1)^{2}}}\arctan \Big{(}\sqrt[4]{\frac{\pi e^{2}(n+1)^{2}}{8n^{3}}}\Big{)}\Big{)}\]
can be endowed with a PSS structure.
## 8 Discussion
The connection between surfaces of constant Gaussian curvature \(\mathcal{K}=-1\) and differential equations has a long history in differential geometry, dating back to the first half of the nineteenth century [61, page 17], see also [11, chapter 9] and [65, chapter 1].
Roughly half a century ago, a hot topic in mathematical physics emerged after a certain hydrodynamic model, more precisely the KdV equation, was shown to have remarkable properties [33]. In [47] there is a survey of results about the KdV equation and its importance in nourishing a new-born field whose best known representative is the KdV equation itself.
An explosion of works exploring properties of the KdV equation was seen during the 1960s and 1970s after [33], while other quite special equations sharing certain properties with the KdV were also discovered. In this context the AKNS method [1] was proposed, which reinvigorated and boosted the field that emerged after the KdV, currently called that of _integrable equations_ (very roughly and naively speaking, equations sharing properties with the KdV equation). By that time, interest in this sort of equation had spread across fields, attracting people more inclined to the analysis of PDEs and to geometric aspects of these equations.
By the end of the 1970s, Sasaki [63] showed an interesting connection between equations described by the AKNS method [1] and surfaces of Gaussian curvature \(\mathcal{K}=-1\), culminating in the seminal work by Chern and Tenenblat [10, section 1], who established the basis for what today is known as _PSS equations_. These works are the roots of what Reyes called _geometric integrability_, see [55, 57, 58, 59].
Equation (1.0.1) was discovered in [24], but became famous after its derivation as a hydrodynamic model in a paper by Camassa and Holm [6], and it was named after them, see also the review [25]. Like other physically relevant integrable models, it attracted the interest of different areas, and probably one of the most impacted was precisely the analysis of PDEs. In particular, the works by Constantin and co-workers [5, 12, 13, 14, 15] played a crucial role, creating and developing new tools for tackling the CH equation that would later be applied not only to the CH equation itself, but also to other similar models, see [19, 26, 27, 29, 36, 37, 45] to name a few. Most of these works, not to say all, deal with solutions of the CH equation with finite regularity.
Apparently, Constantin [14] was the first to show connections between the CH equation and the geometry of manifolds. However, it was not before the fundamental work by Reyes [56] that it was recognised as a PSS equation. Even though these two works are concerned with the same object (the CH equation), they are completely different in nature. In fact, the results reported by Constantin [14] are intrinsically related to Cauchy problems involving the CH equation, whereas those shown by Reyes are concerned with structural aspects of the equation itself, such as integrability and abstract two-dimensional surfaces.
The work by Reyes was followed by a number of works dealing with geometric aspects of CH type equations _à la_ Chern and Tenenblat, see [7, 8, 17, 18, 28, 57, 58, 59, 62] and references therein.
Despite the tremendous research carried out since the works [5, 12, 13, 60, 14] and [55, 57], it is surprising that until now very little attention has been directed to geometric aspects of well-posed solutions and PSS equations. As far as I know, the first paper trying to make such a connection is [62], where qualitative analysis of a certain PSS equation was used to describe aspects of the corresponding metric. However, even this reference considered an analytic solution. A second attempt addressing this topic is [22], where Cauchy problems involving the equation considered in [28, 62] were studied. In spite of the efforts made in [22, 62], these works do not deal with solutions blowing up, which was first considered in [31].
In [31] the notions of \(C^{k}-\)PSS and generic solutions were first considered and the blow up of metrics determined by the solutions of the CH equation was shown for two situations, depending on how the sign of the momentum behaves, see [31, theorems 3.2 and 3.4]. However, no problems of a global nature, i.e., circumstances in which the co-frame can be defined on \(\mathbb{R}\times(0,\infty)\), or asymptotic behaviors of metrics, were considered.
The notions of generic solutions and PSS equations used in the current literature intrinsically carry \(C^{\infty}\) regularity and this brings issues in the study of surfaces in connection with Cauchy problems. This is even more dramatic for equations like the CH because they have different representations depending on the sort of solutions one considers, and these representations only coincide on certain Banach spaces. This explains why in [31] it was necessary to step forward and introduce definitions 3.4 and 3.5.
Another important aspect of the connections between geometry and analysis made in the present paper is the condition \(\omega_{1}\wedge\omega_{2}\neq 0\). Whenever \(\omega_{1}\wedge\omega_{2}=0\) we have (3.3.9) holding on an open set \(\Omega\). This is a problem of unique continuation of solutions, whose answer would have been out of reach until quite recently.
For \(c=0\), the answer for arbitrary open sets was given very recently in [45], see also [26, 27, 29]. If \(u\big{|}_{\Omega}=0\) for some open set \(\Omega\), then \(u\equiv 0\), see [45, Theorem 1.3]. Our solutions emanate from a non-trivial initial datum, and thus we cannot have \(u\equiv 0\) on an open set \(\Omega\) contained in the domain of \(u\). For \(c\neq 0\), it is unclear whether we might have \(u\big{|}_{\Omega}=c\), since this unique continuation problem is still an open question, see [31, Discussion].
The proof of Corollary 3.1 shows that \(u_{x}(x,t)\) vanishes at least once for each \(t\) for which \(u\) is defined, see also [31, Theorem 2.3]. As a result, the domain of \(u\) cannot be wholly endowed with a PSS structure. The answer to the open question mentioned above would clarify whether we may have open sets that cannot be endowed with a PSS structure (those in which \(u\) is constant). If its answer is that \(c=0\) (which I conjecture to be the case), then Corollary 3.1 would imply that the domain of the solution has a non-countable set of points at which we lose the structure of a PSS equation, but such a set would be of zero measure. On the other hand, if the answer to the question is that we may have \(c\neq 0\), then we would have a situation whose geometric implications should be better understood, but it would surely imply the existence of subsets of the domain of \(u\), with positive measure, in which a PSS structure is not allowed.
Even though the ideas developed and explored in this paper are mostly concerned with the CH equation, they can be applied to other PSS equations. The main point is that the techniques to deal with Cauchy problems may vary depending on the equation, and this will then impact how the geometric problem is addressed. This can be seen by comparing the results established in the present paper and those in [22, 62]. In a subsequent paper [32], qualitative aspects of PSS equations determined by the solutions of the Degasperis-Procesi equation will be reported. As will be shown, the ideas introduced in the present manuscript can be applied to this model, at the price that they have to be customised in view of the different nature of the equation and the corresponding Cauchy problems.
## 9 Conclusion
In this paper we studied the influence of Cauchy problems on the PSS surfaces defined by the corresponding solutions. To this end, we had to propose a formulation of the notions of PSS equation and generic solution. Our main results are reported in subsection 3.3, including the new definitions already mentioned. A remarkable fact reported is that any non-trivial initial datum gives rise to a PSS equation. In particular, we showed that solutions breaking in finite time lead to metrics that blow up as well.
## Acknowledgements
I am thankful to the Department of Mathematical Sciences of Loughborough University for the warm hospitality and amazing work atmosphere. I am grateful to Jenya Ferapontov, Keti Tenenblat, Artur Sergyeyev and Sasha Veselov for stimulating discussions, as well as for the many suggestions given.
I am also indebted to Priscila Leal da Silva for her firm encouragement, support, suggestions and patience to read the first draft of this manuscript.
Last but not least, I want to thank CNPq (grant n\({}^{\text{o}}\) 310074/2021-5) and FAPESP (grants n\({}^{\text{o}}\) 2020/02055-0 and 2022/00163-6) for financial support.
| ```
カウシー問題を伴うカマッサーホルム方程式によって定義された擬球面は、ここでは検討されます。私たちは、グローバルなソリューションが対応する表面にどのように影響するかを調査し、metricの二つの種の Singularities を調査します。最初の Singularities は、共フレームが線形独立性を持ちません。別の Singularities は、ソリューションが爆発することを特徴とするものです。特に、メトリックが爆発するかどうかを示し、それが有限時間で解が破綻するかどうかを示します。
``` |
2306.05990 | Novikov type inequalities for orbifolds | We show a natural extension of the Novikov numbers associated to the basic
cohomology class of a closed $1$-form on an orbifold, thus proving
corresponding Novikov inequalities for the compact case. | Fabricio Valencia | 2023-06-09T15:59:35 | http://arxiv.org/abs/2306.05990v1 | # Novikov type inequalities for orbifolds
###### Abstract.
We show a natural extension of the Novikov numbers associated to the basic cohomology class of a closed 1-form on an orbifold, thus proving corresponding Novikov inequalities for the compact case.
2020 Mathematics Subject Classification: 22A22, 57R18, 57R70
###### Contents
* 1 Introduction
* 2 Some algebraic topology ingredients
* 2.1 Hurewicz \(G\)-homomorphism and coverings
* 2.2 Groupoid homology with local coefficients
* 3 Closed basic 1-forms
* 3.1 G-homomorphisms of periods
* 3.2 Closed basic 1-forms of Morse type
* 4 Novikov numbers for orbifolds
* 4.1 Novikov inequalities
## 1. Introduction
In the seminal works [17, 18] Novikov started a generalization of Morse theory in which instead of critical points of smooth functions he dealt with closed 1-forms and their zeros, thus obtaining inequalities that generalize the well known Morse inequalities. In order to do so Novikov defined numbers \(b_{j}(\xi)\) and \(q_{j}(\xi)\) which depend on a real cohomology class \(\xi\in H^{1}(M,\mathbb{R})\). Such numbers are known in the literature as the Novikov Betti and torsion numbers, respectively. It is worth mentioning that in the special case \(\xi=0\), which corresponds to classical Morse theory, the numbers \(b_{j}(\xi)\) recover the usual Betti numbers of the ambient manifold \(M\) and the numbers \(q_{j}(\xi)\) agree with the minimal number of generators of the torsion subgroup of \(H_{j}(M,\mathbb{Z})\). The classical Novikov inequalities state that any closed 1-form \(\omega\) with Morse-type zeros has at least \(b_{j}(\xi)+q_{j}(\xi)+q_{j-1}(\xi)\) zeros of index \(j\) where \(\xi=[\omega]\in H^{1}(M,\mathbb{R})\) is the de Rham cohomology class of \(\omega\). Nowadays, Novikov theory is widely known, as it provides several applications in geometry, topology, analysis, and dynamics. The reader is recommended to visit [11] in order to get in touch with the fundamentals of Novikov theory as well as with some of the classical results around this beautiful subject.
The main purpose of this paper is to extend some of the ingredients of the classical Novikov theory to the context of orbifolds, thus providing a way to generalize the Novikov inequalities for the compact ones. We think of orbifolds as proper and etale groupoids up to Morita equivalence.
In this spirit, we deal with cohomology classes of closed basic \(1\)-forms on Lie groupoids whose critical orbits are nondegenerate in the sense of Morse-Bott theory.
Let \(G\rightrightarrows M\) be a Lie groupoid and let \(\Pi_{1}(G)\rightrightarrows M\) denote its fundamental groupoid of \(G\)-homotopy classes of \(G\)-paths. The fundamental group \(\Pi_{1}(G,x_{0})\) of \(G\) with respect to a base-point \(x_{0}\in M\) is defined to be isotropy group \(\Pi_{1}(G)_{x_{0}}\) which consists of \(G\)-homotopy classes of \(G\)-loops at \(x_{0}\). Let us denote by \(H_{\bullet}(G,\mathbb{Z})\) the total singular homology of \(G\). There is a canonical way to define a Hurewicz \(G\)-homomorphism \(h:\Pi_{1}(G,x_{0})\to H_{1}(G,\mathbb{Z})\) which restricts as an isomorphism to the abelianization of \(\Pi_{1}(G,x_{0})\). Let \(\omega\) be a closed basic \(1\)-form on \(M\). That is, a closed \(1\)-form satisfying \((t^{*}-s^{*})(\omega)=0\). For each smooth \(G\)-path \(\sigma=\sigma_{n}g_{n}\sigma_{n-1}\cdots\sigma_{1}g_{1}\sigma_{0}\) in \(M\) we can naturally define the \(G\)-path integral \(\int_{\sigma}\omega=\sum_{k=0}^{n}\int_{\sigma_{k}}\omega\). It canonically determines a group homomorphism \(l_{\omega}:\Pi_{1}(G,x_{0})\to(\mathbb{R},+)\) which factors through the Hurewicz \(G\)-homomorphism \(h\) by a uniquely determined group homomorphism \(\operatorname{Per}_{\xi}:H_{1}(G,\mathbb{Z})\to\mathbb{R}\) which only depends on the basic cohomology class \(\xi:=[\omega]\). This will be called the \(G\)-homomorphism of periods of \(\xi\). Let \(\mathbf{Nov}\) denote the Novikov ring of \(\mathbb{R}\) and let us further assume that our Lie groupoid \(G\rightrightarrows M\) is etale and proper with compact orbit space \(M/G\). Thus, we may define a ring homomorphism \(\phi_{\xi}:\mathbb{Z}(\Pi_{1}(G,x_{0}))\to\mathbf{Nov}\) by setting \(\phi_{\xi}([\sigma]):=\tau^{\operatorname{Per}_{\xi}(h([\sigma]))}\) for all \([\sigma]\in\Pi_{1}(G,x_{0})\). It yields a local system \(\mathcal{L}_{\xi}\) of left \(\mathbf{Nov}\)-modules over \(G\rightrightarrows M\) whose total homology groups \(H_{j}^{\operatorname{tot}}(G,\mathcal{L}_{\xi})\) are called Novikov homology groups of \(\xi\). It follows that the homology \(H_{j}^{\operatorname{tot}}(G,\mathcal{L}_{\xi})\) is a finitely generated module over the ring \(\mathbf{Nov}\). Since \(\mathbf{Nov}\) is a principal ideal domain we have that the module \(H_{j}^{\operatorname{tot}}(G,\mathcal{L}_{\xi})\) is a direct sum of a free submodule with a torsion submodule. The Novikov Betti number \(b_{j}(\xi)\) is defined to be the rank of the free summand of \(H_{j}^{\operatorname{tot}}(G,\mathcal{L}_{\xi})\) and the Novikov torsion number \(q_{j}(\xi)\) is defined to be the minimal number of generators of the torsion submodule of \(H_{j}^{\operatorname{tot}}(G,\mathcal{L}_{\xi})\).
The critical point set \(\operatorname{Crit}(\omega)\) of a closed basic \(1\)-form \(\omega\) on \(M\) is saturated in \(M\) so that it is formed by a disjoint union of groupoid orbits. Therefore, we say that a critical orbit \(\mathcal{O}_{x}\) of \(\omega\) is nondegenerate if its normal Hessian is a nondegenerate fiberwise bilinear symmetric form on \(\nu(O_{x})\). Accordingly, \(\omega\) is said to be of Morse type if all of its critical orbits are nondegenerate. Let us fix a groupoid metric on \(G\rightrightarrows M\) in the sense of del Hoyo and Fernandes, visit [9]. The nondegeneracy requirement over \(\mathcal{O}_{x}\) imposed above allows us to use the groupoid metric to split \(\nu(\mathcal{O}_{x})\) into the Whitney sum of two subbundles \(\nu_{-}(\mathcal{O}_{x})\oplus\nu_{+}(\mathcal{O}_{x})\) such that normal Hessian of \(\omega\) around \(\mathcal{O}_{x}\) is strictly negative on \(\nu_{-}(\mathcal{O}_{x})\) and strictly positive on \(\nu_{+}(\mathcal{O}_{x})\). Let \(G_{x}\) be the isotropy group at \(x\). From [19] we know that the normal Hessian is invariant with respect to the normal representation \(G_{x}\curvearrowright\nu(\mathcal{O}_{x})\) so that it preserves the splitting above since the normal representation is by isometries in this case. In consequence, we get a normal subrepresentation \(G_{x}\curvearrowright\nu_{-}(\mathcal{O}_{x})\). The stacky index of \(\mathcal{O}_{x}\) in \(M/G\) is defined to be \(\dim\nu_{-}(\mathcal{O}_{x})/G_{x}=\dim\nu_{-}(\mathcal{O}_{x})-\dim G_{x}\).
Our main result can be stated in the following way:
**Theorem**.: _Let \(G\rightrightarrows M\) be an etale and proper Lie groupoid such that the orbit space \(M/G\) is compact. Let \(\omega\) be a closed basic \(1\)-form on \(M\) of Morse type. If \(c_{j}(\omega)\) denotes the number of critical points in \(M/G\) having stacky Morse index \(j\) then_
\[c_{j}(\omega)\geq b_{j}(\xi)+q_{j}(\xi)+q_{j-1}(\xi),\]
_where \(\xi=[\omega]\) is the basic cohomology class of \(\omega\)._
It is worth mentioning that the natural generalization of the Novikov theory we have described above provides a tool for using topological methods to study zeros of symplectic vector fields on symplectic orbifolds.
The paper is structured as follows. In Section 2 we define the fundamental Lie groupoid of \(G\)-homotopy classes of \(G\)-paths and briefly recall the definition of the total singular homology and cohomology groups of a topological groupoid. We show a \(G\)-version of the Hurewicz homomorphism and exhibit how to construct a groupoid covering space over \(G\rightrightarrows M\) out of a fixed subgroup in \(\Pi_{1}(G,x_{0})\). This is the content of Propositions 2.2 and 2.4, respectively. Motivated by the notion of local coefficients system for topological spaces we introduce a corresponding notion as well as its associated homology in the groupoid setting. Indeed, we define a double chain complex out of a local system of modules over a Lie groupoid, thus proving that its associated total homology is Morita invariant, compare Lemma 2.7 and Proposition 2.8. We also show in Proposition 2.9 a \(G\)-version of the Eilenberg homology isomorphism. In Section 3 we study closed basic \(1\)-forms. We define the \(G\)-homomorphism of periods associated to the basic cohomology class \(\xi\) of a closed basic \(1\)-form \(\omega\) on \(M\) and explore some of its elementary properties. In particular, we characterize when \(\xi\) is an integral class and describe the groupoid covering space associated to the kernel of the \(G\)-homomorphism of periods of \(\xi\), see Propositions 3.4 and 3.5. We introduce closed basic \(1\)-forms of Morse type and prove that such a notion is Morita invariant, see Proposition 3.8. This gives rise to a notion of stacky closed \(1\)-form of Morse type over the differentiable stack \([M/G]\) presented by \(G\rightrightarrows M\). Finally, in Section 4 we present three different manners to define the Novikov numbers \(b_{j}(\xi)\) and \(q_{j}(\xi)\) associated to the basic cohomology class \(\xi\) over an orbifold. As a result we show the Novikov inequalities for compact orbifolds, compare Theorem 4.6, and quickly explain how to use this result in order to find a lower bound for the amount of zeros of certain symplectic vector fields on symplectic orbifolds.
**Acknowledgments:** Part of this work was carried out during a visit to the Dipartimento di Matematica, Universita degli Studi di Salerno, Fisciano, Italy in 2023. I am very thankful for the hospitality and support that the Geometry Group gave me while being there. I have benefited from several conversations with Juan Camilo Arias, Antonio Maglio, Cristian Ortiz and Luca Vitagliano, so that I am grateful for all their comments and suggestions that improved this work. Valencia was supported by Grants 2020/07704-7 and 2022/11994-6 Sao Paulo Research Foundation - FAPESP.
## 2. Some algebraic topology ingredients
Before defining the \(G\)-homomorphism of periods of a closed basic \(1\)-form as well as its corresponding Novikov numbers we have to fill out some algebraic topology gaps concerning the fundamental Lie groupoid of \(G\)-homotopy classes of \(G\)-paths which, to our knowledge, do not seem to have been described before in the literature; see for instance [16, s. 3.3] and [4, c. G; s. 3]. Let \(G\rightrightarrows M\) be a Lie groupoid [8, 16]. Throughout this paper the structural maps of \(G\) will be denoted by \((s,t,m,u,i)\) where \(s,t:G\to M\) are the maps respectively indicating the source and target of the arrows, \(m:G^{(2)}\to G\) stands for the partial composition of arrows, \(u:M\to G\) is the unit map, and \(i:G\to G\) is the map determined by the inversion of arrows. A \(G\)_-path_ in \(M\) is a sequence \(\sigma:=\sigma_{n}g_{n}\sigma_{n-1}\cdots\sigma_{1}g_{1}\sigma_{0}\) where \(\sigma_{0},\cdots,\sigma_{n}:[0,1]\to M\) are (piecewise smooth) paths in \(M\) and \(g_{1},\cdots,g_{n}\) are arrows in \(G\) such that \(g_{j}:\sigma_{j-1}(1)\to\sigma_{j}(0)\) for all \(j=1,\cdots,n\). We shall say that \(\sigma\) is a \(G\)-path of _order_\(n\) from \(\sigma_{0}(0)\) to \(\sigma_{n}(1)\). Our groupoid \(G\) is said to be \(G\)_-connected_ if for any two points \(x,y\in M\) there exists a \(G\)-path from \(x\) to \(y\). We will always assume that the groupoids we are working with are \(G\)-connected unless otherwise stated. If \(\sigma^{\prime}:=\sigma^{\prime}_{n}g^{\prime}_{n}\sigma^{\prime}_{n-1}\cdots \sigma^{\prime}_{1}g^{\prime}_{1}\sigma^{\prime}_{0}\) is another \(G\)-path with \(\sigma^{\prime}_{0}(0)=\sigma_{n}(1)\) then we can _concatenate_\(\sigma\) and
\(\sigma^{\prime}\) into a new \(G\)-path
\[\sigma*\sigma^{\prime}=\sigma^{\prime}_{n}g^{\prime}_{n}\sigma^{\prime}_{n-1} \cdots\sigma^{\prime}_{1}g^{\prime}_{1}\sigma^{\prime}_{0}1_{\sigma_{n}(1)} \sigma_{n}g_{n}\sigma_{n-1}\cdots\sigma_{1}g_{1}\sigma_{0}.\]
We define an equivalence relation in the set of \(G\)-paths which is generated by the following _multiplication equivalence_
\[\sigma_{n}g_{n}\cdots\sigma_{j+1}g_{j+1}\sigma_{j}g_{j}\sigma_{j-1}\cdots g_{1 }\sigma_{0}\quad\sim\quad\sigma_{n}g_{n}\cdots\sigma_{j+1}g_{j+1}g_{j}\sigma_ {j-1}\cdots g_{1}\sigma_{0},\]
if \(\sigma_{j}\) is the constant path for any \(0<j<n\), and _concatenation equivalence_
\[\sigma_{n}g_{n}\cdots g_{j+1}\sigma_{j}g_{j}\sigma_{j-1}g_{j-1}\cdots g_{1} \sigma_{0}\quad\sim\quad\sigma_{n}g_{n}\cdots g_{j+1}\sigma_{j}\cdot\sigma_{j -1}g_{j-1}\cdots g_{1}\sigma_{0},\]
if \(g_{j}=1_{\sigma_{j-1}(1)}\) for any \(0<j<n\) where \(\sigma_{j}\cdot\sigma_{j-1}\) denotes the standard concatenation of the paths \(\sigma_{j}\) and \(\sigma_{j-1}\). A _deformation_ between two \(G\)-paths \(\sigma\) and \(\sigma^{\prime}\) of the same order \(n\) from \(x\) to \(y\) consists of homotopies \(D_{j}:[0,1]\times[0,1]\to M\) from \(\sigma_{j}\) to \(\sigma^{\prime}_{j}\) for \(j=0,1,\cdots,n\) and paths \(d_{j}:[0,1]\to G\) from \(g_{j}\) to \(g^{\prime}_{j}\) for \(j=1,\cdots,n\) such that \(s\circ d_{j}=D_{j-1}(\cdot,1)\) and \(t\circ d_{j}=D_{j}(\cdot,0)\) for \(j=1,\cdots,n\) verifying \(D_{0}([0,1],0)=x\) and \(D_{n}([0,1],1)=y\). That is, a deformation is a continuous family of \(G\)-paths of order \(n\) from \(x\) to \(y\) which may be written as \(D_{n}(\tau,\cdot)d_{n}(\tau)\cdots d_{1}(\tau)D_{0}(\tau,\cdot)\) for \(\tau\in[0,1]\). Accordingly, two \(G\)-paths with fixed endpoints in \(M\) are said to be \(G\)_-homotopic_ if it is possible to pass from one to another by a sequence of equivalences and deformations. With the multiplication induced by concatenation it follows that the \(G\)-homotopy classes of \(G\)-paths form a Lie groupoid over \(M\) which is called the _fundamental groupoid_ of \(G\). This shall be denoted by \(\Pi_{1}(G)\rightrightarrows M\), see [16, s. 3.3]. The _fundamental group_ of \(G\) with respect to a base-point \(x_{0}\in M\) is the isotropy group \(\Pi_{1}(G,x_{0}):=\Pi_{1}(G)_{x_{0}}\). Note that it consists of \(G\)-homotopy classes of \(G\)-loops at \(x_{0}\) which are by definition the \(G\)-homotopy classes of \(G\)-paths from \(x_{0}\) to \(x_{0}\). It is simple to check that for any two different points \(x_{0},y_{0}\in M\) it holds that \(\Pi_{1}(G,x_{0})\) and \(\Pi_{1}(G,y_{0})\) are isomorphic by \(G\)-connectedness. Every Lie groupoid morphism \(\phi_{1}:G\to G^{\prime}\) covering \(\phi_{0}:M\to M^{\prime}\) induces another Lie groupoid morphism \((\phi_{1})_{*}:\Pi_{1}(G)\rightarrow\Pi_{1}(G^{\prime})\) by mapping \([\sigma]\) to \([(\phi_{1})_{*}(\sigma)]\), where \((\phi_{1})_{*}(\sigma)=(\phi_{0})_{*}(\sigma_{n})\phi_{1}(g_{n})(\phi_{0})_{*}( \sigma_{n-1})\cdots(\phi_{0})_{*}(\sigma_{1})\phi_{1}(g_{1})(\phi_{0})_{*}( \sigma_{0})\). This also covers \(\phi_{0}:M\to M^{\prime}\) so that it induces a Lie group homomorphism between the fundamental groups \((\phi_{1})_{*}:\Pi_{1}(G,x_{0})\rightarrow\Pi_{1}(G^{\prime},\phi_{0}(x_{0}))\). As an important feature we have that Morita equivalent groupoids have isomorphic fundamental groupoids, see [16, p. 195]. For specific details and results concerning the fundamental groupoid of a Lie groupoid the reader is recommended to visit [16, s. 3.3] and [4, c. G; s. 3].
Let us now consider the nerve of \(G\rightrightarrows M\). This is formed by manifolds \(\{G^{(n)}\}_{n\in\mathbb{N}}\) with \(G^{(0)}=M\), \(G^{(1)}=G\) and \(G^{(n)}=\{(g_{1},\cdots,g_{n}):s(g_{j})=t(g_{j+1})\}\). We also have the face maps \(d_{k}^{n}:G^{(n)}\to G^{(n-1)}\) given by the surjective submersions \(d_{0}^{1}=t\), \(d_{1}^{1}=s\) and
\[d_{k}^{n}(g_{1},\cdots,g_{n})=\left\{\begin{array}{lll}(g_{2},\cdots,g_{n})& \mbox{if}&k=0\\ (g_{1},\cdots,g_{k}g_{k+1},\cdots,g_{n})&\mbox{if}&0<k<n\\ (g_{1},\cdots,g_{n-1})&\mbox{if}&k=n.\end{array}\right.\]
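To fix ideas, for a composable pair \((g_{1},g_{2})\in G^{(2)}\) the face maps above read

\[d_{0}^{2}(g_{1},g_{2})=g_{2},\qquad d_{1}^{2}(g_{1},g_{2})=g_{1}g_{2},\qquad d_{2}^{2}(g_{1},g_{2})=g_{1}.\]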
These maps satisfy the simplicial relations \(d_{k}^{n-1}\circ d_{k^{\prime}}^{n}=d_{k^{\prime}-1}^{n-1}\circ d_{k}^{n}\) for \(k<k^{\prime}\). We define the singular double chain complex of \(G\rightrightarrows M\) as follows. This is determined by the data \(\{C_{\bullet}(G^{(\bullet)}),\partial,d\}\) where \(C_{q}(G^{(n)})\) is given by the set of singular \(q\)-chains on \(G^{(n)}\), \(\partial:C_{q}(G^{(n)})\to C_{q}(G^{(n-1)})\) is the simplicial boundary operator \(\partial=\sum_{k=0}^{n}(-1)^{k}d_{k}^{n}\), and \(d:C_{q}(G^{(n)})\to C_{q-1}(G^{(n)})\) is the usual singular boundary operator \(d=\sum_{j=0}^{q}(-1)^{j}d_{j}\). We can then consider the associated total complex \(\{\tilde{C}_{\bullet}(G),\delta\}\) where \(\tilde{C}_{v}(G)=\bigoplus_{q+n=v}C_{q}(G^{(n)})\) with total differential operator \(\delta:\tilde{C}_{v}(G)\to\tilde{C}_{v-1}(G)\) defined as
\[\delta(\gamma)=(-1)^{q+n}\partial(\gamma)+(-1)^{q}d(\gamma),\qquad\gamma\in C_{q} (G^{(n)}).\]
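Let us record, as a quick sketch, why \(\delta^{2}=0\) with this choice of signs: the simplicial boundary \(\partial\) post-composes with the face maps of the nerve while the singular boundary \(d\) only restricts to the faces of \(\triangle_{q}\), so \(\partial\) and \(d\) commute; hence, for \(\gamma\in C_{q}(G^{(n)})\),

\[\delta^{2}(\gamma)=-\partial^{2}(\gamma)-d^{2}(\gamma)+(-1)^{n}\big{(}d\partial-\partial d\big{)}(\gamma)=0.\]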
The associated total singular homology will be denoted by \(H_{\bullet}(G,\mathbb{Z})\). It is worth mentioning that Morita equivalent groupoids have isomorphic total singular homologies, visit for instance [1, 7]. By following a similar approach it may be easily verified that the total singular cohomology \(H^{\bullet}(G,\mathbb{Z})\) of \(G\rightrightarrows M\) can be obtained by dualizing the total singular complex \(\tilde{C}_{\bullet}(G)\) in order to define another total complex \(\tilde{C}^{\bullet}(G)\) and then considering its associated cohomology. Also, by tensoring \(C_{\bullet}(G)\otimes_{\mathbb{Z}}\mathbb{R}\) and \(C^{\bullet}(G)\otimes_{\mathbb{Z}}\mathbb{R}\) we get the total singular homology and cohomology groups \(H_{\bullet}(G,\mathbb{R})\) and \(H^{\bullet}(G,\mathbb{R})\), respectively. Furthermore, via a spectral sequence argument it follows that the classical de Rham isomorphism for manifolds extends to define an isomorphism between the de Rham cohomology \(H^{\bullet}_{dR}(G)\) of \(G\) defined through the Bott-Shulman-Stasheff total complex and the singular cohomology \(H^{\bullet}(G,\mathbb{R})\). Here, the cohomology \(H^{\bullet}_{dR}(G)\) is defined as the total cohomology obtained from the double cochain complex \(\{\Omega_{\bullet}(G^{(\bullet)}),\overline{\partial},d_{dR}\}\) where \(\Omega_{q}(G^{(n)})\) is given by the differential \(q\)-forms on \(G^{(n)}\), \(\overline{\partial}:\Omega_{q}(G^{(n)})\to\Omega_{q}(G^{(n+1)})\) is the simplicial boundary operator \(\overline{\partial}=\sum_{k=0}^{n}(-1)^{k}(d_{k}^{n})^{*}\), and \(d_{dR}:\Omega_{q}(G^{(n)})\to\Omega_{q+1}(G^{(n)})\) is the usual the de Rham differential. As expected, all of these (co)homologies are Morita invariant. For more details visit [1].
### Hurewicz \(G\)-homomorphism and coverings
Let \(G\rightrightarrows M\) be a \(G\)-connected Lie groupoid and fix \(x_{0}\in M\). Consider \(\Pi_{1}(G,x_{0})\) and \(H_{1}(G,\mathbb{Z})\), the fundamental group of \(G\) with respect to the base-point \(x_{0}\) and the first total singular homology group of \(G\), respectively. It is simple to check that every \(G\)-loop at \(x_{0}\) canonically defines a \(1\)-cycle in \(\tilde{C}_{1}(G)=C_{0}(G)\oplus C_{1}(M)\). This leads us to define the map \(h:\Pi_{1}(G,x_{0})\to H_{1}(G,\mathbb{Z})\) which sends the \(G\)-homotopy class \([\sigma]\) of a \(G\)-loop \(\sigma\) at \(x_{0}\) to its corresponding homology class \(|\sigma|\).
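To make the \(1\)-cycle explicit (a quick sketch using the sign conventions fixed above), write \(\sigma=\sigma_{n}g_{n}\sigma_{n-1}\cdots\sigma_{1}g_{1}\sigma_{0}\) and consider the \(1\)-chain \(c=\sum_{j=0}^{n}\sigma_{j}+\sum_{j=1}^{n}g_{j}\in C_{1}(M)\oplus C_{0}(G)\). Since \(s(g_{j})=\sigma_{j-1}(1)\) and \(t(g_{j})=\sigma_{j}(0)\), we get

\[\delta c=-\sum_{j=0}^{n}\big{(}\sigma_{j}(1)-\sigma_{j}(0)\big{)}+\sum_{j=1}^{n}\big{(}s(g_{j})-t(g_{j})\big{)}=\sigma_{0}(0)-\sigma_{n}(1)=0,\]

because \(\sigma\) is a \(G\)-loop; the class \(|\sigma|\) above is the class of \(c\).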
**Lemma 2.1**.: _Let \(\sigma\) and \(\sigma^{\prime}\) be two \(G\)-paths. The following assertions hold true:_
* \(\sigma+\sigma^{-1}\) _is equivalent to a boundary in_ \(\tilde{C}_{1}(G)\)_, and_
* _if_ \(\sigma^{\prime}_{0}(0)=\sigma_{n}(1)\) _then_ \(\sigma*\sigma^{\prime}-\sigma-\sigma^{\prime}\) _is also equivalent to a boundary in_ \(\tilde{C}_{1}(G)\)_._
Proof.: Firstly, for every path \(\sigma_{j}\) in \(\sigma\) we may argue as in Lemma 3.2 from [5, p. 173] and for every arrow \(g_{j}\) in \(\sigma\) we consider the pair of composable arrows \((g_{j},g_{j}^{-1})\). Secondly, observe that the expression \(\sigma*\sigma^{\prime}-\sigma-\sigma^{\prime}\) equals
\[\sigma_{n}*\sigma^{\prime}_{0}-\sigma_{n}-\sigma^{\prime}_{0}+\text{boundary}.\]
So, the last assertion follows directly as a consequence of Lemma 3.1 in [5, p. 173].
**Proposition 2.2** (Hurewicz \(G\)-homomorphism).: _The map \(h\) is a well defined group homomorphism which restricts to the abelianization \(\Pi_{1}^{ab}(G,x_{0})=\Pi_{1}(G,x_{0})/[\Pi_{1}(G,x_{0}),\Pi_{1}(G,x_{0})]\) as an isomorphism._
Proof.: We shall follow the usual strategy used to prove the classical version of this result, see for instance [5, c.4; s.3]. Throughout the proof we shall fix \(\sigma=\sigma_{n}g_{n}\sigma_{n-1}\cdots\sigma_{1}g_{1}\sigma_{0}\) a \(G\)-loop at \(x_{0}\). In order to see that \(h\) is well defined let us consider \(\sigma^{\prime}=\sigma^{\prime}_{n}g^{\prime}_{n}\sigma^{\prime}_{n-1}\cdots \sigma^{\prime}_{1}g^{\prime}_{1}\sigma^{\prime}_{0}\) another \(G\)-loop at \(x_{0}\) which is in the \(G\)-homotopy class of \(\sigma\). If \(\sigma\) and \(\sigma^{\prime}\) are equivalent then the assertion is trivial. Let us suppose then that there is a \(G\)-homotopy \(D(\tau,\cdot)=D_{n}(\tau,\cdot)d_{n}(\tau)\cdots d_{1}(\tau)D_{0}(\tau,\cdot)\) from \(\sigma\) to \(\sigma^{\prime}\). We split the square \([0,1]\times[0,1]\) into two \(2\)-simplices with vertices \(\{(0,0),(0,1),(1,1)\}\) and \(\{(0,0),(1,0),(1,1)\}\), respectively, so that every homotopy \(D_{j}\) can be regarded as the sum
of two singular \(2\)-simplices over \(M\). The diagonal from \((0,0)\) to \((1,1)\) will be denoted by \(\lambda\). Therefore, by using Lemma 2.1 we get:
\[\delta D = \sum_{j=1}^{n}(t\circ d_{j}-s\circ d_{j})+\sum_{j=0}^{n}(s\circ d_{ j+1}-D_{j}|_{\lambda}-\sigma_{j})-\sum_{j=0}^{n}(\sigma^{\prime}_{j}-D_{j}|_{ \lambda}-t\circ d_{j})\] \[- \sum_{j=1}^{n}(g^{\prime}_{j}-g_{j})+\mbox{constant}=\sigma- \sigma^{\prime}+\mbox{boundary},\]
where \(s\circ d_{n+1}:=D_{n}|_{[0,1]\times 1}=x_{0}\) and \(t\circ d_{0}:=D_{0}|_{[0,1]\times 0}=x_{0}\). That is, \(\sigma\) and \(\sigma^{\prime}\) are homologous, which means that \(h\) is well defined. Again, as a consequence of Lemma 2.1 we have that if \([\sigma],[\sigma^{\prime}]\in\Pi_{1}(G,x_{0})\) then
\[h([\sigma][\sigma^{\prime}])=h([\sigma*\sigma^{\prime}])=|\sigma*\sigma^{ \prime}|=|\sigma|+|\sigma^{\prime}|=h([\sigma])+h([\sigma^{\prime}]),\]
thus obtaining that \(h\) is a group homomorphism. Note that if \(\sigma^{-1}\) denotes the inverse \(G\)-loop of \(\sigma\) then \(h([\sigma^{-1}])=-|\sigma|\) since \(h\) is a homomorphism. Hence, by Lemma 2.1 it follows that the commutator subgroup of \(\Pi_{1}(G,x_{0})\) lies inside \(\ker(h)\) so that we get another well defined group homomorphism \(\tilde{h}:\Pi_{1}^{ab}(G,x_{0})\to H_{1}(G,\mathbb{Z})\).
Since our groupoid is \(G\)-connected we have that for each \(x\in M\) there exists some \(G\)-path \(\lambda_{x}\) from our base-point \(x_{0}\) to \(x\). The constant path at \(x\) is denoted by \(c_{x}\). Let us construct the inverse group homomorphism of \(\tilde{h}\). It is clear that we may think of every arrow \(g\in G\) as a \(G\)-path connecting the constant paths at \(s(g)\) and \(t(g)\). Also, it is obvious that every path \(\gamma:[0,1]\to M\) is equivalent to the \(G\)-path \(c_{\gamma(0)}1_{\gamma(0)}\gamma 1_{\gamma(1)}c_{\gamma(1)}\). In consequence, after possibly reversing orientations and fixing an order, each singular groupoid \(1\)-chain \(\alpha\in\tilde{C}_{1}(G)\) can be thought of as a formal sum of not necessarily different \(G\)-paths \(\alpha=\sum\alpha_{j}\). We define the homomorphism \(l:\tilde{C}_{1}(G)\to\Pi_{1}^{ab}(G,x_{0})\) as \(l(\alpha)=[\sum_{*}\lambda_{\alpha_{j}(1)}^{-1}*\alpha_{j}*\lambda_{\alpha_{j }(0)}]\), where \(\sum_{*}\) denotes the concatenation of all the \(G\)-loops at \(x_{0}\) given by \(\lambda_{\alpha_{j}(1)}^{-1}*\alpha_{j}*\lambda_{\alpha_{j}(0)}\). Note that the definition of \(l\) does not depend on the choice of the \(G\)-paths \(\lambda_{\alpha_{j}(0)}\) and \(\lambda_{\alpha_{j}(1)}\) since we are going into \(\Pi_{1}^{ab}(G,x_{0})\) instead of \(\Pi_{1}(G,x_{0})\). We claim that \(l\) takes the boundaries in \(\tilde{C}_{1}(G)\) into \(1\in\Pi_{1}^{ab}(G,x_{0})\). Indeed, it is clear that it suffices to check this assertion when we apply the boundary operator \(\delta\) over a pair of composable arrows \((g,g^{\prime})\in G^{(2)}\), a path \(\tilde{\sigma}:[0,1]\to G\) and a singular \(2\)-simplex \(\Sigma:\triangle_{2}\to M\). Firstly, \(\delta(g,g^{\prime})=g^{\prime}-g^{\prime}g+g\) so that
\[l(\delta(g,g^{\prime}))=l(g^{\prime})l(g)l(g^{\prime}g)^{-1}=[\lambda_{s(g^{\prime})}^{-1}*g^{\prime}*\lambda_{t(g)}][\lambda_{t(g)}^{-1}*g*\lambda_{s(g)}][\lambda_{s(g^{\prime})}^{-1}*g^{\prime}g*\lambda_{s(g)}]^{-1}\] \[=[\lambda_{s(g^{\prime})}^{-1}*g^{\prime}g(g^{\prime}g)^{-1}*\lambda_{s(g^{\prime})}]=[\mbox{constant}]=1.\]
Secondly, \(\delta\tilde{\sigma}=t\circ\tilde{\sigma}-s\circ\tilde{\sigma}-(\tilde{ \sigma}(1)-\tilde{\sigma}(0))\). Note that
\[l(t\circ\tilde{\sigma}+\tilde{\sigma}(0)-s\circ\tilde{\sigma}-\tilde{\sigma}(1 ))=l(t\circ\tilde{\sigma})l(\tilde{\sigma}(0))l(s\circ\tilde{\sigma})^{-1}l( \tilde{\sigma}(1))^{-1}\]
\[=[\lambda_{t\circ\tilde{\sigma}(1)}^{-1}*t\circ\tilde{\sigma}*\lambda_{t\circ \tilde{\sigma}(0)}][\lambda_{t\circ\tilde{\sigma}(0)}^{-1}*\tilde{\sigma}(0)* \lambda_{s\circ\tilde{\sigma}(0)}][\lambda_{s\circ\tilde{\sigma}(1)}^{-1}*s \circ\tilde{\sigma}*\lambda_{s\circ\tilde{\sigma}(0)}]^{-1}[\lambda_{t\circ \tilde{\sigma}(1)}^{-1}*\tilde{\sigma}(1)*\lambda_{s\circ\tilde{\sigma}(1)}]^{ -1}\]
\[=[\lambda_{t\circ\tilde{\sigma}(1)}^{-1}*(t\circ\tilde{\sigma})\tilde{\sigma}(0) (s\circ\tilde{\sigma})^{-1}\tilde{\sigma}(1)^{-1}*\lambda_{t\circ\tilde{ \sigma}(1)}]=1,\]
since \(\tilde{\sigma}(1)\) is \(G\)-homotopic to the \(G\)-path \((t\circ\tilde{\sigma})\tilde{\sigma}(0)(s\circ\tilde{\sigma})^{-1}\), see [16, p. 191]. Thirdly, the case of the singular \(2\)-simplex \(\Sigma\) over \(M\) follows directly by Lemma 3.5 in [5, p. 174].
The fact we just proved implies that \(l\) descends to a well defined group homomorphism \(\tilde{l}:H_{1}(G,\mathbb{Z})\to\Pi_{1}^{ab}(G,x_{0})\). Furthermore, if \(\sigma\) is a \(G\)-loop at \(x_{0}\) then
\[(\tilde{l}\circ\tilde{h})([\sigma])=\tilde{l}(|\sigma|)=[\lambda_{x_{0}}^{-1}* \sigma*\lambda_{x_{0}}]=[\sigma],\]
since \(\lambda_{x_{0}}\) may be chosen to be just the constant \(G\)-path at \(x_{0}\). Let us now look at the opposite composition. Observe that the assignment \(x\mapsto\lambda_{x}\) allows us to send singular groupoid \(0\)-simplices into singular groupoid \(1\)-simplices and this can be clearly extended to a homomorphism \(\lambda:\tilde{C}_{0}(G)\to\tilde{C}_{1}(G)\). Suppose that \(\alpha=\sum\alpha_{j}\) is a singular groupoid \(1\)-chain. Then, by Lemma 2.1 it holds that
\[(\tilde{h}\circ l)(\alpha) = \tilde{h}([\sum_{*}\lambda_{\alpha_{j}(1)}^{-1}*\alpha_{j}* \lambda_{\alpha_{j}(0)}])=\sum|\lambda_{\alpha_{j}(1)}^{-1}*\alpha_{j}* \lambda_{\alpha_{j}(0)}|\] \[= \sum|\lambda_{\alpha_{j}(1)}^{-1}+\alpha_{j}+\lambda_{\alpha_{j} (0)}|=\sum|\lambda_{\alpha_{j}(0)}+\alpha_{j}-\lambda_{\alpha_{j}(1)}|\] \[= |\sum\alpha_{j}+\lambda_{\sum(\alpha_{j}(1)-\alpha_{j}(0))}|=| \alpha+\lambda_{\delta\alpha}|.\]
Therefore, if \(\alpha\) is a singular groupoid \(1\)-cycle it follows that \((\tilde{h}\circ l)(\alpha)=|\alpha|\). In particular, \((\tilde{h}\circ\tilde{l})(|\alpha|)=|\alpha|\), as desired.
Let us now consider the notion of covering space over a groupoid as defined in [16, s. 3.3] and [4, c. G; s. 3]. A _covering space_ \(E\) over \(G\rightrightarrows M\) is a covering space \(p:E\to M\) equipped with a right \(G\)-action \(E\times_{M}G\to E\) along \(p\). Morphisms between two covering spaces \(E\) and \(F\) over \(G\) are equivariant maps \(f:E\to F\). It is clear that any such morphism is necessarily a covering projection. We will denote by \(\Gamma^{G}(E)\) the set of _equivariant automorphisms_ of \(E\). The map \(p\) extends to a Lie groupoid morphism \(p:E\rtimes G\to G\) from the action groupoid \(E\rtimes G\) into \(G\) defined by \(p(e,g)=g\), which covers the covering map \(E\to M\). We say that \(E\) is _universal_ if the action groupoid \(E\rtimes G\rightrightarrows E\) is \((E\rtimes G)\)-connected and the fundamental group \(\Pi_{1}(E\rtimes G,e_{0})\) is trivial for one (hence all) base-point \(e_{0}\in E\). The latter kind of Lie groupoids shall be called _simply \(G\)-connected_.
**Example 2.3**.: There is an explicit way to construct the universal covering space over \(G\rightrightarrows M\) when is \(G\)-connected. Namely, the target fiber \(\Pi_{1}(G)(-,x_{0})\) at \(x_{0}\in M\) of the fundamental groupoid \(\Pi_{1}(G)\) of \(G\) becomes a covering space over \(M\) by restricting the source map \(s:\Pi_{1}(G)(-,x_{0})\to M\). In fact, this map is a left principal \(\Pi_{1}(G,x_{0})\)-bundle. Also, there is a natural right \(G\)-action on \(\Pi_{1}(G)(-,x_{0})\) along \(s\) given by \(([\sigma],g)\to[\sigma gc_{s(g)}]\), where \(c_{s(g)}\) is the constant map at \(s(g)\), which makes it into a covering space over \(G\). This is actually a universal covering space over \(G\), see [4, p. 612-613].
It is well known that \(G\)-paths have a unique path lifting property, visit [16, p. 191]. Let \(e_{0}\) be a base-point in \(E\), denote \(x_{0}=p(e_{0})\), and suppose that \(\sigma=\sigma_{n}g_{n}\sigma_{n-1}\cdots\sigma_{1}g_{1}\sigma_{0}\) is a \(G\)-path from \(x_{0}\) to \(x\). Then there are unique paths \(\tilde{\sigma}_{n},\tilde{\sigma}_{n-1},\cdots,\tilde{\sigma}_{0}\) in \(E\) with \(p_{*}(\tilde{\sigma}_{j})=\sigma_{j}\), \(\tilde{\sigma}_{0}(0)=e_{0}\) and \(\tilde{\sigma}_{j}(0)g_{j}=\tilde{\sigma}_{j-1}(1)\). By setting \(\tilde{\sigma}_{j}(0)=e_{j}\) and \((e_{j},g_{j}):e_{j}g_{j}\to e_{j}\) the arrows in \(E\rtimes G\) it follows that \(\tilde{\sigma}=\tilde{\sigma}_{n}(e_{n},g_{n})\tilde{\sigma}_{n-1}\cdots(e_{1},g_{1})\tilde{\sigma}_{0}\) is the unique \((E\rtimes G)\)-path starting at \(e_{0}\) which projects onto \(\sigma\). Since \(G\)-homotopic paths are in this way lifted to \((E\rtimes G)\)-homotopic paths, it holds that any covering space over \(G\) with a base-point \(e_{0}\) as above has a natural fiber-wise action of \(\Pi_{1}(G,x_{0})\). Indeed, if \([\sigma]\) is the \(G\)-homotopy class of a \(G\)-loop at \(x_{0}\) then we define \(e_{0}\cdot[\sigma]:=\tilde{\sigma}_{n}(1)\). This defines a right action of \(\Pi_{1}(G,x_{0})\) on \(p^{-1}(x_{0})\). As a consequence of the previous lifting property we get that the induced group homomorphism \(\Pi_{1}(E\rtimes G,e_{0})\to\Pi_{1}(G,x_{0})\) is injective, see [4, p. 611].
Let us assume that the action groupoid \(E\rtimes G\rightrightarrows E\) is \((E\rtimes G)\)-connected. Thus, it is simple to check that the action of \(\Pi_{1}(G,x_{0})\) on \(p^{-1}(x_{0})\) is transitive which implies that \(p^{-1}(x_{0})\) is homogeneous. That is, it is isomorphic to the quotient \(\Pi_{1}(G,x_{0})/p_{*}(\Pi_{1}(E\rtimes G,e_{0}))\) since the isotropy at \(e_{0}\) agrees with \(p_{*}(\Pi_{1}(E\rtimes G,e_{0}))\) by definition. Observe that every element \(f\in\Gamma^{G}(E)\) induces an automorphism of \(p^{-1}(x_{0})\) when seeing it as a \(\Pi_{1}(G,x_{0})\)-space. Indeed,
if \(F:E\rtimes G\to E\rtimes G\) is the groupoid automorphism \(F(e,g)=(f(e),g)\) induced by \(f\) then note that the \((E\rtimes G)\)-path \(F_{*}(\tilde{\sigma})\) has initial point \(f(e_{0})\) and final point \(f(e_{0}\cdot[\sigma])\) and satisfies \(p_{*}(F_{*}(\tilde{\sigma}))=\sigma\), which means that it is also a lift of \(\sigma\). Therefore, \(f(e_{0})\cdot[\sigma]=f(e_{0}\cdot[\sigma])\) by uniqueness.
Just as in the classical case it follows that if our covering space \(E\) over \(G\) is universal then \(\Gamma^{G}(E)\cong\Pi_{1}(G,x_{0})\), see for instance [4, p. 612]. This important fact together with the previous comments allow us to prove the following result.
**Proposition 2.4**.: _If \(H\) is a subgroup of \(\Pi_{1}(G,x_{0})\) then there exists a covering \(r:F\to M\) over \(G\) such that \(r_{*}(\Pi_{1}(F\rtimes G,e_{0}))=H\)._
Proof.: Let us fix a universal covering \(p:E\to M\) over \(G\), see Example 2.3. As \(\Gamma^{G}(E)\cong\Pi_{1}(G,x_{0})\) and \(\Gamma^{G}(E)\) acts on the fibers \(p^{-1}(x_{0})\) without any fixed points then we may assume that \(\Pi_{1}(G,x_{0})\) acts on it without fixed points, see [4, p. 612]. Choose \(e_{0}\in p^{-1}(x_{0})\) and define the subgroup \(H^{\prime}\) of \(\Gamma^{G}(E)\) as follows: \(f\in H^{\prime}\) if and only if there exists \([\sigma]\in H\) such that \(f(e_{0})=e_{0}\cdot[\sigma]\). It is simple to check that \(H\) and \(H^{\prime}\) are isomorphic. As \(H^{\prime}\) is a subgroup of \(\Gamma^{G}(E)\) it is actually a properly discontinuous subgroup of diffeomorphisms of \(E\). Let us define \(F\) as the quotient manifold \(E/H^{\prime}\) and denote by \(q:E\to F\) the corresponding canonical projection. It is well known that this is also a covering space. Denote by \(r:F\to M\) the induced smooth map defined as \(r([e])=p(e)\). As \(r\circ q=p\) it follows that \(r\) is also a covering space. The right action of \(G\) along \(p:E\to M\) induces a well defined right action of \(G\) along \(q:E\to F\) as \([e]\cdot g:=[e\cdot g]\) which, in turn, induces another right action of \(G\) along \(r:F\to M\) by the same expression since the elements of \(\Gamma^{G}(E)\) are \(G\)-equivariant. Therefore, we get that \(\Pi_{1}(G,x_{0})\) acts transitively on the right of the fibers \(r^{-1}(x_{0})\). Let \(\tilde{x_{0}}=q(e_{0})\in r^{-1}(x_{0})\). Finally, by the construction of \(F\) it holds that the isotropy group of the right \(\Pi_{1}(G,x_{0})\)-action on \(r^{-1}(x_{0})\) corresponding to \(\tilde{x_{0}}\) is precisely the subgroup \(H\). That is, \(r_{*}(\Pi_{1}(F\rtimes G,e_{0}))=H\).
### Groupoid homology with local coefficients
Motivated by the notion of local coefficients system for topological spaces (see for instance [26, c. 6]), we introduce a corresponding notion as well as its associated homology in the groupoid setting. Let \(R\) be a ring. A _local system of \(R\)-modules_ over \(G\rightrightarrows M\) is defined as a function which assigns to any point \(x\in M\) a left \(R\)-module \(\mathcal{L}_{x}\) and to any continuous \(G\)-path \(\sigma\) from \(x\) to \(y\) an \(R\)-homomorphism \(\sigma_{\#}:\mathcal{L}_{y}\to\mathcal{L}_{x}\) such that the following conditions are satisfied:
* if \(\sigma\) and \(\sigma^{\prime}\) are \(G\)-homotopic then \(\sigma_{\#}=\sigma^{\prime}_{\#}\),
* if \(\sigma\) is the constant \(G\)-path at \(x_{0}\) then \(\sigma_{\#}:\mathcal{L}_{x_{0}}\to\mathcal{L}_{x_{0}}\) is the identity map, and
* if \(\sigma^{\prime}_{0}(0)=\sigma_{n}(1)\) then \((\sigma*\sigma^{\prime})_{\#}=\sigma_{\#}\circ\sigma^{\prime}_{\#}\).
Compare with [6, 22]. Note that for any two points \(x,y\in M\) it holds that \(\mathcal{L}_{x}\) and \(\mathcal{L}_{y}\) are isomorphic by \(G\)-connectedness. In particular, any \(G\)-loop \(\sigma\) at \(x_{0}\) determines an automorphism \(\sigma_{\#}:\mathcal{L}_{x_{0}}\to\mathcal{L}_{x_{0}}\) which depends only on its \(G\)-homotopy class. This implies that the correspondence \([\sigma]\mapsto\sigma_{\#}\) induces an action of \(\Pi_{1}(G,x_{0})\) on \(\mathcal{L}_{x_{0}}\), thus turning \(\mathcal{L}_{x_{0}}\) into a left module over the group ring \(R[\Pi_{1}(G,x_{0})]\). A _homomorphism_ between two local systems of \(R\)-modules \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\) over \(G\) is just a natural transformation \(\Phi:\mathcal{L}\to\mathcal{L}^{\prime}\). Namely, for each \(x_{0}\in M\) we have a homomorphism \(\Phi_{x_{0}}:\mathcal{L}_{x_{0}}\to\mathcal{L}^{\prime}_{x_{0}}\) such that for each \(G\)-path \(\sigma\) from \(x\) to \(y\) it holds that \((\sigma_{\#})^{\prime}\circ\Phi_{y}=\Phi_{x}\circ\sigma_{\#}\). If each \(\Phi_{x}\) is an isomorphism then we say that \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\) are _isomorphic_.
**Lemma 2.5**.: _Suppose that our groupoid is \(G\)-connected._
* _Let_ \(\mathcal{L}\) _and_ \(\mathcal{L}^{\prime}\) _be two local systems of_ \(R\)_-modules over_ \(G\rightrightarrows M\) _and let_ \(\phi:\mathcal{L}_{x_{0}}\to\mathcal{L}^{\prime}_{x_{0}}\) _be an isomorphism. Then there exists a unique isomorphism_ \(\Phi:\mathcal{L}\to\mathcal{L}^{\prime}\) _such that_ \(\Phi_{x_{0}}=\phi\)_._
* _Let_ \(\mathcal{L}_{0}\) _be an_ \(R\)_-module acted upon by_ \(\Pi_{1}(G,x_{0})\)_. Then there exists a local system of_ \(R\)_-modules_ \(\mathcal{L}\) _such that_ \(\mathcal{L}_{x_{0}}=\mathcal{L}_{0}\) _and which induces the given action of_ \(\Pi_{1}(G,x_{0})\) _on_ \(\mathcal{L}_{x_{0}}\)_._
Proof.: The proofs of these statements are straightforward adaptations of the proofs of Theorems 1.11 and 1.12 in [26, p. 263], so that they are left as an exercise to the reader.
**Example 2.6**.: Suppose that \(\alpha:\mathbb{Z}[\Pi_{1}(G,x_{0})]\to R\) is a ring homomorphism. We may view \(R\) as a left \(\mathbb{Z}[\Pi_{1}(G,x_{0})]\)-module with the action \([\sigma]\cdot r:=r\alpha([\sigma])\). Such an action commutes with the standard left \(R\)-module structure on \(R\). Therefore, by Lemma 2.5 it follows that \(\alpha\) determines a local system \(\mathcal{L}_{\alpha}\) of \(R\)-modules over \(G\rightrightarrows M\). Note that each module \((\mathcal{L}_{\alpha})_{x}\) is isomorphic to \(R\).
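As a simple illustration of this construction, any group homomorphism, say \(\nu:\Pi_{1}(G,x_{0})\to\mathbb{Z}\), induces a ring homomorphism \(\alpha:\mathbb{Z}[\Pi_{1}(G,x_{0})]\to\mathbb{Z}[t,t^{-1}]\) determined by \([\sigma]\mapsto t^{\nu([\sigma])}\), and hence a local system of \(\mathbb{Z}[t,t^{-1}]\)-modules over \(G\rightrightarrows M\). Local systems of precisely this kind, with the Novikov ring and its rational part playing the role of \(\mathbb{Z}[t,t^{-1}]\), will appear in Section 4; the homomorphism \(\nu\) here is only meant as an illustrative placeholder.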
Let \(\mathcal{L}\) be a local system of \(R\)-modules over \(G\) and consider the nerve structure \(\{G^{(n)},d_{k}^{n}\}\) of \(G\). Let \(\lambda_{n}:G^{(n)}\to G^{(0)}\) denote the "last vertex" map \(\lambda_{n}(g_{1},\cdots,g_{n})=s(g_{n})\) with \(\lambda_{0}=\operatorname{id}_{G^{(0)}}\). We define \(C_{q}(G^{(n)},\mathcal{L})\) as the set of all functions \(c\) with the following properties:
* for any \(n\)-singular \(q\)-simplex \(\Sigma:\triangle_{q}\to G^{(n)}\) the value \(c(\Sigma)\) is defined and belongs to the \(R\)-module \(\mathcal{L}_{\lambda_{n}(\Sigma(a_{0}))}\), where \(a_{0}\) denotes the first vertex of the standard simplex \(\triangle_{q}\) with vertices \(a_{0},a_{1},\cdots,a_{q}\), and
* the set of \(n\)-singular \(q\)-simplices \(\Sigma\) such that \(c(\Sigma)\neq 0\) is finite.
The elements of \(C_{q}(G^{(n)},\mathcal{L})\) are called \(n\)_-singular \(q\)-chains with coefficients in \(\mathcal{L}\)_. Note that any chain \(c\in C_{q}(G^{(n)},\mathcal{L})\) can be formally written as a finite sum of the form \(c=\sum_{j}v_{j}\cdot\Sigma_{j}\) where \(\Sigma_{j}:\triangle_{q}\to G^{(n)}\) are \(n\)-singular \(q\)-simplices and \(v_{j}\in\mathcal{L}_{\lambda_{n}(\Sigma_{j}(a_{0}))}\). That is, \(c(\Sigma_{j})=v_{j}\) and \(c(\Sigma)=0\) for any \(n\)-singular \(q\)-simplex \(\Sigma\) different from \(\Sigma_{j}\). Let \(c=v\cdot\Sigma\) be an elementary \(n\)-chain (i.e. the sum above contains only one term). On the one hand, we define the boundary homomorphism \(\partial:C_{q}(G^{(n)},\mathcal{L})\to C_{q}(G^{(n-1)},\mathcal{L})\) as
\[\partial c=\partial(v\cdot\Sigma)=\sum_{k=0}^{n-1}(-1)^{k}v\cdot(d_{k}^{n} \circ\Sigma)+(-1)^{n}(g_{n}^{\Sigma})_{\#}^{-1}(v)\cdot(d_{n}^{n}\circ\Sigma).\]
Here \(g_{n}^{\Sigma}\) denotes the arrow determined by the \(n\)-projection of \(\Sigma(a_{0})\in G^{(n)}\) onto \(G^{(1)}\) and \((g_{n}^{\Sigma})_{\#}^{-1}\) is the inverse of the isomorphism \((g_{n}^{\Sigma})_{\#}:\mathcal{L}_{t(g_{n}^{\Sigma})}\to\mathcal{L}_{s(g_{n}^ {\Sigma})}\) which is induced by the arrow \(g_{n}^{\Sigma}\), viewed as a \(G\)-path from \(s(g_{n}^{\Sigma})=\lambda_{n}(\Sigma(a_{0}))\) to \(t(g_{n}^{\Sigma})=s(g_{n-1}^{\Sigma})=\lambda_{n-1}(d_{n}^{n}\circ\Sigma(a_{0 }))\). It is well known that \(\partial^{2}=0\), compare [6, 7, 22]. On the other hand, we also define the boundary operator \(\overline{d}:C_{q}(G^{(n)},\mathcal{L})\to C_{q-1}(G^{(n)},\mathcal{L})\) as
\[\overline{d}c=\overline{d}(v\cdot\Sigma)=\sigma_{\#}(v)\cdot d_{0}(\Sigma)+ \sum_{j=1}^{q}(-1)^{j}v\cdot d_{j}(\Sigma).\]
In this case the \(d_{j}\) denote the usual face operators of singular homology and \(\sigma:[0,1]\to G^{(0)}\) is the path from \(\lambda_{n}(\Sigma(a_{1}))\) to \(\lambda_{n}(\Sigma(a_{0}))\) defined by \(\sigma(\tau)=\lambda_{n}(\Sigma((1-\tau)a_{1}+\tau a_{0}))\), whose corresponding isomorphism is \(\sigma_{\#}:\mathcal{L}_{\lambda_{n}(\Sigma(a_{0}))}\to\mathcal{L}_{\lambda_{n}(\Sigma(a_{1}))}\). This operator also satisfies \(\overline{d}^{2}=0\), see [26, p. 266].
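Before proceeding, let us record how these two operators look in the simplest situation, namely when \(\mathcal{L}\) is the constant system with each \(\mathcal{L}_{x}=R\) and every \(\sigma_{\#}\) equal to the identity. In that case
\[\overline{d}(v\cdot\Sigma)=\sum_{j=0}^{q}(-1)^{j}v\cdot d_{j}(\Sigma)\qquad\text{and}\qquad\partial(v\cdot\Sigma)=\sum_{k=0}^{n}(-1)^{k}v\cdot(d_{k}^{n}\circ\Sigma),\]
that is, one recovers the usual singular boundary in the \(q\)-direction and the simplicial differential of the nerve in the \(n\)-direction. This remark is only a consistency check and is not used below.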
**Lemma 2.7**.: _The boundary operators \(\partial\) and \(\overline{d}\) commute._
Proof.: On the one side,
\[\overline{d}(\partial(c)) = \sum_{k=0}^{n-1}(-1)^{k}(\sigma_{k})_{\#}(v)\cdot d_{0}(d_{k}^{n}\circ\Sigma)+\sum_{k=0}^{n-1}\sum_{j=1}^{q}(-1)^{k+j}v\cdot d_{j}(d_{k}^{n}\circ\Sigma)\] \[+(-1)^{n}(\sigma_{n})_{\#}((g_{n}^{\Sigma})_{\#}^{-1}(v))\cdot d_{0}(d_{n}^{n}\circ\Sigma)+\sum_{j=1}^{q}(-1)^{j+n}(g_{n}^{\Sigma})_{\#}^{-1}(v)\cdot d_{j}(d_{n}^{n}\circ\Sigma),\]
where \(\sigma_{k}(\tau)=(\lambda_{n-1}\circ d_{k}^{n})(\Sigma((1-\tau)a_{1}+\tau a_{ 0}))\) for all \(k=0,1,\cdots,n\). On the other side,
\[\partial(\overline{d}(c)) = \sum_{k=0}^{n-1}(-1)^{k}\sigma_{\#}(v)\cdot d_{k}^{n}\circ d_{0} (\Sigma)+\sum_{j=1}^{q}\sum_{k=0}^{n-1}(-1)^{j+k}v\cdot d_{k}^{n}\circ d_{j} (\Sigma)\] \[+ (-1)^{n}(g_{n}^{d_{0}(\Sigma)})_{\#}^{-1}(\sigma_{\#}(v))\cdot d _{n}^{n}\circ d_{0}(\Sigma)+\sum_{j=1}^{q}(-1)^{j+n}(g_{n}^{d_{j}(\Sigma)})_{ \#}^{-1}(v)\cdot d_{n}^{n}\circ d_{j}(\Sigma).\]
Firstly, note that for all \(k=0,1,\cdots,n-1\) it holds that \(\lambda_{n-1}\circ d_{k}^{n}=\lambda_{n}\) so that the first two expressions in the right hand side of the above equalities agree. The second two expressions are exactly the same. Secondly, as \(d_{j}(\Sigma)(a_{0})=\Sigma(a_{0})\) for all \(j=1,\cdots,q\) then \(g_{n}^{d_{j}(\Sigma)}=g_{n}^{\Sigma}\) which implies that the last expressions also agree. It remains to prove that \((\sigma_{n})_{\#}\circ(g_{n}^{\Sigma})_{\#}^{-1}=(g_{n}^{d_{0}(\Sigma)})_{\#} ^{-1}\circ(\sigma)_{\#}\). Observe that \(\lambda_{n-1}(d_{n}^{n}\circ\Sigma(a_{1}))=\lambda_{n-1}(\Sigma(a_{1}))\) so that \(\sigma_{n}*(g_{n}^{\Sigma})^{-1}\) and \((g_{n}^{d_{0}(\Sigma)})^{-1}*\sigma\) are two \(G\)-paths from \(\lambda_{n-1}(\Sigma(a_{1}))\) to \(\lambda_{n}(\Sigma(a_{0}))\). Define the path \(\gamma:[0,1]\to G^{(1)}\) as \(\gamma(\tau)=(\text{pr}_{n}(\Sigma((1-\tau)a_{1}+\tau a_{0})))^{-1}\). This is such that \(\gamma(0)=(g_{n}^{d_{0}(\Sigma)})^{-1}\) and \(\gamma(1)=(g_{n}^{\Sigma})^{-1}\). Also, \(t\circ\gamma=\sigma\) and \(s\circ\gamma=\sigma_{n}\). Therefore, since \(\gamma(1)\) is \(G\)-homotopic to the \(G\)-path \((s\circ\gamma)^{-1}\gamma(0)(t\circ\gamma)\) (see [16, p. 191]), it holds that the \(G\)-paths above are \(G\)-homotopic and the result follows as desired.
The total homology associated to the double chain complex \(\{C_{\bullet}(G^{(\bullet)},\mathcal{L}),\partial,\overline{d}\}\) will be denoted by \(H_{\bullet}^{\mathsf{tot}}(G,\mathcal{L})\) and called the _groupoid homology of \(G\rightrightarrows M\) with local coefficients in \(\mathcal{L}\)_. Let \(\phi:(G\rightrightarrows M)\to(G^{\prime}\rightrightarrows M^{\prime})\) be a Lie groupoid morphism and suppose that \(\mathcal{L}\) is a local system of \(R\)-modules over \(G^{\prime}\). By using the induced Lie groupoid morphism \(\phi_{*}:(\Pi_{1}(G)\rightrightarrows M)\to(\Pi_{1}(G^{\prime})\rightrightarrows M^{\prime})\) it is possible to define another local system of \(R\)-modules \(\phi^{*}\mathcal{L}\) over \(G\) by setting \((\phi^{*}\mathcal{L})_{x}=\mathcal{L}_{\phi_{0}(x)}\). Let \(c=\sum_{j}v_{j}\cdot\Sigma_{j}\) be an \(n\)-singular \(q\)-chain in \(C_{q}(G^{(n)},\phi^{*}\mathcal{L})\). Since \(v_{j}\in(\phi^{*}\mathcal{L})_{\lambda_{n}(\Sigma_{j}(a_{0}))}=\mathcal{L}_{\lambda_{n}^{\prime}(\phi_{n}(\Sigma_{j}(a_{0})))}\), we have a well defined map \(\phi_{*}:C_{q}(G^{(n)},\phi^{*}\mathcal{L})\to C_{q}(G^{\prime(n)},\mathcal{L})\) given as \(\phi_{*}(\sum_{j}v_{j}\cdot\Sigma_{j})=\sum_{j}v_{j}\cdot(\phi_{n}\circ\Sigma_{j})\). It is simple to check that \(\phi_{*}\) commutes with both pairs of boundary operators \(\partial,\overline{d}\) and \(\partial^{\prime},\overline{d}^{\prime}\) so that we have a well defined group homomorphism \(\phi_{*}:H_{\bullet}^{\mathsf{tot}}(G,\phi^{*}\mathcal{L})\to H_{\bullet}^{\mathsf{tot}}(G^{\prime},\mathcal{L})\). It is well known that if \(\phi\) is a Morita map then the horizontal homologies defined by \(\partial\) and \(\partial^{\prime}\) are isomorphic, see [6, 7] and [16, p. 214]. Thus, by using a standard spectral sequence argument (see for instance [3, p. 108]), we conclude:
**Proposition 2.8**.: _A Morita map \(\phi:G\to G^{\prime}\) induces an isomorphism \(\phi_{*}:H_{\bullet}^{\mathsf{tot}}(G,\phi^{*}\mathcal{L})\to H_{ \bullet}^{\mathsf{tot}}(G^{\prime},\mathcal{L})\)._
Let us fix a universal covering \(p:E\to M\) over \(G\) and consider the action groupoid \(E\rtimes G\rightrightarrows E\), see Example 2.3. Recall that \(p\) extends to a Lie groupoid morphism \(p:E\rtimes G\to G\), defined by \(p(e,g)=g\), which covers the covering map \(E\to M\). Pick \(x_{0}\in M\). We know that \(\Gamma^{G}(E)\cong\Pi_{1}(G,x_{0})\) where the isomorphism is as follows. The group \(\Gamma^{G}(E)\) acts simply transitively on the fiber \(p^{-1}(x_{0})\). Take \(e_{0}\in p^{-1}(x_{0})\). Given \(f\in\Gamma^{G}(E)\), let \(\tilde{\sigma}\) be a \(E\rtimes G\)-path joining \(f(e_{0})\) with \(e_{0}\). Define the map \(\beta:\Gamma^{G}(E)\to\Pi_{1}(G,x_{0})\) as \(\beta(f)=[p_{*}(\tilde{\sigma})]\). Observe that \(\beta\) does not depend on \(\tilde{\sigma}\) since \(\Pi_{1}(E\rtimes G,e_{0})\) is trivial. This is the isomorphism we are interested
in. For every \([\sigma]\in\Pi_{1}(G,x_{0})\) we denote by \(\beta_{[\sigma]}=\beta^{-1}([\sigma])\) the corresponding element in \(\Gamma^{G}(E)\). That is, \([\sigma]\) can be presented by \(p_{*}(\tilde{\sigma})\) where \(\tilde{\sigma}\) is any \(E\rtimes G\)-path from \(e_{0}\) to \(\beta_{[\sigma]}(e_{0})\).
Let us now consider the nerve structure \(\{E^{(n)},(d_{k}^{n})^{E}\}\) of the action groupoid \(E\rtimes G\rightrightarrows E\) as well as its total singular groupoid homology. The group \(\Gamma^{G}(E)\) acts naturally on \(C_{q}(E^{(n)},\mathbb{Z})\) as follows. Let \(f_{1}:E\rtimes G\to E\rtimes G\) be the groupoid automorphism \(f_{1}(e,g)=(f_{0}(e),g)\) induced by \(f_{0}=f\in\Gamma^{G}(E)\). So, \(f\cdot\Sigma=f_{n}\circ\Sigma\) for \(\Sigma:\triangle_{q}\to E^{(n)}\) where \(f_{n}:E^{(n)}\to E^{(n)}\) is the induced map along the nerve. Let \(\mathcal{L}\) be a local system of \(R\)-modules over \(G\). Set \(\mathcal{L}_{0}:=\mathcal{L}_{x_{0}}\). It is clear that \(\Pi_{1}(G,x_{0})\) acts on \(\mathcal{L}_{0}\) from the left. We transform this action into a right action of \(\Gamma^{G}(E)\) on \(\mathcal{L}_{0}\) by setting \(v\cdot\beta_{[\sigma]}=(\sigma_{\#})^{-1}(v)\). Let us denote by \(\mathcal{L}_{0}\otimes_{\Gamma^{G}(E)}C_{q}(E^{(n)},\mathbb{Z})\) the quotient group of the tensor product \(\mathcal{L}_{0}\otimes C_{q}(E^{(n)},\mathbb{Z})\) by the subgroup \(Q_{q}(\mathcal{L}_{0},E^{(n)})\) generated by all elements of the form \(v\cdot f\otimes\Sigma-v\otimes f\cdot\Sigma\) with \(v\in\mathcal{L}_{0}\), \(f\in\Gamma^{G}(E)\), and \(\Sigma\in C_{q}(E^{(n)},\mathbb{Z})\). Observe that, after naturally extending them to the tensor product, the simplicial boundary operator \(\partial\) maps \(Q_{q}(\mathcal{L}_{0},E^{(n)})\) into \(Q_{q}(\mathcal{L}_{0},E^{(n-1)})\) and the homology boundary operator \(d\) maps \(Q_{q}(\mathcal{L}_{0},E^{(n)})\) into \(Q_{q-1}(\mathcal{L}_{0},E^{(n)})\) so that they pass to the quotient, thus giving rise to well defined boundary operators \(\partial:\mathcal{L}_{0}\otimes_{\Gamma^{G}(E)}C_{q}(E^{(n)},\mathbb{Z})\to\mathcal{L}_{0}\otimes_{\Gamma^{G}(E)}C_{q}(E^{(n-1)},\mathbb{Z})\) and \(d:\mathcal{L}_{0}\otimes_{\Gamma^{G}(E)}C_{q}(E^{(n)},\mathbb{Z})\to\mathcal{L}_{0}\otimes_{\Gamma^{G}(E)}C_{q-1}(E^{(n)},\mathbb{Z})\). Hence, we have obtained a double chain complex \(\{\mathcal{L}_{0}\otimes_{\Gamma^{G}(E)}C_{\bullet}(E^{(\bullet)},\mathbb{Z}),\partial,d\}\) whose associated total homology will be denoted by \(H_{\bullet}(\mathcal{L}_{0}\otimes_{\Gamma^{G}(E)}E,\mathbb{Z})\). We are now in a position to prove our \(G\)-version of the well known _Eilenberg homology isomorphism_, namely:
**Proposition 2.9** (Eilenberg \(G\)-isomorphism).: _There exists a chain isomorphism between \(\mathcal{L}_{0}\otimes_{\Gamma^{G}(E)}C_{\bullet}(E^{(\bullet)},\mathbb{Z})\) and \(C_{\bullet}(G^{(\bullet)},\mathcal{L})\). In particular, the total homologies \(H_{\bullet}(\mathcal{L}_{0}\otimes_{\Gamma^{G}(E)}E,\mathbb{Z})\) and \(H_{\bullet}^{\rm tot}(G,\mathcal{L})\) are isomorphic._
Proof.: Let us first exhibit isomorphisms \(\tilde{p}:\mathcal{L}_{0}\otimes_{\Gamma^{G}(E)}C_{q}(E^{(n)},\mathbb{Z})\to C_{q}(G^{(n)},\mathcal{L})\). Since \(E\rtimes G\rightrightarrows E\) is simply \(E\rtimes G\)-connected, for each \(e\in E\) there exists a unique \(E\rtimes G\)-homotopy class \([\xi_{e}]\), represented by any \(E\rtimes G\)-path \(\xi_{e}\) from \(e\) to \(e_{0}\). Denote by \(p_{n}:E^{(n)}\to G^{(n)}\) the map induced by \(p\) along the nerves. Let \(v_{0}\in\mathcal{L}_{0}\), \(\Sigma:\triangle_{q}\to E^{(n)}\), \(\Lambda=p_{n}\circ\Sigma\), \(e=\lambda_{n}^{E}(\Sigma(a_{0}))\), \(x=\lambda_{n}^{G}(\Lambda(a_{0}))=p(e)\) and define \(\tilde{p}_{o}:\mathcal{L}_{0}\otimes C_{q}(E^{(n)},\mathbb{Z})\to C_{q}(G^{(n)},\mathcal{L})\) by
\[\tilde{p}_{o}(v_{0}\otimes\Sigma)=v\cdot\Lambda,\]
where \(v=(p_{*}(\xi_{e}))_{\#}(v_{0})\in\mathcal{L}_{x}\). Analogously to how it was proven in Theorem 3.4 from [26, p. 279] it is simple to check that \(\tilde{p}_{o}Q_{q}(\mathcal{L}_{0},E^{(n)})=0\), so that \(\tilde{p}_{o}\) induces a homomorphism \(\tilde{p}:\mathcal{L}_{0}\otimes_{\Gamma^{G}(E)}C_{q}(E^{(n)},\mathbb{Z}) \to C_{q}(G^{(n)},\mathcal{L})\) which is actually an isomorphism. It remains to verify that these maps determine a chain map between the double chain complexes involved.
Firstly, recall that \(d_{j}(\Sigma(a_{0}))=\Sigma(a_{0})\) for all \(1\leq j\leq q\) so that
\[\tilde{p}_{o}(d_{j}(v_{0}\otimes\Sigma))=\tilde{p}_{o}(v_{0}\otimes d_{j}( \Sigma))=v\cdot(p_{n}\circ(d_{j}(\Sigma)))=v\cdot d_{j}(\Lambda)=\overline{d}_{ j}(\tilde{p}_{o}(v_{0}\otimes\Sigma)).\]
As usual, denote by \(\sigma(\tau)=\lambda_{n}^{E}(\Sigma((1-\tau)a_{1}+\tau a_{0}))\). Note that \(e^{\prime}=\lambda_{n}^{E}(d_{0}(\Sigma)(a_{0}))=\lambda_{n}^{E}(\Sigma(a_{1}))\), thus obtaining \(\xi_{e^{\prime}}=\sigma*\xi_{e}\). Hence
\[\tilde{p}_{o}(d_{0}(v_{0}\otimes\Sigma))=\tilde{p}_{o}(v_{0}\otimes d_{0}( \Sigma))=v^{\prime}\cdot(p_{n}\circ(d_{0}(\Sigma)))=v^{\prime}\cdot d_{0}( \Lambda),\]
where
\[v^{\prime}=(p_{*}(\xi_{e^{\prime}}))_{\#}(v_{0})=(p_{*}(\sigma)*p_{*}(\xi_{e}))_{ \#}(v_{0})=(p_{*}(\sigma))_{\#}(v).\]
But \(p\circ\lambda_{n}^{E}=\lambda_{n}^{G}\circ p_{n}\) so that \(p_{*}(\sigma)(\tau)=\lambda_{n}^{G}(\Lambda((1-\tau)a_{1}+\tau a_{0}))\). That is, \(\tilde{p}_{o}(d_{0}(v_{0}\otimes\Sigma))=\overline{d}_{0}(\tilde{p}_{o}(v_{0} \otimes\Sigma))\).
Secondly, observe that for \(0\leq k\leq n-1\) we have
\[\tilde{p}_{o}(\partial_{k}^{E}(v_{0}\otimes\Sigma)) = \tilde{p}_{o}(v_{0}\otimes(d_{k}^{n})^{E}\circ\Sigma)=v\cdot(p_{n-1 }\circ(d_{k}^{n})^{E}\circ\Sigma)\] \[= v\cdot((d_{k}^{n})^{G}\circ p_{n}\circ\Sigma)=v\cdot(d_{k}^{n})^ {G}\circ\Lambda=\partial_{k}^{G}(\tilde{p}_{o}(v_{0}\otimes\Sigma)).\]
We denote by \(e_{n}^{\Sigma}\) the arrow determined by the \(n\)-projection of \(\Sigma(a_{0})\in E^{(n)}\) onto \(E\rtimes G\). Observe that \(e^{\prime}=\lambda_{n-1}^{E}(((d_{n}^{n})^{E}\circ\Sigma)(a_{0}))=s(e_{n-1}^{\Sigma})=t(e_{n}^{\Sigma})\) which implies that \(\xi_{e^{\prime}}=(e_{n}^{\Sigma})^{-1}*\xi_{e}\). Thus,
\[\tilde{p}_{o}(\partial_{n}^{E}(v_{0}\otimes\Sigma))=\tilde{p}_{o}(v_{0} \otimes(d_{n}^{n})^{E}\circ(\Sigma))=v^{\prime}\cdot(p_{n-1}\circ((d_{n}^{n}) ^{E}\circ(\Sigma)))=v^{\prime}\cdot(d_{n}^{n})^{G}\circ\Lambda,\]
where
\[v^{\prime}=(p_{*}(\xi_{e^{\prime}}))_{\#}(v_{0})=(p_{*}((e_{n}^{\Sigma})^{-1}) *p_{*}(\xi_{e}))_{\#}(v_{0})=(p(e_{n}^{\Sigma})^{-1})_{\#}(v)=((g_{n}^{\Lambda })^{-1})_{\#}(v),\]
since \(p\circ\mathrm{pr}_{n}^{E}=\mathrm{pr}_{n}^{G}\circ p_{n}\). In consequence, \(\tilde{p}_{o}(\partial_{n}^{E}(v_{0}\otimes\Sigma))=\partial_{n}^{G}(\tilde{ p}_{o}(v_{0}\otimes\Sigma))\). This completes the proof.
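When \(G\rightrightarrows M\) is the unit groupoid of a manifold, \(E\) is the usual universal covering of \(M\), \(\Gamma^{G}(E)\) is its group of deck transformations, and the statement specializes, at least formally, to the classical Eilenberg isomorphism \(\mathcal{L}_{0}\otimes_{\pi_{1}(M,x_{0})}C_{\bullet}(E,\mathbb{Z})\cong C_{\bullet}(M,\mathcal{L})\) treated in [26]. We point this out only to orient the reader.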
_Remark 2.10_.: It is worth mentioning that, similarly to how the total homology groups \(H^{\mathrm{tot}}_{\bullet}(G,\mathcal{L})\) were defined, it is possible to define total cohomology groups \(H^{\bullet}_{\mathrm{tot}}(G,\mathcal{L})\) as well as to prove a \(G\)-version of the Eilenberg cohomology isomorphism. This can be done by mimicking our approach together with the classical constructions, see [11, p. 14].
## 3. Closed basic \(1\)-forms
We say that a Lie groupoid \(G\rightrightarrows M\) is _proper_ if the source/target map \((s,t):G\to M\times M\) is proper. In this case the groupoid orbits \(\mathcal{O}_{x}\) are embedded in \(M\), the isotropy groups \(G_{x}\) are compact, and the orbit space \(M/G\) is Hausdorff, second-countable, and paracompact, see [8]. Let \(G\rightrightarrows M\) be a proper groupoid and denote by \(X=M/G\) its corresponding orbit space. A differential form \(\omega\) on \(M\) is said to be _basic_ if \(s^{*}\omega=t^{*}\omega\), see [20, 25]. The set of basic forms will be denoted by \(\Omega^{\bullet}_{\mathrm{bas}}(G)\). It is clear that the de Rham exterior differential on \(\Omega^{\bullet}(M)\) restricts to \(\Omega^{\bullet}_{\mathrm{bas}}(G)\), thus yielding the so-called _basic cohomology_ \(H^{\bullet}_{\mathrm{bas}}(G,\mathbb{R})\) of \(G\). Such a cohomology is Morita invariant and also satisfies that \(H^{\bullet}(X,\mathbb{R})\cong H^{\bullet}_{\mathrm{bas}}(G,\mathbb{R})\), where \(H^{\bullet}(X,\mathbb{R})\) denotes the singular cohomology of \(X\).
### \(\mathbf{G}\)-homomorphisms of periods
Let \(\omega\) be a closed basic \(1\)-form on \(G\) and let \(\xi\) denote the basic cohomology class \([\omega]\in H^{1}_{\mathrm{bas}}(G,\mathbb{R})\). We are interested in studying some features of \(\xi\) by defining its \(G\)-homomorphism of periods as well as its corresponding covering space. In order to do so we will make use of the topological ingredients described in the previous section. For each smooth \(G\)-path \(\sigma=\sigma_{n}g_{n}\sigma_{n-1}\cdots\sigma_{1}g_{1}\sigma_{0}\) in \(M\) we define the \(G\)-path integral
\[\int_{\sigma}\omega=\sum_{k=0}^{n}\int_{\sigma_{k}}\omega.\]
**Lemma 3.1**.: _Let \([\sigma]\) denote the \(G\)-homotopy class of \(\sigma\). Then the expression_
\[\int_{[\sigma]}\omega=\int_{\sigma}\omega,\]
_is well defined._
Proof.: Let us pick another \(G\)-path \(\sigma^{\prime}\) being \(G\)-homotopic to \(\sigma\). If \(\sigma\) and \(\sigma^{\prime}\) are equivalent then the assertion is trivial. Suppose then that there is a smooth \(G\)-homotopy
\(D_{n}(\tau,\cdot)d_{n}(\tau)\cdots d_{1}(\tau)D_{0}(\tau,\cdot)\) from \(\sigma\) to \(\sigma^{\prime}\) with fixed endpoints \(x\) and \(y\). It suffices to check that the expression
\[I_{\tau}=\int_{D(\tau,\cdot)}\omega=\sum_{k=0}^{n}\int_{0}^{1}\omega_{D_{k}(\tau,\nu)}(D_{k}^{\prime}(\tau,\nu))d\nu,\]
does not depend on \(\tau\) by differentiating it with respect to \(\tau\). Indeed, this computation can be verified in local coordinates as in the classical case by using the fact that \(\omega\) is a closed basic 1-form and the following identities are satisfied: \(D_{k}(\tau,0)=t(d_{k}(\tau))\), \(D_{k-1}(\tau,1)=s(d_{k}(\tau))\) for all \(k=1,\cdots,n\) and \(D_{0}(\tau,0)=x\), \(D_{n}(\tau,1)=y\) for all \(\tau\in[0,1]\).
_Remark 3.2_.: Suppose that \(\omega\) and \(\omega^{\prime}\) are basic-cohomologous closed basic 1-forms. That is, there is a basic smooth function \(f:M\to\mathbb{R}\) such that \(\omega-\omega^{\prime}=df\). By arguing as in Lemma 3.5 below it is simple to check that \(\int_{\sigma}(\omega-\omega^{\prime})=\int_{\sigma}df=f(\sigma_{n}(1))-f( \sigma_{0}(0))\) since \(f\) is basic. In consequence, the expression \(\int_{[\sigma]}\xi=\int_{\sigma}\omega\) is also well defined when \(\sigma\) is a \(G\)-loop.
The first interesting consequence of the previous result is the following.
**Proposition 3.3**.: _If \(G\rightrightarrows M\) is simply \(G\)-connected then \(H^{1}_{\rm bas}(G,\mathbb{R})=0\)._
Proof.: Let \(\omega\) be a closed basic 1-form and define \(f(x)=\int_{\lambda_{x}}\omega\) where \(\lambda_{x}\) is any \(G\)-path joining \(x_{0}\) with \(x\). This function is smooth and well defined since \(\Pi_{1}(G,x_{0})\) is trivial. Let \(\gamma:[1,3]\to M\) be a smooth path such that \(\gamma(2)=x\) and \(\gamma^{\prime}(2)=X_{x}\in T_{x}M\). Let \(\lambda_{\gamma(1)}\) be a fixed \(G\)-path from \(x_{0}\) to \(\gamma(1)\) and consider the \(G\)-path \(\lambda_{\gamma(\tau)}=\gamma|_{[1,\tau]}1_{\gamma(1)}\lambda_{\gamma(1)}\) for \(1\leq\tau\leq 3\). Observe that \(f(\gamma(\tau))=f(\gamma(1))+\int_{1}^{\tau}\omega_{\gamma(\nu)}(\gamma^{ \prime}(\nu))d\nu\). Thus
\[df_{x}(X_{x})=\frac{d}{d\tau}\left(f(\gamma(1))+\int_{1}^{\tau}\omega_{\gamma( \nu)}(\gamma^{\prime}(\nu))d\nu\right)|_{\tau=2}=\frac{d}{d\tau}\left(\int_{1 }^{\tau}\omega_{\gamma(\nu)}(\gamma^{\prime}(\nu))d\nu\right)|_{\tau=2}=\omega _{x}(X_{x}).\]
Let us now check that \(f\) is basic. If \(g\in G\) then it follows that \(\lambda_{t(g)}=c_{t(g)}g\lambda_{s(g)}\), where \(c_{t(g)}\) denotes the constant path at \(t(g)\) and \(\lambda_{s(g)}\) is any \(G\)-path from \(x_{0}\) to \(s(g)\), is a \(G\)-path joining \(x_{0}\) with \(t(g)\). Note that by definition \(\int_{\lambda_{t(g)}}\omega=\int_{\lambda_{s(g)}}\omega\) since the \(G\)-path \(c_{t(g)}g\) does not contribute to the \(G\)-path integral of the left hand side. Hence, \(f(s(g))=f(t(g))\) as desired.
If \(\Pi_{1}(G,x_{0})\) is the fundamental group of \(G\) at the base-point \(x_{0}\in M\) then from Lemma 3.1 we get a well defined group homomorphism \(l_{\omega}:\Pi_{1}(G,x_{0})\to(\mathbb{R},+)\) by sending \([\sigma]\mapsto\int_{\sigma}\omega\). Since \(\mathbb{R}\) is abelian it follows that \(l_{\omega}\) factors through the Hurewicz \(G\)-homomorphism \(h\) from Proposition 2.2 by a uniquely determined group homomorphism \(\operatorname{Per}_{\xi}:H_{1}(G,\mathbb{Z})\to\mathbb{R}\) which only depends on the cohomology class \([\omega]\in H^{1}_{\rm bas}(G,\mathbb{R})\), see Remark 3.2. This will be called the \(G\)_-homomorphism of periods_ of \(\omega\).
Let \(d\theta\) denote the angle form on \(S^{1}\). Here \(\theta=\frac{1}{2\pi}\phi\) where \(\phi\) is the multi-valued angle function on \(S^{1}\). This is a closed 1-form without zeroes which cannot be presented as the differential of a smooth function. The latter fact is a consequence of the Stokes Theorem and the fact that \(\int_{S^{1}}d\theta=1\). It is clear that if \(f:M\to S^{1}\) is basic then \(f^{*}(d\theta)\) becomes a closed basic 1-form on \(M\). Let us characterize the closed basic 1-forms that can be obtained in this way.
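For instance, parametrizing the standard loop on \(S^{1}\) by \(\sigma(\tau)=e^{2\pi\sqrt{-1}\,\tau}\) with \(\tau\in[0,1]\), the form \(d\theta\) pulls back to \(d\tau\) and hence
\[\int_{\sigma}d\theta=\int_{0}^{1}d\tau=1.\]
We record this elementary computation only because it underlies the two facts just mentioned.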
**Proposition 3.4**.: _Let \(\omega\) be a closed basic 1-form on \(M\). Then \(\omega=f^{*}(d\theta)\) where \(f:M\to S^{1}\) is a smooth basic function if and only if the cohomology class \(\xi=[\omega]\in H^{1}_{\rm bas}(G,\mathbb{R})\) is integral, that is, \(\xi\in H^{1}_{\rm bas}(G,\mathbb{Z})=H^{1}_{\rm bas}(G,\mathbb{R})\cap H^{1}(M, \mathbb{Z})\)._
Proof.: We will mainly follow the classical proof of this result as in [11, p. 37]. Suppose that \(\omega=f^{*}(d\theta)\) with \(f:M\to S^{1}\) basic. Note that \(f\) induces a Lie groupoid morphism \(F:(G\rightrightarrows M)\to(S^{1}\rightrightarrows S^{1})\) where \(F\) is either given by \(s^{*}f\) or \(t^{*}f\). Therefore, if \(\sigma=\sigma_{n}g_{n}\sigma_{n-1}\cdots\sigma_{1}g_{1}\sigma_{0}\) is a \(G\)-loop then
\[f_{*}(\sigma)=f_{*}(\sigma_{n})1_{f_{*}(\sigma_{n})(0)}f_{*}(\sigma_{n-1}) \cdots f_{*}(\sigma_{1})1_{f_{*}(\sigma_{1})(0)}f_{*}(\sigma_{0}),\]
turns out to be equivalent to a usual loop on \(S^{1}\) (actually, we obtain a branch of loops formed by \(f_{*}(\sigma_{j})\) with \(j=0,1,\cdots,n\)). In consequence, the number
\[\int_{\sigma}\omega=\sum_{k=0}^{n}\int_{\sigma_{k}}f^{*}(d\theta)=\sum_{k=0}^{ n}\int_{f_{*}(\sigma_{k})}d\theta=\int_{f_{*}(\sigma)}d\theta\in\mathbb{Z},\]
is an integer since it agrees with the sum of the degrees of the loops \(f_{*}(\sigma_{j})\) on \(S^{1}\), which clearly computes the degree of the whole branch loop \(f_{*}(\sigma)\). Thus, we get that the \(G\)-homomorphism of periods of any closed basic \(1\)-form \(\omega=f^{*}(d\theta)\) with \(f:M\to S^{1}\) basic takes integral values so that its associated cohomology class lies inside \(H^{1}_{\mathrm{bas}}(G,\mathbb{Z})\).
Conversely, let us now suppose that all the \(G\)-periods associated to \(\xi\) are integral. Fix a base point \(x_{0}\in M\) and define \(f(x)=\exp\left(2\pi\sqrt{-1}\int_{\lambda_{x}}\omega\right)\), where \(\lambda_{x}\) is any \(G\)-path joining \(x_{0}\) with \(x\). Note that the definition of \(f\) does not depend on \(\lambda_{x}\) since if \(\lambda_{x}^{\prime}\) is another \(G\)-path from \(x_{0}\) to \(x\) then, for the \(G\)-loop \(\sigma=(\lambda_{x}^{\prime})^{-1}*\lambda_{x}\), we get \(\int_{\lambda_{x}}\omega-\int_{\lambda_{x}^{\prime}}\omega=\int_{\sigma}\omega\in\mathbb{Z}\). By performing computations similar to those in the proof of Proposition 3.3 it is simple to check that \(f\) is a smooth basic function satisfying \(\omega=f^{*}(d\theta)\).
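As a simple illustration of Proposition 3.4 (viewing \(S^{1}\) as a unit groupoid, so that every form is basic), take \(\omega=a\,d\theta\) with \(a\in\mathbb{R}\): the period of the generating loop equals \(a\), so the class \([\omega]\) is integral precisely when \(a\in\mathbb{Z}\), in which case \(\omega=f^{*}(d\theta)\) for the map \(f:S^{1}\to S^{1}\), \(f(z)=z^{a}\).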
It follows from Proposition 2.4 that we can consider a covering space \(p:M_{\xi}\to M\) over \(G\) which corresponds to the kernel of the \(G\)-homomorphism of periods \(\Pi_{1}(G,x_{0})\to\mathbb{R}\) of the cohomology class \(\xi\). Every \((M_{\xi}\rtimes G)\)-loop in \(M_{\xi}\) projects via \(p\) to a \(G\)-loop in \(M\) with trivial periods with respect to \(\omega\). Therefore, the pullback basic \(1\)-form \(p^{*}\omega\) is basic exact. That is, there is a basic function \(f:M_{\xi}\to\mathbb{R}\) such that \(p^{*}\omega=df\). Recall that the free abelian group of equivariant covering transformations \(\Gamma^{G}(E)\) acts on the left of \(M_{\xi}\). Thus, the cohomology class \(\xi\) determines an injective group homomorphism \(\alpha_{\xi}:\Gamma^{G}(E)\to\mathbb{R}\) with image equal to the group of periods. Indeed, we define \(\alpha_{\xi}\) through the composition \(\mathrm{Per}_{\xi}\circ\beta^{-1}\), where \(\beta:\Gamma^{G}(E)\to\Pi_{1}(G,x_{0})\), \(\beta(f)=[p_{*}(\tilde{\sigma})]\), is the isomorphism previously described. Therefore, for \(\varphi\in\Gamma^{G}(E)\) we choose \(x\in M_{\xi}\) and any \((M_{\xi}\rtimes G)\)-path \(\tilde{\sigma}\) from \(x\) to \(\varphi(x)\). Note that \(p_{*}(\tilde{\sigma})\) defines a \(G\)-loop at \(p(x)\), so that it also defines a homology class \(|p_{*}(\tilde{\sigma})|\in H_{1}(G,\mathbb{Z})\). Hence, since \(H^{1}_{dR}(G)\cong\mathrm{Hom}(H_{1}(G,\mathbb{R}),\mathbb{R})\) and \(H^{1}_{\mathrm{bas}}(G,\mathbb{R})\hookrightarrow H^{1}_{dR}(G)\) it makes sense to set
\[\alpha_{\xi}(\varphi)=\langle\xi,|p_{*}(\tilde{\sigma})|\rangle=\int_{p_{*}( \tilde{\sigma})}\omega\in\mathbb{R}. \tag{1}\]
Such an expression depends neither on \(x\) nor on \(\tilde{\sigma}\). It is clear that \(df=p^{*}\omega\) is invariant under the action of \(\Gamma^{G}(E)\) but \(f\) is not, in fact:
**Lemma 3.5**.: _The following formula holds true_
\[f(\varphi(x))=f(x)+\alpha_{\xi}(\varphi),\]
_for all \(x\in M_{\xi}\) and \(\varphi\in\Gamma^{G}(E)\)._
Proof.: Pick a \((M_{\xi}\rtimes G)\)-path \(\tilde{\sigma}=\tilde{\sigma}_{n}\tilde{g}_{n}\tilde{\sigma}_{n-1}\cdots \tilde{\sigma}_{1}\tilde{g}_{1}\tilde{\sigma}_{0}\) from \(x\) to \(\varphi(x)\). Then, since \(f\) is basic we get
\[\int_{\tilde{\sigma}}p^{*}\omega = \int_{\tilde{\sigma}}df=\sum_{k=0}^{n}(f(\tilde{\sigma_{k}}(1))-f( \tilde{\sigma_{k}}(0)))\] \[= f(\tilde{\sigma_{n}}(1))+\sum_{k=1}^{n}(f(t(\tilde{g}_{k}))-f(s( \tilde{g}_{k})))-f(\tilde{\sigma_{0}}(0))=f(\varphi(x))-f(x).\]
But \(\int_{\tilde{\sigma}}p^{*}\omega=\int_{p_{*}\tilde{\sigma}}\omega=\alpha_{ \xi}(\varphi)\), so that the formula follows.
### Closed basic 1-forms of Morse type
Let \(\omega\) be a closed basic 1-form on \(G\) and let \(\xi\) denote the basic cohomology class \([\omega]\in H^{1}_{\rm bas}(G,\mathbb{R})\). Note that \(\xi=0\) if and only if there exists a basic smooth function \(f:M\to\mathbb{R}\) such that \(\omega=df\). Therefore, in this case the Morse theoretical features of \(\omega\) on \(X\) are the same as those described by means of \(f\) in [19]. We will be mainly interested in studying the case \(\xi\neq 0\).
**Lemma 3.6**.: _The critical point set of \(\omega\in\Omega^{1}_{\rm bas}(G)\) is saturated in \(M\). In particular, if \(\omega_{1}=s^{*}\omega=t^{*}\omega\) then we have a topological subgroupoid \({\rm Crit}(\omega_{1})\rightrightarrows{\rm Crit}(\omega)\) of \(G\rightrightarrows M\)._
Proof.: This easily follows from the fact that both \(t\) and \(s\) are surjective submersions and \({\rm Crit}(\omega_{1})=s^{-1}{\rm Crit}(\omega)=t^{-1}{\rm Crit}(\omega)\).
If \(\omega\) is a closed basic 1-form then by the Poincaré Lemma [20, Lem. 8.5] it follows that for each groupoid orbit \(\mathcal{O}\) there exists an open neighborhood \(\mathcal{O}\subset U\subset M\) and a basic smooth function \(f_{U}\in\Omega^{0}_{\rm bas}(G|_{U})\) such that \(\omega|_{U}=df_{U}\). If \(U\) is connected then the function \(f_{U}\) is determined by \(\omega|_{U}\) uniquely up to a constant. In particular, \({\rm Crit}(\omega|_{U})={\rm Crit}(f_{U})\). The _normal Hessian_ of \(\omega\) along a critical orbit \(\mathcal{O}\) is defined to be the normal Hessian of \(f_{U}\) along \(\mathcal{O}\). Thus:
**Definition 3.7**.: A critical orbit \(\mathcal{O}\) of \(\omega\) is said to be _nondegenerate_ if and only if \(f_{U}\) is nondegenerate in the sense of Bott. Accordingly, we say that \(\omega\) is _Morse_ if all of its critical orbits are nondegenerate.
The notion of Morse-Bott function is classical and was initially introduced by Bott in [2]. Our first key observation is that the previous notion is Morita invariant.
**Proposition 3.8**.: _Suppose that \(G\) and \(G^{\prime}\) are Morita equivalent Lie groupoids. If \(G^{\prime}\) admits a Morse closed basic \(1\)-form then so does \(G\)._
Proof.: Let \(P\) be a principal bi-bundle between \(G\) and \(G^{\prime}\) with anchor maps \(a_{l}:P\to M\) and \(a_{r}:P\to M^{\prime}\), see [8]. If \(\omega^{\prime}\) is a Morse closed basic \(1\)-form on \(G^{\prime}\) then there exists a unique closed basic \(1\)-form \(\omega\) on \(G\) such that \(a_{l}^{*}(\omega)=a_{r}^{*}(\omega^{\prime})\), see for instance [20, 25]. This establishes a correspondence between critical orbits since both \(a_{l}\) and \(a_{r}\) are surjective submersions, compare Lemma 5.11 in [21]. Furthermore, if \(\mathcal{O}^{\prime}\) and \(\mathcal{O}\) are related critical orbits then there are connected neighborhoods \(\mathcal{O}^{\prime}\subseteq U^{\prime}\subseteq M^{\prime}\) and \(\mathcal{O}\subseteq U\subseteq M\) together with basic functions \(f_{U^{\prime}}^{\prime}\) and \(f_{U}\) such that \(a_{l}^{*}(f_{U})=a_{r}^{*}(f_{U^{\prime}}^{\prime})\). Hence, if \(\mathcal{O}^{\prime}\) is nondegenerate then so is \(\mathcal{O}\) since both \(a_{l}\) and \(a_{r}\) are surjective submersions.
Observe that if \(\omega\) is a basic \(1\)-form on \(G\) then the expression \(\overline{\omega}([x])=\omega(x)\) for \([x]\in X\) is well defined. In particular, this fact allows us to define a _stacky closed \(1\)-form_ on the differentiable stack \([M/G]\) presented by \(G\rightrightarrows M\) as an element \(\overline{\omega}\) presented by a closed basic \(1\)-form \(\omega\) on \(G\). In consequence, we say that \([x]\) is a critical point of \(\overline{\omega}\) if and only if \(\mathcal{O}_{x}\) is a critical orbit of \(\omega\). Also, a critical point \([x]\) is nondegenerate if and only if \(\mathcal{O}_{x}\) is nondegenerate for \(\omega\).
**Definition 3.9**.: A stacky closed \(1\)-form \(\overline{\omega}\) on \([M/G]\) is _Morse_ if all of its critical points are nondegenerate. That is, it is presented by a Morse closed basic \(1\)-form on \(G\).
It is well known that if \(G\rightrightarrows M\) is a Lie groupoid then its _tangent groupoid_\(TG\rightrightarrows TM\) is obtained by applying the tangent functor to each of its structural maps. If \(\mathcal{O}_{x}\subset M\) is an orbit then we can restrict the groupoid structure to \(G_{\mathcal{O}_{x}}=s^{-1}(\mathcal{O}_{x})=t^{-1}(\mathcal{O}_{x})\), thus obtaining a Lie subgroupoid \(G_{\mathcal{O}_{x}}\rightrightarrows\mathcal{O}_{x}\) of \(G\rightrightarrows M\). Furthermore, the Lie groupoid structure of \(TG\rightrightarrows TM\) induces a Lie groupoid \(\nu(G_{\mathcal{O}_{x}})\rightrightarrows\nu(\mathcal{O}_{x})\) on the normal bundles, having the property that all of its structural maps are fiberwise isomorphisms. In particular, we have that \(\overline{dt}\circ\overline{ds}^{-1}:s^{*}\nu(\mathcal{O}_{x})\to t ^{*}\nu(\mathcal{O}_{x})\) defines a representation \((G_{\mathcal{O}_{x}}\rightrightarrows\mathcal{O}_{x})\curvearrowright(\nu( \mathcal{O}_{x})\to\mathcal{O}_{x})\). As a consequence, for every \(x\in M\) the isotropy group \(G_{x}\) has a canonical representation on the normal fiber \(\nu_{x}(\mathcal{O}_{x})\) called the _normal representation_ of \(G_{x}\) on the normal direction.
Let \(\mathcal{O}_{x}\) be a nondegenerate critical orbit of \(\omega\) and let \(\mathcal{O}_{x}\subset U\subset M\) and \(f_{U}:U\to\mathbb{R}\) respectively be an open neighborhood and a basic smooth function such that \(\omega|_{U}=df_{U}\). Let us also fix a groupoid metric on \(G\rightrightarrows M\) in the sense of del Hoyo and Fernandes, see [9]. Since the normal Hessian \(\mathrm{Hess}(f_{U})\) is nondegenerate it follows that by using the groupoid metric the normal bundle \(\nu(\mathcal{O}_{x})\) splits into the Whitney sum of two subbundles \(\nu_{-}(\mathcal{O}_{x})\oplus\nu_{+}(\mathcal{O}_{x})\) such that \(\mathrm{Hess}(f_{U})\) is strictly negative on \(\nu_{-}(\mathcal{O}_{x})\) and strictly positive on \(\nu_{+}(\mathcal{O}_{x})\). Let \(G_{x}\) be the isotropy group at \(x\). From [19] we know that \(\mathrm{Hess}(f_{U})\) is invariant with respect to the normal representation \(G_{x}\curvearrowright\nu(\mathcal{O}_{x})\) so that it preserves the splitting above, since the normal representation is by isometries in this case. In consequence, we get a normal sub-representation \(G_{x}\curvearrowright\nu_{-}(\mathcal{O}_{x})\). The _stacky index_ of \([x]\) is defined to be \(\dim\nu_{-}(\mathcal{O}_{x})/G_{x}=\dim\nu_{-}(\mathcal{O}_{x})-\dim G_{x}\).
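For orientation, when \(G\rightrightarrows M\) is the unit groupoid of a manifold the isotropy groups are trivial and the orbits are points, so \(\nu(\mathcal{O}_{x})=T_{x}M\) and the stacky index of a nondegenerate critical point is just the classical Morse index of the local primitive \(f_{U}\) at \(x\); the definition above may thus be seen as an orbifold counterpart of this familiar situation.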
## 4. Novikov numbers for orbifolds
A Lie groupoid \(G\rightrightarrows M\) is said to be _etale_ if either \(s\) or \(t\) is a local diffeomorphism. From now on we assume that our Lie groupoid \(G\rightrightarrows M\) is etale and proper (i.e. it presents an orbifold) and that the orbit space \(X\) is compact. From [20] we know that \(X\) can be triangulated so that the basic cohomology \(H^{\bullet}_{\mathrm{bas}}(G,\mathbb{R})\) becomes a finite dimensional vector space. Moreover, in this specific case we also get that \(H^{\bullet}_{dR}(G)\cong H^{\bullet}_{\mathrm{bas}}(G,\mathbb{R})\), see [23]. Thus, \(H^{\bullet}_{dR}(G)\cong H^{\bullet}(X,\mathbb{R})\). In particular, it follows that we may identify the total singular homology \(H_{\bullet}(G,\mathbb{Z})\) of \(G\) with the singular homology \(H_{\bullet}(X,\mathbb{Z})\) of \(X\). That is, we may assume from now on that \(H_{1}(G,\mathbb{Z})\) is a finitely generated \(\mathbb{Z}\)-module since \(X\) is compact.
Let \(\xi\in H^{1}_{\mathrm{bas}}(G,\mathbb{R})\) be the cohomology class of a closed basic \(1\)-form \(\omega\) on \(M\) and let \(\mathrm{Per}_{\xi}\) denote its corresponding \(G\)-homomorphism of periods. Observe that in the particular case of proper and etale Lie groupoids any group homomorphism \(H_{1}(G,\mathbb{Z})\to\mathbb{R}\) can be realized as the homomorphism of periods of a closed basic \(1\)-form. We define the _rank_ of the basic cohomology class \(\xi\) as the rank of the image of \(\mathrm{Per}_{\xi}\). This number will be denoted by \(\mathrm{rank}(\xi)\). Note that by arguing as in the proof of Proposition 3.3 we may prove that \(\mathrm{rank}(\xi)=0\) if and only if there is a basic function \(f:M\to\mathbb{R}\) such that \(\omega=df\). That is, \(\mathrm{rank}(\xi)=0\) if and only if \(\xi=0\).
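To illustrate the notion of rank, view the \(2\)-torus \(T^{2}\) as a unit groupoid (a trivial orbifold) and take \(\omega=a\,d\theta_{1}+b\,d\theta_{2}\) with \((a,b)\neq(0,0)\): the image of \(\mathrm{Per}_{\xi}\) is the subgroup \(a\mathbb{Z}+b\mathbb{Z}\subset\mathbb{R}\), so \(\mathrm{rank}(\xi)=1\) when \(a\) and \(b\) are rationally dependent and \(\mathrm{rank}(\xi)=2\) otherwise. This elementary example is included only for orientation.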
_Remark 4.1_.: If we consider a covering space \(p:M_{\xi}\to M\) over \(G\) which corresponds to the kernel of the \(G\)-homomorphism of periods then the expression (1) also shows that the rank of the group \(\Gamma^{G}(M_{\xi})\) equals the rank of the cohomology class \(\xi\).
**Proposition 4.2**.: _The set of classes in \(H^{1}_{\mathrm{bas}}(G,\mathbb{R})\) having rank \(1\) is dense._
Proof.: The proof of this result is similar to that of [11, Cor. 2.2; p. 38], considering instead \(H^{1}_{\mathrm{bas}}(G,\mathbb{R})\) and \(H^{1}_{\mathrm{bas}}(G,\mathbb{Z})\).
We are now in a position to define the Novikov Betti and torsion numbers associated to the basic cohomology class \(\xi\) by mimicking the three definitions introduced in [11, s. 1.5]. Let \(\Gamma\subset\mathbb{R}\) be an additive subgroup. The _Novikov ring_ \(\mathbf{Nov}(\Gamma)\) consists of formal power series of the form \(x=\sum_{j=0}^{\infty}n_{j}\tau^{\gamma_{j}}\) where the coefficients are integers \(n_{j}\in\mathbb{Z}\) and the exponents \(\gamma_{j}\in\Gamma\) are real numbers forming a decreasing sequence converging to \(-\infty\); see [11, c. 1] for further details concerning the ring structure as well as the properties of \(\mathbf{Nov}(\Gamma)\). When \(\Gamma=\mathbb{R}\) we denote \(\mathbf{Nov}(\mathbb{R})\) just by \(\mathbf{Nov}\). For any basic cohomology class \(\xi\in H^{1}_{\mathrm{bas}}(G,\mathbb{R})\) we may define a ring homomorphism \(\phi_{\xi}:\mathbb{Z}(\Pi_{1}(G,x_{0}))\to\mathbf{Nov}\) by setting \(\phi_{\xi}([\sigma]):=\tau^{\langle\xi,|\sigma|\rangle}\) for all \([\sigma]\in\Pi_{1}(G,x_{0})\). As a consequence of Example 2.6 we get that \(\phi_{\xi}\) determines a local system \(\mathcal{L}_{\xi}\) of left \(\mathbf{Nov}\)-modules over \(G\rightrightarrows M\). The groups \(H^{\mathrm{tot}}_{j}(G,\mathcal{L}_{\xi})\) are called _Novikov homology groups_ of \(\xi\). It follows that the homology \(H^{\mathrm{tot}}_{j}(G,\mathcal{L}_{\xi})\) is a finitely generated module over the ring \(\mathbf{Nov}\). Since \(\mathbf{Nov}\) is a principal ideal domain we have that the module \(H^{\mathrm{tot}}_{j}(G,\mathcal{L}_{\xi})\) is a direct sum of a free submodule and a torsion submodule. The _Novikov Betti number_ \(b_{j}(\xi)\) is defined to be the rank of the free summand of \(H^{\mathrm{tot}}_{j}(G,\mathcal{L}_{\xi})\) and the _Novikov torsion number_ \(q_{j}(\xi)\) is defined to be the minimal number of generators of the torsion submodule of \(H^{\mathrm{tot}}_{j}(G,\mathcal{L}_{\xi})\).
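For orientation, consider the case \(\Gamma=\mathbb{Z}\): an element of \(\mathbf{Nov}(\mathbb{Z})\) is a series \(\sum_{j}n_{j}\tau^{\gamma_{j}}\) with integer exponents decreasing to \(-\infty\), so only finitely many positive powers of \(\tau\) occur, and writing \(t=\tau^{-1}\) one may identify \(\mathbf{Nov}(\mathbb{Z})\) with the ring \(\mathbb{Z}((t))\) of formal Laurent series. We use this identification only as an illustration of the definition.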
_Remark 4.3_.: On the one hand, if \(G\rightrightarrows M\) is etale and proper and \(p:E\to M\) is a covering space over \(G\) (e.g. the universal covering from Example 2.3), then the action groupoid \(E\rtimes G\rightrightarrows E\) is etale and proper as well. That is, it also represents an orbifold. On the other hand, note that if \(\xi=0\) then \(\mathrm{Per}_{\xi}=0\) which implies that the homomorphism \(\phi_{\xi}\) takes values within the subring \(\mathbb{Z}\subset\mathbf{Nov}\), meaning that \(\mathcal{L}_{\xi}\) is trivial. Therefore, by looking at the groupoid homology with local coefficients as the groupoid equivariant homology described in Proposition 2.9 it follows that the total chain complex \(\mathbf{Nov}\otimes_{\mathbb{Z}(\Pi_{1}(G,x_{0}))}\bar{C}_{\bullet}(E)\) agrees in this case with \(\mathbf{Nov}\otimes_{\mathbb{Z}}\bar{C}_{\bullet}(G)\). In consequence, from [11, Lem. 1.12] we get that \(b_{j}(\xi)\) coincides with the Betti number \(\mathrm{rank}H_{j}(G,\mathbb{Z})\) and \(q_{j}(\xi)\) equals the minimal number of generators of the torsion subgroup of \(H_{j}(G,\mathbb{Z})\). But \(H_{j}(G,\mathbb{Z})\cong H_{j}(X,\mathbb{Z})\), so that we recover the corresponding numerical invariants of the orbit space \(X\).
There are two other possible ways to define the Novikov numbers associated to \(\xi\). Let \(\mathbb{Z}[\mathbf{Nov}(\Gamma)]\) be the group ring consisting just of finite sums \(x\) as above. This is a subring of \(\mathbf{Nov}(\Gamma)\). Let \(S\subset\mathbb{Z}[\mathbf{Nov}(\Gamma)]\) denote the subset consisting of the elements in \(\mathbb{Z}[\mathbf{Nov}(\Gamma)]\) with leading term \(1\), so that we have a canonical inclusion of the localized ring \(\mathcal{R}(\Gamma):=S^{-1}\mathbb{Z}[\mathbf{Nov}(\Gamma)]\) into the Novikov ring \(\mathbf{Nov}(\Gamma)\). The ring \(\mathcal{R}(\Gamma)\) will be called the _rational part_ of \(\mathbf{Nov}(\Gamma)\).
Firstly, consider the \(G\)-homomorphism of periods \(\mathrm{Per}_{\xi}:H_{1}(G,\mathbb{Z})\to\mathbb{R}\) and denote by \(\Gamma_{\xi}\) its image inside \(\mathbb{R}\). It follows that \(\Gamma_{\xi}\) is a finitely generated free abelian group. The class \(\xi\) determines a ring homomorphism \(\psi_{\xi}:\mathbb{Z}(\Pi_{1}(G,x_{0}))\to\mathcal{R}(\Gamma_{\xi})\) defined as \(\psi_{\xi}([\sigma]):=\tau^{\langle\xi,|\sigma|\rangle}\) for all \([\sigma]\in\Pi_{1}(G,x_{0})\). As above, the homomorphism \(\psi_{\xi}\) gives rise to a local system of left \(\mathcal{R}(\Gamma_{\xi})\)-modules \(\mathcal{M}_{\xi}\) over \(G\rightrightarrows M\) and the homology \(H^{\mathrm{tot}}_{j}(G,\mathcal{M}_{\xi})\) is a finitely generated module over the principal ideal domain \(\mathcal{R}(\Gamma_{\xi})\). From [11, Cor. 1.12] we obtain that \(\mathbf{Nov}(\Gamma_{\xi})\) is flat over \(\mathcal{R}(\Gamma_{\xi})\) so that we get an isomorphism \(H^{\mathrm{tot}}_{j}(G,\mathcal{L}_{\xi})\cong\mathbf{Nov}(\Gamma_{\xi})\otimes_{\mathcal{R}(\Gamma_{\xi})}H^{\mathrm{tot}}_{j}(G,\mathcal{M}_{\xi})\). This immediately implies that \(\mathrm{rank}\,H^{\mathrm{tot}}_{j}(G,\mathcal{M}_{\xi})\) equals \(b_{j}(\xi)\) and the minimal number of generators of the torsion submodule of \(H^{\mathrm{tot}}_{j}(G,\mathcal{M}_{\xi})\) agrees with \(q_{j}(\xi)\).
Secondly, consider the covering space \(p:M_{\xi}\to M\) over \(G\) which corresponds to the kernel of the \(G\)-homomorphism of periods \(\mathrm{Per}_{\xi}\), see Proposition 2.4. It is simple to see that a \(G\)-loop \(\sigma\) in \(M\) lifts to another \((M_{\xi}\rtimes G)\)-loop in \(M_{\xi}\) if and only if \(\mathrm{Per}_{\xi}(|\sigma|)=\langle\xi,|\sigma|\rangle=0\), where \(|\sigma|\in H_{1}(G,\mathbb{R})\) denotes the corresponding homology class of the \(G\)-loop \(\sigma\). Thus, by the isomorphism theorem it follows that the group of covering transformations \(\Gamma^{G}(M_{\xi})\) can be naturally identified
with \(L_{\xi}=H_{1}(G,\mathbb{Z})/\ker(\xi)\) and \(\operatorname{Per}_{\xi}\) yields an isomorphism between the groups \(L_{\xi}\) and \(\Gamma_{\xi}\). Observe that after fixing a basis for the free abelian group \(L_{\xi}\) we may identify the group ring \(\Lambda_{\xi}=\mathbb{Z}[L_{\xi}]\) with the ring of integral Laurent polynomials \(\mathbb{Z}[T_{1},\cdots,T_{r},T_{1}^{-1},\cdots,T_{r}^{-1}]\). Let us denote by \(w_{1},\cdots,w_{r}\) the weights of the variables \(T_{1},\cdots,T_{r}\) which are determined by the \(G\)-homomorphism of periods \(\operatorname{Per}_{\xi}:L_{\xi}\to\Gamma_{\xi}\). These weights are linearly independent over \(\mathbb{Z}\) so that we may define the weight of a monomial \(T_{1}^{n_{1}}\cdots T_{r}^{n_{r}}\) as \(\sum n_{j}w_{j}\). Denote by \(S_{\xi}\subset\Lambda_{\xi}\) the set consisting of the Laurent polynomials such that the monomial of maximal weight appearing in them has coefficient \(1\). This is a multiplicative subset and the localized ring \(S_{\xi}^{-1}\Lambda_{\xi}\) is a principal ideal domain since it is isomorphic to the rational subring \(\mathcal{R}(\Gamma_{\xi})\) of the Novikov ring, see [11, s. 1.3] for specific details.
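In the rank one case this construction becomes particularly transparent: there \(L_{\xi}\cong\mathbb{Z}\), \(\Lambda_{\xi}\cong\mathbb{Z}[T,T^{-1}]\) with a single weight which may be chosen positive, \(S_{\xi}\) consists of the Laurent polynomials whose coefficient in the highest power of \(T\) equals \(1\), and \(S_{\xi}^{-1}\Lambda_{\xi}\) is the rational part described above. We record this special case only as an illustration of the definitions.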
Let \(p_{1}:E\to M\) be the universal covering over \(G\) (see Example 2.3) and let \(p_{2}:F\to M\) be the covering space over \(G\) corresponding to the preimage of the kernel \(\ker(\xi)\subset H_{1}(G,\mathbb{Z})\) under the Hurewicz \(G\)-homomorphism (see Proposition 2.4). These covering spaces give rise to the action groupoids \(E\rtimes G\rightrightarrows E\) and \(F\rtimes G\rightrightarrows F\) which are also etale and proper. By viewing the groupoid homology with local coefficients in \(\mathcal{M}_{\xi}\) as the groupoid equivariant homology described in Proposition 2.9 we get isomorphisms among the total chain complexes
\[\mathcal{R}(\Gamma_{\xi})\otimes_{\mathbb{Z}(\Pi_{1}(G,x_{0}))}\tilde{C}_{ \bullet}(E)\cong S_{\xi}^{-1}\Lambda_{\xi}\otimes_{\Lambda_{\xi}}\tilde{C}_{ \bullet}(F)\cong S_{\xi}^{-1}\tilde{C}_{\bullet}(F).\]
It is important to notice that in the previous identifications we used the isomorphism \(\operatorname{Per}_{\xi}:L_{\xi}\to\Gamma_{\xi}\). As localization is an exact functor we obtain that \(H_{\bullet}^{\operatorname{tot}}(G,\mathcal{M}_{\xi})\cong S_{\xi}^{-1}H_{\bullet}(F,\mathbb{Z})\). Therefore, the Novikov Betti number \(b_{j}(\xi)\) coincides with the rank of the \(S_{\xi}^{-1}\Lambda_{\xi}\)-module \(S_{\xi}^{-1}H_{j}(F,\mathbb{Z})\) and the Novikov torsion number \(q_{j}(\xi)\) equals the minimal number of generators of the torsion submodule of \(S_{\xi}^{-1}H_{j}(F,\mathbb{Z})\).
_Remark 4.4_.: The reader has probably already noticed that the definitions of the Novikov numbers for orbifolds provided above become both natural and straightforward once the algebraic topology notions from Sections 2 and 3 have been described. It is left as an exercise to the reader to verify that results similar to those in Sections 1.5 and 1.6 from [11] may be adapted to our context with only minor changes in the proofs. In particular, we have that if \(\xi_{1},\xi_{2}\in H_{\operatorname{bas}}^{1}(G,\mathbb{R})\) are two basic cohomology classes such that \(\ker(\xi_{1})=\ker(\xi_{2})\) then \(b_{j}(\xi_{1})=b_{j}(\xi_{2})\) for all \(j\). Also, \(q_{j}(\xi_{1})=q_{j}(\lambda\xi_{2})\) for all \(\lambda\in\mathbb{R}\) with \(\lambda>0\).
_Remark 4.5_.: Since the Lie groupoids we are working with above are all etale and proper, after naturally adapting Corollaries 4.13 and 4.14 from [16, p. 224] to the homology case we may think of the total homologies \(H_{\bullet}^{\operatorname{tot}}(G,\mathcal{L}_{\xi})\), \(H_{\bullet}^{\operatorname{tot}}(G,\mathcal{M}_{\xi})\), \(H_{\bullet}(E,\mathbb{Z})\) and \(H_{\bullet}(F,\mathbb{Z})\) as being respectively identified with the usual homologies of the orbit spaces \(H_{\bullet}(X,\pi_{*}(\mathcal{L}_{\xi}))\), \(H_{\bullet}(X,\pi_{*}(\mathcal{M}_{\xi}))\), \(H_{\bullet}(E/E\rtimes G,\mathbb{Z})\) and \(H_{\bullet}(F/F\rtimes G,\mathbb{Z})\), where \(\pi:M\to X\) denotes the canonical orbit projection.
### Novikov inequalities
Let us now prove the Novikov inequalities for orbifolds. This can be done by following the strategy described in [11, s. 2.3] step by step. It is worth mentioning that the inequalities below depend at some point on the usual Morse inequalities for orbifolds which were already proved in [12] (see also [19]). Although the ideas of the proof are natural and straightforward adaptations of the classical ones, we will provide enough details in order to use most of the machinery introduced in the previous sections.
**Theorem 4.6**.: _Let \(G\rightrightarrows M\) be an etale and proper Lie groupoid such that the orbit space \(X\) is compact. Let \(\omega\) be a Morse closed basic \(1\)-form on \(M\). If \(c_{j}(\omega)\) denotes the number of critical
points in \(M/G\) having stacky Morse index \(j\) then_
\[c_{j}(\omega)\geq b_{j}(\xi)+q_{j}(\xi)+q_{j-1}(\xi), \tag{2}\]
_where \(\xi=[\omega]\in H^{1}_{\rm bas}(G,\mathbb{R})\) is the basic cohomology class of \(\omega\)._
Proof.: It suffices to prove the inequalities under the additional assumption that the basic cohomology class \(\xi\) is integral. That is, \(\xi\in H^{1}_{\rm bas}(G,\mathbb{Z})\). Up to rescaling, the latter requirement amounts to asking that \(\xi\) has rank 1. Indeed, basic cohomology classes \(\xi\) of rank 1 are real multiples of integral basic cohomology classes, namely, \(\xi=\lambda\xi_{0}\) where \(\xi_{0}\in H^{1}_{\rm bas}(G,\mathbb{Z})\) and \(\lambda\) is a nonzero real number. This is because in this specific case the image of the \(G\)-homomorphism of periods \({\rm Per}_{\xi}\) is a cyclic subgroup in \(\mathbb{R}\) so that all periods are integral multiples of a minimal period \(\lambda\in\mathbb{R}_{>0}\). In other words, \(\lambda^{-1}\xi\) has all integral periods and it belongs to \(H^{1}_{\rm bas}(G,\mathbb{Z})\). Therefore, if \(\xi=\lambda\xi_{0}\) has rank 1 and \(\omega\) is a Morse closed basic 1-form in the class \(\xi\) then \(\omega_{0}=\lambda^{-1}\omega\) is another Morse closed basic 1-form in the class \(\xi_{0}\) having the same zeroes. In consequence, \(b_{j}(\xi)=b_{j}(\xi_{0})\) and \(q_{j}(\xi)=q_{j}(\xi_{0})\) by Remark 4.4.
Assume for a moment that the Novikov inequalities (2) hold true for basic cohomology classes of rank 1. The argument showing that this assumption is enough to ensure that the inequalities hold true for every basic cohomology class of rank \(>1\) is similar to that in Lemma 2.5 from [11], considering instead basic cohomology. We sketch the argument here for the sake of completeness and because of the role it plays in our proof. Suppose that \(\xi\in H^{1}_{\rm bas}(G,\mathbb{R})\) is a basic cohomology class of rank \(>1\) and let \(\omega\) be a Morse closed basic 1-form in \(\xi\). Let \(S\subset X\) denote the set of zeroes of \(\overline{\omega}\). This is a finite set since \(X\) is compact. Consider the vector subspace \(N_{\xi}=\{\eta\in H^{1}_{\rm bas}(G,\mathbb{R}):\eta|_{\ker(\xi)}=0\}\). By arguments similar to those used in the proof of Theorem 1.44 from [11] and by Proposition 4.2 it follows that there exists a sequence of rank 1 basic cohomology classes \(\xi_{n}\in N_{\xi}\) such that \(\xi_{n}\to\xi\) as \(n\) goes to \(\infty\), \(b_{j}(\xi_{n})=b_{j}(\xi)\) and \(q_{j}(\xi_{n})=q_{j}(\xi)\) for all \(j\) and \(n\). Let us fix a basis \(\eta_{1},\cdots,\eta_{r}\) of \(N_{\xi}\) whose elements are respectively represented by closed basic 1-forms \(\omega_{1},\cdots,\omega_{r}\). Note that since \(G\) is proper and \(\eta_{k}|_{\ker(\xi)}=0\) we may ensure that there are open neighborhoods \(U_{k}\subset M\) such that \(U_{k}/G_{U_{k}}\subset X\) are open neighborhoods of \(S\) and \(\overline{\omega}_{k}\) vanishes identically on \(U_{k}/G_{U_{k}}\) for all \(k=1,\cdots,r\). It is clear that we can write \(\xi=\sum_{k=1}^{r}a_{k}\eta_{k}\) and \(\xi_{n}=\sum_{k=1}^{r}a_{n,k}\eta_{k}\) with \(a_{k},a_{n,k}\in\mathbb{R}\) such that \(a_{n,k}\to a_{k}\) as \(n\) goes to \(\infty\). Let us define \(\omega_{n}=\omega-\sum_{k=1}^{r}(a_{k}-a_{n,k})\omega_{k}\). This is a basic closed 1-form for which there is an open neighborhood \(U\subset M\), made out of the \(U_{k}\)'s above, such that \(S\subset U/G_{U}\) and \(\overline{\omega}-\overline{\omega}_{n}\) vanishes identically on \(U/G_{U}\). Furthermore, for \(n\) large enough it follows that \(\omega_{n}\) has no critical points outside \(U\). That is, \(c_{j}(\omega_{n})=c_{j}(\omega)\) for any \(j\) provided that \(n\) is large enough. Note that from the defining formula above it follows that the basic cohomology class \([\omega_{n}]\) agrees with \(\xi_{n}\), which has rank 1. In consequence, we get that if the Novikov inequalities hold true for \([\omega_{n}]\) then they must hold true also for \(\xi\).
Suppose that \(\xi\) has rank 1. Let us take a covering space \(p:M_{\xi}\to M\) over \(G\) which corresponds to the kernel of the \(G\)-homomorphism of periods \(H_{1}(G,\mathbb{Z})\to\mathbb{R}\) of the cohomology class \(\xi\), see Proposition 2.4. In this case \(M_{\xi}\) has an infinite cyclic group of equivariant covering transformations \(\Gamma^{G}(M_{\xi})\) whose generator is denoted by \(T\). We already know that the pullback \(p^{*}(\omega)\) is a closed basic-exact 1-form so that there exists a basic function \(f:M_{\xi}\to\mathbb{R}\), uniquely determined up to constant, such that \(p^{*}(\omega)=df\). From Lemma 3.5 it follows that \(f(Tx)-f(x)=c\) is a constant for all \(x\in M_{\xi}\). The number \(c\) equals the minimal period of \(\omega\) so that we may assume \(c=\pm 1\) in our case since \(\xi\) is integral. Assume that the generator \(T\) is chosen so that \(f(Tx)-f(x)=-1\) for all \(x\in M_{\xi}\). Otherwise, we may take \(T^{-1}\) instead of \(T\).
Let us consider the action groupoid \(M_{\xi}\rtimes G\rightrightarrows M_{\xi}\), the stacky function \(\overline{f}:[M_{\xi}/M_{\xi}\rtimes G]\to\mathbb{R}\) determined by \(f\) and the Lie groupoid morphism \(p:M_{\xi}\rtimes G\to G\) induced by the covering space \(M_{\xi}\to M\) over \(G\). We denote by \(X_{\xi}\) the orbit space \(M_{\xi}/M_{\xi}\rtimes G\) and by \(\overline{T}:X_{\xi}\to X_{\xi}\) and \(\overline{p}:X_{\xi}\to X\) the induced maps between the orbit spaces. On the one hand, observe that the formula \(\overline{f}(\overline{T}[x])-\overline{f}([x])=-1\) holds true for all \([x]\in X_{\xi}\). On the other hand, the critical points of \(f\) are precisely the preimages under \(p\) of the zeroes \(x\in M\) of \(\omega\), so that the critical points of the stacky function \(\overline{f}\) in \(X_{\xi}\) are given by the preimages under \(\overline{p}\) of the zeroes \([x]\) of \(\overline{\omega}\) in \(X\). Pick a regular value \(b\in\mathbb{R}\) of \(\overline{f}\) and set \(V=\overline{f}^{-1}(b)\), \(N=\overline{f}^{-1}([b,b+1])\), and \(Y=\overline{f}^{-1}((-\infty,b+1])\). Note that the projection \(\overline{p}\) determines a one-to-one correspondence between the critical points of \(\overline{f}|_{N}\) and the zeroes of \(\overline{\omega}\). Furthermore, \(\overline{f}|_{N}:N\to[b,b+1]\) is a stacky Morse function since \(p\) is a local diffeomorphism and \(\omega\) is of Morse type. Therefore, \(c_{j}(\overline{f}|_{N})=c_{j}(\omega)\) for all \(j=0,1,2,\cdots\). The homeomorphism \(\overline{T}\) maps \(Y\) into itself. As \(X_{\xi}\) can be triangulated (see [20]), it follows that we may fix a triangulation of \(V\) which in turn induces another triangulation of \(\overline{T}^{-1}V\), thus obtaining a simplicial isomorphism \(\overline{T}:\overline{T}^{-1}V\to V\). Let us choose a triangulation of \(N\) in such a way that \(V\) and \(\overline{T}^{-1}V\) are sub-complexes. So, after applying the homeomorphism \(\overline{T}\) we can get a triangulation of the whole \(Y\) so that \(\overline{T}:Y\to Y\) is represented by a simplicial map. In other words, we have obtained a chain complex \(\overline{C}_{\bullet}(Y)\) of simplicial chains which actually is a complex of finitely generated \(\mathbb{Z}[\overline{T}]\)-modules.
The standard Morse inequalities for orbifolds were proved in Theorem 7.11 from [12]. Hence, by mimicking the analysis of the Betti numbers \(b_{j}(\overline{C},\mathfrak{p})\) associated to different prime ideals \(\mathfrak{p}\subset\mathbb{Z}[\overline{T}]\), exactly as is done in the remaining part of the proof of Theorem 2.4 in [11], we can get the inequalities
\[\sum_{k=0}^{j}(-1)^{k}c_{j-k}(\omega)\geq q_{j}(\xi)+\sum_{k=0}^{j}(-1)^{k}b_{ j-k}(\xi),\]
which are slightly stronger than the Novikov inequalities we wanted to prove.
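As a quick sanity check, consider the \(2\)-torus viewed as a unit groupoid (a trivial orbifold) and the closed \(1\)-form \(d\theta_{1}\): it has no zeroes, hence it is vacuously of Morse type, and its class \(\xi=[d\theta_{1}]\) is nonzero. Since all \(c_{j}(d\theta_{1})=0\), the inequalities (2) force \(b_{j}(\xi)=q_{j}(\xi)=0\) for every \(j\), in agreement with the classical vanishing of Novikov homology of the torus for nonzero classes. This example is only meant as an illustration of the statement.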
Let us quickly explain how to apply Theorem 4.6 to find a lower bound for the number of zeros of certain symplectic vector fields on symplectic orbifolds. Let \(G\rightrightarrows M\) be an etale and proper Lie groupoid with compact orbit space \(X\) and let \(\rho:A\to TM\) denote the anchor map of its Lie algebroid. The set of _basic vector fields_ \(\mathfrak{X}_{\mathrm{bas}}(G)\) is by definition the quotient
\[\mathfrak{X}_{\mathrm{bas}}(G)=\frac{\{(v_{1},v_{0})\in\mathfrak{X}(G)\times \mathfrak{X}(M):ds(v_{1})=v_{0}\circ s,\ dt(v_{1})=v_{0}\circ t\}}{\{(v_{1},v_{0 }):ds(v_{1})=v_{0}\circ s,\ dt(v_{1})=v_{0}\circ t,\ v_{1}\in(\ker(ds)+\ker( dt))\}}.\]
That is, a basic vector field is not strictly a vector field, but an equivalence class of pairs of vector fields. It is simple to see that a basic vector field \(v=(v_{1},v_{0})\) is determined by its second component \(v_{0}\). By Proposition 5.3.12 from [13] we know that Morita equivalent etale and proper groupoids have isomorphic spaces of basic vector fields so that we may think of \(\mathfrak{X}_{\mathrm{bas}}(G)\) as the space of vector fields on the orbifold presented by \(G\rightrightarrows M\). Note that if we identify the basic forms \(\Omega^{\bullet}_{\mathrm{bas}}(G)\) with the set of pairs \(\{(\theta_{1},\theta_{0})\in\Omega^{\bullet}(G)\times\Omega^{\bullet}(M):s^{\ast}(\theta_{0})=\theta_{1}=t^{\ast}(\theta_{0})\}\) then we have contraction operations and Lie derivatives \(\iota:\mathfrak{X}_{\mathrm{bas}}(G)\times\Omega^{\bullet}_{\mathrm{bas}}(G)\to\Omega^{\bullet-1}_{\mathrm{bas}}(G)\) and \(\mathcal{L}:\mathfrak{X}_{\mathrm{bas}}(G)\times\Omega^{\bullet}_{\mathrm{bas}}(G)\to\Omega^{\bullet}_{\mathrm{bas}}(G)\) respectively defined by
\[\iota_{v}\theta=(\iota_{\vec{v_{1}}}\theta_{1},\iota_{\vec{v_{0}}}\theta_{0}) \quad\text{and}\quad\mathcal{L}_{v}\theta=(\mathcal{L}_{\vec{v_{1}}}\theta_{1 },\mathcal{L}_{\vec{v_{0}}}\theta_{0}),\]
where \((\vec{v_{1}},\vec{v_{0}})\in\mathfrak{X}(G)\times\mathfrak{X}(M)\) is a representative of \(v\). These expressions do not depend on the choice of \((\vec{v_{1}},\vec{v_{0}})\), see [13]. By following [13, 14] we have that a _symplectic form_ on \(G\rightrightarrows M\) is by definition a closed basic 2-form \(\Omega\) on \(M\) which is _nondegenerate_ in the sense that \(\ker(\Omega)=\mathrm{im}(\rho)\). This nondegeneracy requirement implies that the contraction with a symplectic form \(\Omega\) induces a linear isomorphism \(\Omega^{\flat}:\mathfrak{X}_{\mathrm{bas}}(G)\to\Omega^{1}_{\mathrm{bas}}(G)\). Such a notion is also Morita invariant so that it yields a well-defined notion of symplectic form over an orbifold. We say that a basic vector field \(v\) is _symplectic_ if \(\mathcal{L}_{\tilde{v_{0}}}\Omega=0\). Note that after using the Cartan formula for the Lie derivative the latter requirement is equivalent to asking that \(\omega=\iota_{\tilde{v_{0}}}\Omega\) is a closed basic 1-form. But we already know that if \(\omega\) is a closed 1-form then the formula \(\omega=\iota_{\tilde{v_{0}}}\Omega\) defines a basic vector field \(v\) which must be symplectic since \(\omega\) is closed. Therefore, it follows that there is a one-to-one correspondence between the closed basic 1-forms and basic symplectic vector fields.
Motivated by Proposition 2.6 in [14] we define the _critical point set of a basic vector field_\(v\) as the critical point set of \(\tilde{v_{0}}\) viewed as a section of the vector bundle \(TM/\mathrm{im}(\rho)\to M\). It is simple to check that such a definition does not depend on \(\tilde{v_{0}}\). Because of the nondegeneracy condition imposed over \(\Omega\) it holds automatically that the critical points of \(v\) and \(\omega=\iota_{\tilde{v_{0}}}\Omega\) agree. Hence, on symplectic orbifolds, the problem of estimating from below the number of zeros of closed basic 1-forms is equivalent to finding a lower bound for the numbers of zeros of basic symplectic vector fields. In consequence, the natural generalization of the Novikov theory we have developed in this paper provides a tool for using topological methods to study zeros of symplectic vector fields on symplectic orbifolds. Furthermore, it also opens new research directions for many important physical models which can be described by the Hamiltonian formalism over orbifolds allowing closed basic 1-forms as their Hamiltonians. This can be done in the same spirit that it was studied by Novikov in [18].
| By proving a natural extension of the Novikov numbers associated with the basic cohomology class of a closed 1-form on an orbifold, we prove the Novikov inequalities in the compact case. |
2310.09694 | ADAPT-QAOA with a classically inspired initial state | Quantum computing may provide advantage in solving classical optimization
problems. One promising algorithm is the quantum approximate optimization
algorithm (QAOA). There have been many proposals for improving this algorithm,
such as using an initial state informed by classical approximation solutions. A
variation of QAOA called ADAPT-QAOA constructs the ansatz dynamically and can
speed up convergence. However, it faces the challenge of frequently converging
to excited states which correspond to local minima in the energy landscape,
limiting its performance. In this work, we propose to start ADAPT-QAOA with an
initial state inspired by a classical approximation algorithm. Through
numerical simulations we show that this new algorithm can reach the same
accuracy with fewer layers than the standard QAOA and the original ADAPT-QAOA.
It also appears to be less prone to the problem of converging to excited
states. | Vishvesha K. Sridhar, Yanzhu Chen, Bryan Gard, Edwin Barnes, Sophia E. Economou | 2023-10-15T01:12:12 | http://arxiv.org/abs/2310.09694v1 | # ADAPT-QAOA with a classically inspired initial state
###### Abstract
Quantum computing may provide advantage in solving classical optimization problems. One promising algorithm is the quantum approximate optimization algorithm (QAOA). There have been many proposals for improving this algorithm, such as using an initial state informed by classical approximation solutions. A variation of QAOA called ADAPT-QAOA constructs the ansatz dynamically and can speed up convergence. However, it faces the challenge of frequently converging to excited states which correspond to local minima in the energy landscape, limiting its performance. In this work, we propose to start ADAPT-QAOA with an initial state inspired by a classical approximation algorithm. Through numerical simulations we show that this new algorithm can reach the same accuracy with fewer layers than the standard QAOA and the original ADAPT-QAOA. It also appears to be less prone to the problem of converging to excited states.
## I Introduction
Quantum-classical hybrid algorithms have been designed to exploit the limited quantum resources at hand by leveraging classical computation. As an example, variational quantum eigensolvers have shown promise in solving problems governed by quantum mechanics, such as finding the ground state energy of molecules [1; 2]. In a similar way, classical optimization problems can be solved by mapping the solution to the ground state of a quantum Hamiltonian [3; 4; 5; 6; 7]. It is hoped that allowing the state to explore the Hilbert space will speed up convergence to the solution.
The prototypical algorithm for classical optimization problems is the quantum approximate optimization algorithm (QAOA), which prepares the solution as a parameterized quantum state [8; 9; 6]. One example of the problems QAOA is designed to solve is the MaxCut problem, where the goal is to maximize the sum of (possibly weighted) edges on a cut separating the vertices in a graph. This problem is NP-hard; the classical Goemans-Williamson algorithm can obtain an approximation ratio (the cut value found by the algorithm divided by the true solution) of 0.878 for unweighted graphs, which is the highest guaranteed approximation ratio [10; 11]. As the number of layers in the ansatz approaches infinity, QAOA can reproduce the Trotterized version of an adiabatic evolution to the ground state, and the approximation ratio approaches 1. However, the performance is limited at a finite number of layers. With just one layer, for example, for 3-regular unweighted graphs, the worst-case approximation ratio is 0.6924 [6].
On near-term quantum processors, the circuit depth is limited by decoherence, which prompts the development of algorithms utilizing shallower circuits. It has been shown that a dynamically constructed ansatz in the variational quantum eigensolver can reach the same accuracy with more compact quantum circuits [12; 13; 14]. Similarly, QAOA can benefit from the adaptive strategy in constructing the ansatz. In the algorithm called ADAPT-QAOA, the fixed mixer operator in the ansatz is replaced with an operator selected adaptively in between rounds of optimization [15]. One challenge in this approach is that the algorithm may find an excited state given the energy gradient criterion used for operator selection, which does not guarantee that the eigenstate the algorithm converges to is the ground state [16]. In the adaptive variational quantum eigensolver, this is less of an issue as the optimized state at any step stays close to the global optimum [17; 18].
Another strategy to further improve QAOA is to use an initial state inspired by classical optimization algorithms. There exist classical algorithms which first relax the rank constraint in the original problem to different extents and later construct a solution by first solving the relaxed problem [19; 10]. In Refs. [11; 20] the authors proposed mapping the solution to the relaxed problem to a quantum state, which is then used as the initial state of QAOA. This so-called warm start improves the performance of a QAOA ansatz with a small number of layers. Unlike the standard QAOA, the warm start eliminates the convergence guarantee since the initial state removes the resemblance to adiabatic evolution. This can be remedied by changing the mixer operator so that the initial state is the ground state of the new mixer operator [20; 21].
For large bounded-degree unweighted graphs, Cain et al. showed that QAOA with a constant depth is unlikely to improve the approximation ratio of a good (but not the optimal) initial state, presenting a challenge for warm starting QAOA [22]. Their analysis applies to QAOA with the original mixer operator and a good computational state as the initial state. In order to circumvent this issue, at least one of these conditions has to be violated.
In this work, we apply the warm-start strategy in ADAPT-QAOA. We adopt the technique of Refs. [11; 21] and construct the initial state based on the classical solution to a problem whose rank constraint is partially relaxed [19]. This initial state has a lower energy expectation value than the standard initial state \(\ket{+}^{\otimes n}\), and the standard ADAPT-QAOA procedure will optimize the variational parameters starting from the optimal state from the last layer. In this fashion the energy expectation value of the optimal state at each layer will stay close to the ground state energy. We numerically demonstrate the new algorithm, which we call warm-ADAPT-QAOA, on weighted and unweighted regular graphs, and show that it outperforms the standard QAOA and the original ADAPT-QAOA in terms of number of ansatz layers and robustness of finding the ground state. We also observe that the approximation ratio of the initial state can be significantly improved by the ansatz.
The manuscript is organized as follows. In Sec. II we review both ADAPT-QAOA and the warm-start approach in QAOA, followed by a summary of warm-ADAPT-QAOA. We then demonstrate its performance with numerical simulations for weighted and unweighted graphs in Sec. III. In Sec. IV, we show that the warm-start approach offers improvement for ADAPT-QAOA through analysis of the first step of the two algorithms. We summarize the findings and discuss some open questions in Sec. V.
## II Description of the algorithm
### Review of ADAPT-QAOA
Many classical optimization problems can be mapped to finding the ground state of an Ising Hamiltonian [6]. In this work we focus on the MaxCut problem, where for a \(n\)-vertex graph the objective function to be maximized is
\[F=\frac{1}{2}\sum_{\langle jk\rangle}w_{\langle jk\rangle}(1-x_{j}x_{k}), \tag{1}\]
where \(w_{\langle jk\rangle}\) is the weight of edge \(\langle jk\rangle\), and \(x_{j}\in\{\pm 1\}\) is the binary variable on the \(j\)-th vertex. The corresponding Hamiltonian whose energy is to be minimized is
\[C=-\frac{1}{2}\sum_{\langle jk\rangle}w_{\langle jk\rangle}(I-Z_{j}Z_{k}), \tag{2}\]
where \(Z_{j}\) is the Pauli \(Z\) operator for the \(j\)-th qubit. The standard QAOA approximates the ground state with a parameterized ansatz, which is similar to the Trotterized version of the adiabatic evolution [6],
\[\ket{\psi}=\prod_{i=1}^{p}e^{-i\beta_{i}M}e^{-i\gamma_{i}C}\ket{+}^{\otimes n}, \tag{3}\]
where \(p\) is the number of layers and
\[M=\sum_{i=1}^{n}X_{i}. \tag{4}\]
This operator is known as the mixer.
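As a concrete illustration of Eqs. (2)-(4), the following minimal state-vector sketch (written in Python/NumPy; the helper names, the example graph, and the parameter values are illustrative and not taken from this work) builds the diagonal cost operator for a small weighted graph and applies a \(p\)-layer ansatz to \(\ket{+}^{\otimes n}\).

```python
import numpy as np

def cost_diagonal(n, weighted_edges):
    """Diagonal of C = -1/2 sum_<jk> w_jk (I - Z_j Z_k), Eq. (2), in the computational basis."""
    diag = np.zeros(2 ** n)
    for b in range(2 ** n):
        # qubit 0 is the most significant bit, matching the tensor axis order used below
        z = [1 - 2 * ((b >> (n - 1 - q)) & 1) for q in range(n)]
        diag[b] = -0.5 * sum(w * (1 - z[j] * z[k]) for j, k, w in weighted_edges)
    return diag

def apply_single_qubit(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, qubit, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

def qaoa_state(n, weighted_edges, gammas, betas):
    """|psi> of Eq. (3): alternating cost and mixer layers applied to |+>^n."""
    diag = cost_diagonal(n, weighted_edges)
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)
    X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * diag) * state              # e^{-i gamma C}, C is diagonal
        rx = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X   # e^{-i beta X}
        for q in range(n):                                      # e^{-i beta M}, Eq. (4)
            state = apply_single_qubit(state, rx, q, n)
    return state, diag

# Example: one layer on a weighted triangle graph (edges given as (j, k, weight))
edges = [(0, 1, 0.7), (1, 2, 0.4), (0, 2, 0.9)]
psi, diag = qaoa_state(3, edges, gammas=[0.3], betas=[0.5])
print(float(np.real(np.vdot(psi, diag * psi))))                 # <psi|C|psi>
```

Because \(C\) is diagonal in the computational basis, the cost layer reduces to an elementwise phase, while the standard mixer factorizes into identical single-qubit rotations.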
Retaining the alternating structure, ADAPT-QAOA replaces the fixed mixer with an operator selected for each layer. It calculates the energy gradient with respect to the new variational parameter for each candidate operator and chooses the one with the largest gradient magnitude [15]. For a Pauli operator \(A\) as the candidate mixer, the gradient is given by
\[\bra{\Psi}e^{i\gamma_{0}C}i[C,A]e^{-i\gamma_{0}C}\ket{\Psi}=2\,\mathrm{Re}\left(\bra{\Psi}e^{i\gamma_{0}C}iCAe^{-i\gamma_{0}C}\ket{\Psi}\right), \tag{5}\]
where \(\ket{\Psi}\) is the current optimal state, \(A\) is taken from a pre-selected operator pool and \(\gamma_{0}\) is a small finite value [15]. All the variational parameters in the new ansatz are subsequently optimized, producing a new optimal state. It is worth mentioning that the optimization at the \(k\)-th layer starts by initializing the new parameters \(\beta_{k}\) at \(0\) and \(\gamma_{k}\) at \(\gamma_{0}\). By adaptively choosing the mixer at each layer from a pool containing two-qubit Pauli operators, ADAPT-QAOA is able to converge to the ground state of the cost function with lower circuit depths and fewer CNOT gates [15].
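For small instances the selection rule of Eq. (5) can be evaluated directly with dense matrices. The sketch below (Python/NumPy; the pool shown is only a small illustrative subset of the full pool listed later, and all names are placeholders) picks the candidate mixer with the largest gradient magnitude.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_on(n, ops):
    """Dense n-qubit operator with the given 2x2 matrices on selected qubits (qubit 0 leftmost)."""
    out = np.array([[1.0 + 0j]])
    for q in range(n):
        out = np.kron(out, ops.get(q, I2))
    return out

def adapt_gradient(psi, cost_diag, A, gamma0=0.01):
    """Eq. (5): 2 Re <Psi| e^{i g0 C} i C A e^{-i g0 C} |Psi>, with C stored as a diagonal."""
    phi = np.exp(-1j * gamma0 * cost_diag) * psi
    return 2.0 * np.real(np.vdot(phi, 1j * cost_diag * (A @ phi)))

def select_mixer(psi, cost_diag, pool, gamma0=0.01):
    """Return the candidate with the largest gradient magnitude."""
    grads = {name: adapt_gradient(psi, cost_diag, A, gamma0) for name, A in pool.items()}
    best = max(grads, key=lambda name: abs(grads[name]))
    return best, grads[best]

# Tiny demo: n = 3 qubits, a single unit-weight edge (0, 1), the state |+>^n, and a small pool
n = 3
cost_diag = -0.5 * (1 - np.real(np.diag(pauli_on(n, {0: Z, 1: Z}))))
psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)
pool = {
    "sum_X": sum(pauli_on(n, {q: X}) for q in range(n)),
    "Y0":    pauli_on(n, {0: Y}),
    "Y0Z1":  pauli_on(n, {0: Y, 1: Z}),
    "X1Y2":  pauli_on(n, {1: X, 2: Y}),
}
print(select_mixer(psi, cost_diag, pool))   # the Y Z pair on the only edge has the largest gradient
```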
### Review of warm start
After collecting the set of \(n\) binary variables into one \(n\)-bit variable \(x=\{x_{j}\}\in\{\pm 1\}^{n}\), the MaxCut objective function in Eq. (1) is written as
\[F=\frac{1}{2}W+\frac{1}{4}\text{Tr}(-A^{\text{T}}xx^{\text{T}}), \tag{6}\]
where \(W=\sum_{\langle jk\rangle}w_{\langle jk\rangle}\) and \(A\) is the adjacency matrix of the graph. The \(n\times n\) matrix \(Y=xx^{\text{T}}\) is positive-semidefinite, rank-1, with diagonal entries equal to 1 [11]. Relaxing the rank constraint on \(Y\) converts the problem to a semidefinite program [10]. Since \(Y\) is positive-semidefinite, one can perform a Cholesky decomposition \(Y=X^{\text{T}}X\), where each column of \(X\) can be identified as a \(n\)-dimensional vector \(\vec{v}_{j}\) associated with the \(j\)-th vertex. Each \(\vec{v}_{j}\) has norm 1 since diagonal entries of \(Y\) are 1. Restoring the rank constraint corresponds to setting the first component of each \(\vec{v}_{j}\) (i.e. \(X_{1j}\)) to \(\pm 1\) while leaving all other components 0. Burer and Monteiro rewrote the relaxed problem as
\[\text{maximize Tr}(-A^{\text{T}}X^{\text{T}}X),\] \[\text{subject to }|\vec{v}_{j}|=1\,\forall j,\] \[\vec{v}_{j}\in\mathbb{R}^{n}\,\forall j, \tag{7}\]
where \(\vec{v}_{j}\) is the \(j\)-th column of \(X\)[19]. They further proposed modifying the constraint \(\vec{v}_{j}\in\mathbb{R}^{n}\) to \(\vec{v}_{j}\in\mathbb{R}^{k}\) for some \(k<n\), known as a rank-\(k\) formulation [19]. The original MaxCut problem corresponds to \(k=1\) while the \(k=n\) relaxation is a semidefinite program. The solution of a relaxed problem \(X\) gives a series of \(k\)-dimensional unit vectors. From this, one can use a hyperplane to produce a cut configuration [10; 19; 11].
Alternatively, the rank-2 or rank-3 Burer-Monteiro solution provides a heuristic initial state for QAOA, known as the warm start, which can improve the performance for the first few layers [11; 21]. Here each of the 2-dimensional or 3-dimensional vectors is interpreted as a quantum state on the Bloch sphere. Since a simultaneous rotation on all the vectors has no influence on the objective in the Burer-Monteiro formulation, such a rotation is performed so that one of the \(n\) vectors lands at the north pole of the Bloch sphere. From the \(n\) quantum states produced this way, the one with the lowest energy is chosen as the initial state.
### Warm-ADAPT-QAOA
Following Ref. [11], we adopt the rank-3 Burer-Monteiro solution and map the simultaneously rotated vectors to a quantum state that will serve as the initial state for ADAPT-QAOA. For an \(n\) vertex graph with edge weights \(\{w_{\langle jk\rangle}\}\), \(n\) random vectors \(\{\vec{v}_{j}\}\) are initialized on a 2-sphere. We then minimize the objective function
\[\tilde{F}=\sum_{\langle jk\rangle}w_{\langle jk\rangle}\vec{v}_{j}\cdot\vec{v }_{k}, \tag{8}\]
by performing stochastic gradient descent on the vectors. All the vectors in the solution are rotated simultaneously according to the criterion mentioned above. For ADAPT-QAOA, we take the following mixer operator pool \(\{\sum_{i=1}^{n}X_{i}\}\cup\{X_{i},Y_{i}\}_{i=1,\ldots n}\cup\{X_{j}Y_{k},X_{j }Z_{k},Y_{j}Z_{k},X_{j}X_{k},Y_{j}Y_{k},Z_{j}Z_{k}\}_{j,k=1,\ldots,n,j\neq k}\). In addition to the cost Hamiltonian, the two-qubit operators can provide extra entangling operations if selected. In Ref. [21], the resemblance to adiabatic evolution is restored by tailoring the mixer operator to the initial state. In a similar way, one can add this adjusted mixer operator to the pool as a candidate, restoring the possibility of realizing adiabatic evolution in the infinite depth limit. The adjusted operator takes the form of a sum of single-qubit operators, for which the warm start initial state is the ground state. We denote this version of the algorithm as am-warm-ADAPT-QAOA, where the adjusted mixer is included in the ADAPT operator pool. In Fig. 1 we summarize the procedure of warm-ADAPT-QAOA.
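A minimal sketch of this warm-start construction is given below (Python/NumPy). It uses plain projected gradient descent in place of the stochastic variant, and the step count, learning rate, and example graph are illustrative assumptions. The sketch minimizes Eq. (8) over unit vectors, rotates each vertex's vector to the north pole in turn, keeps the rotation whose product state has the lowest cost expectation, and returns the Bloch angles that define the initial state.

```python
import numpy as np

def burer_monteiro_rank3(n, edges, steps=2000, lr=0.05, seed=0):
    """Projected gradient descent on Eq. (8): minimize sum_<jk> w_jk v_j . v_k with |v_j| = 1."""
    rng = np.random.default_rng(seed)
    V = rng.normal(size=(n, 3))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(steps):
        grad = np.zeros_like(V)
        for j, k, w in edges:
            grad[j] += w * V[k]
            grad[k] += w * V[j]
        V -= lr * grad
        V /= np.linalg.norm(V, axis=1, keepdims=True)   # project back onto the 2-sphere
    return V

def rotation_to_north(v):
    """3x3 rotation matrix taking the unit vector v to the +z axis (Rodrigues' formula)."""
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(v, z)
    s, c = np.linalg.norm(axis), float(np.dot(v, z))
    if s < 1e-12:
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    axis /= s
    K = np.array([[0, -axis[2], axis[1]], [axis[2], 0, -axis[0]], [-axis[1], axis[0], 0]])
    return np.eye(3) + s * K + (1 - c) * (K @ K)

def expected_cost(V, edges):
    """<C> of the product state whose Bloch vectors are the rows of V (only z-components enter)."""
    return -0.5 * sum(w * (1 - V[j, 2] * V[k, 2]) for j, k, w in edges)

def warm_start_angles(n, edges, **kwargs):
    """Bloch angles (theta, phi) of the warm-start product state."""
    V = burer_monteiro_rank3(n, edges, **kwargs)
    candidates = []
    for j in range(n):                                  # rotate each vertex to the north pole in turn
        Vr = V @ rotation_to_north(V[j]).T
        candidates.append((expected_cost(Vr, edges), Vr))
    _, Vr = min(candidates, key=lambda t: t[0])
    theta = np.arccos(np.clip(Vr[:, 2], -1.0, 1.0))
    phi = np.arctan2(Vr[:, 1], Vr[:, 0])
    return theta, phi   # |psi_0> = prod_j [cos(theta_j/2)|0> + e^{i phi_j} sin(theta_j/2)|1>]

# Example: weighted triangle
edges = [(0, 1, 0.7), (1, 2, 0.4), (0, 2, 0.9)]
print(warm_start_angles(3, edges))
```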
## III Numerical simulation
Here, we study the performance of warm-ADAPT-QAOA and am-warm-ADAPT-QAOA. The Nelder-Mead optimization method is used to classically optimize the variational parameters \(\vec{\gamma}\) and \(\vec{\beta}\). We initialize \(\gamma\) at \(\gamma_{0}=0.01\) and \(\beta\) at \(0\), due to \(\gamma=0\) being a saddle point of the cost function [15]. The algorithm is tested on \(n\)-qubit weighted and unweighted random regular graph instances of degree \(D\). The weights are drawn from the uniform distribution between \(0\) and \(1\). The true ground state energy is calculated exactly and compared to the energy output by the algorithms. The energy error is normalized to the true ground state energy. We compare the performance to that of the standard QAOA, the standard QAOA with warm start and a standard mixer, the standard QAOA with warm start along with the adjusted mixer (denoted as am-QAOA warm start), and ADAPT-QAOA. Since it is not our focus to find the best optimizer, we ran each algorithm only once for a given graph, instead of finding the best parameters from multiple optimizers and parameter initialization schemes. We note that this does not guarantee the global energy minimum for a given ansatz is found.
### Evidence for Improvement
In Fig. 2 we show the energy error for each layer of the ansatz for \(n=6,D=3\), \(n=8,D=5\), and \(n=10,D=7\) graphs for four different algorithms. We observe that warm-ADAPT-QAOA performs better than the other three algorithms for \(n\geq 8\), but not for \(n=6\), where ADAPT-QAOA rapidly converges to the ground state. Both warm start algorithms are closer to the true ground state at \(p=1\), but the standard QAOA with warm start and the standard mixer does not significantly improve the energy error from there, corroborating the results of Ref. [22]. On the other hand, warm-ADAPT-QAOA continues to reduce the energy at roughly the same rate as ADAPT-QAOA and the standard QAOA at early layers. For larger graphs, warm-ADAPT-QAOA converges even faster than ADAPT-QAOA at later layers. This suggests that warm-ADAPT-QAOA is less prone to the difficulty in further optimization encountered by the standard QAOA with warm start and the standard mixer. We also find that compared to warm-ADAPT-QAOA, am-warm-ADAPT-QAOA performs either similarly or slightly better.
In Fig. 3a, we present the proportion of graph instances that reach an energy error of \(1\%\) within \(15\) layers. This again shows that warm-ADAPT-QAOA quickly converges to the solution more often for larger graphs, while ADAPT-QAOA slightly outperforms the warm start for smaller graphs. We further analyze this claim by calculating how much the energy is lowered from \(p=0\) (i.e. the reference state) to \(p=15\) for both algorithms, defined as \(1-\frac{\langle C\rangle_{p=15}-C_{\min}}{\langle C\rangle_{p=0}-C_{\min}}\), where \(C_{\min}\) is the exact minimum of the cost function. This is plotted in Fig. 3b. Similarly, for larger graphs, warm-ADAPT-QAOA reaches a much lower energy than ADAPT-QAOA over the course of 15 layers. This indicates that using a warm start is advantageous for larger and more complicated graphs, and may be so in the regime where classical algorithms become inefficient and quantum algorithms can provide speedup.
An important measure of resource cost for near-term quantum algorithms is the number of entangling gates required to obtain a certain accuracy, as these gates are generally more noisy than single-qubit gates. We estimate the number of CNOT gates in the circuit by writing each entangling operation in the ansatz as a combination of two CNOT gates and single qubit gates. In Fig. 4 we show the average number of CNOT gates required by each algorithm to reach an energy error of 0.01. It should be noted that this is an upper bound, as further transpilation may reduce the number of CNOT gates. For \(n\geq 8\), warm-ADAPT-QAOA uses significantly fewer CNOT gates than ADAPT-QAOA while for \(n=6\), it uses slightly more. Most of this improvement is due to the warm start algorithm reaching the energy error threshold within fewer layers. Additionally, we note that this average only includes the instances that reach the threshold within 15 layers, when most of the ADAPT-QAOA instances do not achieve this. The plot is a vast underestimate for ADAPT-QAOA, as most instances would require far more layers to achieve the threshold.
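The counting behind this estimate can be sketched as follows (an illustrative upper bound only; it assumes each cost layer contributes one \(ZZ\) rotation per graph edge and each two-qubit mixer one Pauli-string rotation, each decomposed into two CNOTs plus single-qubit gates, with no further transpilation).

```python
def estimate_cnots(num_edges, mixers):
    """Upper-bound CNOT count for a p-layer MaxCut ansatz, p = len(mixers).

    Each cost layer is assumed to contribute one ZZ rotation per graph edge and each
    two-qubit mixer one Pauli-string rotation, every such rotation costing two CNOTs;
    single-qubit and sum-of-X mixers contribute no CNOTs.  No transpilation is applied.
    """
    two_qubit_mixers = sum(1 for m in mixers if len(m) == 2)   # e.g. ("Y", "Z")
    return 2 * num_edges * len(mixers) + 2 * two_qubit_mixers

# Example: a 6-vertex 3-regular graph (9 edges), 4 adaptive layers, two of them two-qubit mixers
print(estimate_cnots(9, [("X",), ("Y", "Z"), ("X", "Y"), ("X",)]))   # 2*9*4 + 2*2 = 76
```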
### Dependence on the quality of the warm start
The warm-start approach can help ADAPT-QAOA circumvent the issue of converging to an excited state by initializing close to the true ground state. However, since the rank-3 Burer-Monteiro relaxation is not a semidefinite program, it is not guaranteed that the solution found by gradient descent is the global minimum of the relaxed objective function, or that the corresponding quantum state is close to the ground state. We therefore analyze how the quality of the warm start affects the performance of warm-ADAPT-QAOA. We quantify the quality of the initial state \(|\psi\rangle\) by its overlap \(|\langle G|\psi\rangle|\) with the true ground state \(|G\rangle\).
For thirty \(n=8\), \(D=5\) weighted graph instances, we find that \(63.3\%\) have \(|G\rangle\) as the basis state with the largest component in \(|\psi\rangle\), and \(80\%\) of fifteen \(n=10,D=5\) instances also have this property. This indicates that the initial state is usually of high quality.
From Fig. 5 we can see a very weak correlation between the quality of the initial state and the energy error reduction from p = 0 (i.e. the reference state) to p = 15. Although a better initial state seems more likely to give better performance, the algorithm can still significantly improve a relatively poor initial state.
## IV Analysis of the first step
### Parameter Landscape
To further analyze the difference between ADAPT-QAOA and warm-ADAPT-QAOA, we analyze the complete space of possible parameters at \(p=1\). We plot the energy error at every point in a grid of \(\gamma\) and \(\beta\) values for a random instance of an unweighted 5-regular graph, as in Fig. 6. We perform this analysis for \(n=6,8,10\). This indicates how difficult classical optimization is at \(p=1\).
Figure 2: Energy error as a function of the number of layers for ADAPT-QAOA, warm-ADAPT-QAOA, the standard QAOA, the standard QAOA with warm start, am-warm-ADAPT-QAOA, and am-QAOA warm start averaged over weighted (a) and unweighted (b) regular graph instances. 40 instances are generated for \(n=6\), 20 instances for \(n=8\), and 10 instances for \(n=10\), where \(n\) is the number of vertices and \(D\) is the degree of the regular graph.
We observe large differences between the parameter landscapes of warm-ADAPT-QAOA and the original ADAPT-QAOA. The standard-start landscape consists of peaks and valleys, while warm-ADAPT-QAOA creates a landscape of multiple ridges. This makes warm-ADAPT-QAOA less sensitive to the initial choice of parameters. In particular, since \(\beta\) and \(\gamma\) are initialized at 0 and a small value respectively, classical optimization starts from a trough.
As the graph size increases with the same connectivity, the ADAPT-QAOA landscape generally gets flatter with higher valleys and lower peaks, although for \(n=6,D=3\), this trend does not necessarily hold. For \(n=6,8,10\), the minimum energy errors are \(\frac{8}{9},\frac{7}{8},\frac{12}{13}\), respectively, and the average energy error is constant at 1. The warm start landscape also exhibits lower maxima as graph size increases. The minimum energy errors are \(10^{-6}\), \(0.249\), and \(0.145\) for \(n=6,8,10\). For the \(n=6\) graph, the warm start itself is very close to the solution. The average energy errors for the warm start are 0.5, 0.673, and 0.316, respectively. This indicates that warm-ADAPT-QAOA is less affected by an increase in the graph size than ADAPT-QAOA and may help explain why warm-ADAPT-QAOA performs better for larger graphs. Overall, the first layer parameter landscape of the warm-start algorithm is more favorable than that of standard ADAPT-QAOA.
### ADAPT-QAOA first step
Although ADAPT-QAOA can offer improvement over the standard QAOA for some number of layers, its first layer performance is typically worse. With the initial state \(\ket{+}^{\otimes n}\), the leading term in the gradient given by Eq. (5) is proportional to \(\gamma_{0}\) for any operator in the chosen pool except those of the form \(Y_{j}Z_{k}\). For \(Y_{j}Z_{k}\), the leading term in the gradient magnitude is \(w_{\langle jk\rangle}\). With a sufficiently small \(\gamma_{0}\ll 1\) and the edge weights drawn between 0 and 1, the mixer operator selected at the first layer is most likely to be \(Y_{j}Z_{k}\) with the largest \(w_{\langle jk\rangle}\). For an unweighted graph whose edge weights are all set to 1, whether a \(Y_{j}Z_{k}\) with a connected edge \(\langle jk\rangle\) or another operator gets selected depends on the size of the graph as well as the value of \(\gamma_{0}\).
Figure 4: Average number of CNOT gates required to reach an energy error of 0.01 for multiple weighted graph instances. The label \(n_{e}\) is the number of edges in the graph. 40 instances are generated for \(n=6\) graphs, 20 instances for \(n=8\) graphs, and 10 instances for \(n=10\) graphs.
Figure 5: Energy error reduction as a function of \(|\langle G|\psi\rangle|\) for 30 instances of \(n=8,D=5\) weighted graphs and 15 instances of \(n=10,D=5\) weighted graphs.
Figure 3: a) Proportion of graph instances that reach an energy error of 0.01 within 15 layers. b) Average energy reduction from \(p=0\) to \(p=15\). 40 instances are generated for \(n=6\) graphs, 20 instances for \(n=8\) graphs, 10 instances for \(n=10\) graphs. All graphs are weighted.
We can thus analyze the behavior of ADAPT-QAOA with \(p=1\) in some special cases. We take the unweighted regular graphs as simple examples. After numerically optimizing the parameters, we obtain the maximum cut value \(\frac{nD+2}{4}\) at \(\gamma\) near 0 and \(\beta\) near \(\frac{\pi}{4}\). For rings where \(D=2\), the magnitude of the largest cut ADAPT-QAOA can find at \(p=1\) is \(\frac{n+1}{2}\). It was shown that standard QAOA can find a cut of size \(\frac{3n}{4}\)[6]. This is larger than the ADAPT-QAOA cut for all \(n>1\). For 3-regular graphs, the minimum \(p=1\) approximation ratio of the standard QAOA is \(0.6924\) and the largest possible MaxCut value for any graph is \(\frac{3n}{2}\)[6]. The approximation ratio of \(p=1\) ADAPT-QAOA for such graphs is \(\frac{3n+2}{6n}\), lower than that of the standard QAOA for all \(n>1\).
### warm-ADAPT first step
With different possible initial states, warm-ADAPT-QAOA does not always choose the same operator in the first layer, so it is more difficult to analyze its performance. Here, we numerically examine the first layer of 75 unweighted regular graph instances of various sizes and connectivities. In Fig. 7, we observe that even over 75 instances, ADAPT-QAOA never returns a higher cut value than warm-ADAPT-QAOA at the first layer. This allows us to reasonably conclude that warm-ADAPT-QAOA will always outperform ADAPT-QAOA at the first layer. For lower connectivities, the median energy is much higher than the minimum energy, as shown in Fig. 8. This means that for a majority of instances, warm-ADAPT-QAOA will return a cut whose magnitude far exceeds that of the cut returned by ADAPT-QAOA. For \(D=3\), the largest cut value out of the 75 instances is \(\frac{3n}{2}\) to within 8 decimal places, which means that warm-ADAPT-QAOA is able to return the largest cut for a 3-regular graph within one layer.
Figure 6: Parameter landscape for different unweighted graphs. ADAPT landscapes are shown in the top row, warm-ADAPT landscapes in the bottom row.
Figure 7: Minimum cut values, returned by warm-ADAPT-QAOA out of 75 unweighted regular graph instances, are shown in solid lines. Dashed lines indicate the maximum cut values that ADAPT-QAOA can return.
## V Conclusion and Discussion
We propose a variation of ADAPT-QAOA that starts with an initial state inspired by a classical approximation algorithm. We numerically simulate its performance and show that on average warm-ADAPT-QAOA can reach a better accuracy than ADAPT-QAOA with the same number of layers in the ansatz. Consequently, it requires fewer resources. Since the initial state typically has a lower energy than the state \(\ket{+}^{\otimes n}\), it may be unsurprising that this new algorithm significantly outperforms ADAPT-QAOA at the first layer. However, in subsequent layers, it can provide a significant reduction in energy, contrary to the challenge observed in the standard QAOA with warm start [22]. This indicates that ADAPT-QAOA, with its problem-tailored ansatz, is more compatible with the warm-start approach.
We also see that adding the mixer operator adjusted to the warm-start initial state to the operator pool may improve the performance further. In the standard QAOA with warm start, the adjusted mixer plays the role of recovering the resemblance to adiabatic evolution. The connection between ADAPT-QAOA and the shortcut to adiabaticity has been discussed in Ref. [15]. One can speculate that by including the adjusted mixer in the operator pool, ADAPT-QAOA could approach the shortcut to adiabaticity with the warm-start initial state better. While in this work we primarily focus on the effect of a warm start on ADAPT-QAOA, without extensively studying the role of the adjusted mixer operator, we believe this is an interesting direction for future work, which can shed light on how to choose the initial state and the operator pool in a compatible and optimal way.
In the simulations, we observe that by starting from a state with a significant overlap with the ground state, the new approach appears to circumvent the issue of converging to excited states. It remains an open question how our new approach scales to larger problem sizes, and whether it provides some robustness against local minima of the energy landscape.
###### Acknowledgements.
S. E. E. acknowledges support from the US Department of Energy (Award No. DE-SC0019318). E. B. acknowledges support from the US Department of Energy (Award No. DE-SC0019199).
Figure 8: Median energy expectation value returned by warm-ADAPT-QAOA out of 75 unweighted regular graph instances of various sizes and connectivities shown in solid line, median exact maximum energy for the same graphs shown in dotted line. The shaded area indicates values between the maximum and minimum energy expectations values returned by warm-ADAPT-QAOA from the 75 instances. The constant term in Eq. 2 is dropped here. | 量子計算は、古典的な最適化問題を解決する際に有利になる可能性があります。Promisingなアルゴリズムの1つには量子近似最適化アルゴリズム(QAOA)があります。このアルゴリズムを改善するために、多くの提案があります。例えば、古典的な近似解に基づいた初期状態を使用するなどです。QAOAの一種であるADAPT-QAOAは、ア-ザイズをダイナミックに構築し、収束を高速化します。しかし、興奮状態に頻繁に収束するという課題があります。これは、エネルギーの地形における局所的な最小値に対応しています。この課題はパフォーマンスを制限しています。この研究では、ADAPT-QAOAに、古典的な近似アルゴリズムに基づいた初期状態を提案します。数値シミュレーションを通じて、この新しいアルゴリズムは、標準的なQAOAとADAPT-QAOAよりも少ない層で同じ精度 |
2308.13486 | On the Practicality of Dynamic Updates in Fast Searchable Encryption | Searchable encrypted (SE) indexing systems are a useful tool for utilizing
cloud services to store and manage sensitive information. However, much of the
work on SE systems to date has remained theoretical. In order to make them of
practical use, more work is needed to develop optimal protocols and working
models for them. This includes, in particular, the creation of a working update
model in order to maintain an encrypted index of a dynamic document set such as
an email inbox. I have created a working, real-world end-to-end SE
implementation that satisfies these needs, including the first empirical
performance evaluation of the dynamic SE update operation. In doing so, I show
a viable path to move from the theoretical concepts described by previous
researchers to a future production-worthy implementation and identify issues
for follow-on investigation. | Steven Willoughby | 2023-08-25T16:50:02 | http://arxiv.org/abs/2308.13486v1 | # On the Practicality of Dynamic Updates in Fast Searchable Encryption
###### Abstract
Searchable encrypted (SE) indexing systems are a useful tool for utilizing cloud services to store and manage sensitive information. However, much of the work on SE systems to date has remained theoretical. In order to make them of practical use, more work is needed to develop optimal protocols and working models for them. This includes, in particular, the creation of a working update model in order to maintain an encrypted index of a dynamic document set such as an email inbox. I have created a working, real-world end-to-end SE implementation that satisfies these needs, including the first empirical performance evaluation of the dynamic SE update operation. In doing so, I show a viable path to move from the theoretical concepts described by previous researchers to a future production-worthy implementation and identify issues for follow-on investigation.
## 1 Introduction
There are many situations and contexts wherein users of information systems need to collect, store, search, and retrieve large amounts of information. When the collection of data is large enough or needs to be available to multiple geographically-separated users, an attractive option may be to host the document repository on a cloud service provided by a third party.
While this allows the users to utilize the service's data centers and network connections to provide a robust platform to host their data, it opens a number of very serious security and privacy concerns if the data being hosted are in any way sensitive, since the hosting service may not necessarily be trusted to protect that information from their own personnel or others.
Consider, for example, an organization which uses such an externally-hosted searchable repository to manage confidential pre-release product design documentation, or financial information belonging to the organization. Worse, consider if the data were to contain personal information about employees or customers which would have expensive and disruptive effects on people's lives if it were to be leaked to unauthorized parties.
The obvious solution is to encrypt the data, so that they may be stored on the untrusted server in a form that cannot be understood by anyone but authorized personnel. This solves the problem of protecting the data at rest on the server. However, since the index must be decrypted in order to search within it, we must take one of two approaches: either provide decryption keys to the server in order to decrypt and search server-side, or download the entire index to the client for the decryption and search to be performed there. The former approach is not desirable because we have already established that the hosting provider may not be authorized to see the data nor trusted to protect it from unauthorized access. The latter is less than practical due to the amount of data which must be copied to users' local systems. These client systems may not have sufficient storage or processing power1 and the data may well be unreasonably large to transmit repeatedly--it may be hundreds of megabytes, gigabytes, or terabytes depending on the amount of indexed data.
Footnote 1: We must accept in these modern times that client computing platforms may well include cell phones and low-power notebooks in addition to more traditional computing platforms.
Ideally, we desire to have a method whereby the server can facilitate searches within an index of interesting keywords from the document repository, then report back with a list of documents containing the requested keywords (allowing the user to then retrieve those documents, locally decrypt them, and make use of their contents), all without the server having the ability to actually read the document index itself (since that provides a great deal of insight into the contents of each indexed document). In fact, the server should not even be able to understand what keywords it was searching for (since that provides insight into the nature of the documents and what the users are looking for), or what documents were in the result list of each search.
While that may seem to be an impossible expectation, in reality we can find an acceptable middle ground which allows efficient server-side searching without divulging any direct information about the details of the search terms or results. The price paid for this, however, is that a determined hostile observer (perhaps the hosting provider themselves) could analyze patterns of input and output over time which will "leak" useful information from which some amount of the protected data may be inferred.
Building on the foundational work of previous researchers in this field, I have created a dynamic update capability which allows an SE index to accumulate new documents over time, whereas previous implementations were primarily focused on a one-time generation of an SE index for a static document set. I also moved beyond the previous theoretical treatments of this subject by adding empirical performance evaluation of my new update mechanism using a typical TCP/IP client-server architecture. Based on this work I identified some considerations for future optimization work.
## 2 Definitions and Nomenclature
In this paper I will use the terminology set out by Curtmola, et al. [1] which is also used by other authors, notably Demertzis and Papamanthou, [2] for the sake of consistency with established work on this topic. Basic notation and symbology is summarized in Table 1.
Central to this topic is the notion of a collection of documents for which the user wishes to maintain a searchable encrypted index. Following Curtmola, et al.'s nomenclature, let \(\Delta\) be a dictionary of all "interesting" words in all documents, i.e., \(\Delta=\{w_{1},\ldots,w_{d}\}\) where \(d\) is the number of unique interesting words. If \(2^{\Delta}\) is the power set of all possible documents containing words \(w\in\Delta\), then we will consider a set \(\mathcal{D}\subseteq 2^{\Delta}\) which is the specific set of \(n\) documents being indexed in some particular instance of searchable encrypted index being discussed.
Each such document has a unique identifier by which it can be fetched from its storage location. Let \(\mathsf{id}(D)\) be the identifier for some arbitrary document \(D\). Further, let \(\mathcal{D}(w)=\{\mathsf{id}(D)\ \forall D\in\mathcal{D}\mid w\in D\}\) be the set of unique identifiers for all documents in our indexed collection which contain the word \(w\). (Curtmola, et al. use the notation \(\mathbf{D}\) and \(\mathbf{D}(w)\) instead of \(\mathcal{D}\) and \(\mathcal{D}(w)\) respectively).
For my work which builds primarily on the work by Demertzis and Papamanthou, [2] I will also use the following nomenclature from their work: Let \(\lambda\) be the security parameter (in practical terms, the encryption key length in bits), such that each key \(k_{i}\) is generated using a cryptographically secure random number source, i.e., \(k_{i}\xleftarrow{\$}\{0,1\}^{\lambda}\).
Also let \(N\) be the number of entries stored in the SE, where _entry_ is a word which here means a unique tuple \((w,\mathsf{id}(D))\) mapping an indexed keyword \(w\) to the identifier of a document \(D\) containing that word. Thus, we have
\[N=\sum_{\forall w\in\Delta}\left|\mathcal{D}(w)\right|.\]
As we shall see, Demertzis and Papamanthou [2] posit a storage array arranged in tiered _levels_ of varying sized storage _buckets_.
Let \(\ell=\lceil\log_{2}N\rceil\) be the number of levels of index storage which would be employed in this model.
Let \(s\leq\ell\) be a configurable number of tiers which will actually be stored on the server (to save space since not all indexes will have values actually assigned to all possible levels), and \(\mathcal{L}\) be the set of \(s\) storage levels allocated for the SE.
This SE model supports the notion of _locality_ where data associated with the same keyword are placed in 1 or more co-located areas in the data store. Let \(L\) be the user-configurable locality such that specifying \(L>1\) allows each indexed term to be stored in multiple non-contiguous storage areas, facilitating parallelization of search operations within the index. These levels of storage are implemented in storage arrays \(A_{i}\) where \(i\in\mathcal{L}\). Each level is further partitioned into _buckets_. Bucket \(x\) of array \(A_{i}\) is denoted \(A_{i}[x]\).
I will refer to a few standard functions, as follows. Let \(\mathsf{F}\) be a pseudo-random function (PRF) \(\mathsf{F}:\{0,1\}^{*}\times\{0,1\}^{*}\rightarrow\{0,1\}^{*}\), which emits a deterministic pattern of bits based on the values of its two inputs (key and data), but whose output is indistinguishable from random bits if those inputs are not known. Let \(\mathsf{Enc}\) and \(\mathsf{Dec}\) be \(\mathsf{cpa}\)-secure2 symmetric encryption functions \(\mathsf{Enc}:\{0,1\}^{*}\times\{0,1\}^{\lambda}\rightarrow\{0,1\}^{*}\) and \(\mathsf{Dec}:\{0,1\}^{*}\times\{0,1\}^{\lambda}\rightarrow\{0,1\}^{*}\) (such that \(\mathsf{Dec}=\mathsf{Enc}^{-1}\)) which take \(\lambda\)-bit keys to transform arbitrary-length bit strings to another arbitrary-length ciphertext and back again. Finally, let \(\mathsf{H}:\{0,1\}^{*}\rightarrow\{0,1\}^{b}\) be a cryptographically strong one-way hash function which outputs \(b\) bits of digest from its input data of arbitrary length. This function must be collision resistant.
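As one possible concrete instantiation of these primitives (a toy sketch using only the Python standard library; it is not the implementation described later in this paper, and the key length is an assumption made here), \(\mathsf{F}\) can be realized with HMAC-SHA256, \(\mathsf{H}\) with SHA-256, and \(\mathsf{Enc}\)/\(\mathsf{Dec}\) with a counter-mode stream derived from \(\mathsf{F}\).

```python
import hashlib
import hmac
import os

LAMBDA_BYTES = 16          # lambda = 128-bit keys; the value is an assumption for this sketch

def F(key: bytes, data: bytes) -> bytes:
    """Pseudo-random function F(K, x), instantiated as HMAC-SHA256."""
    return hmac.new(key, data, hashlib.sha256).digest()

def H(data: bytes) -> bytes:
    """Collision-resistant one-way hash H(x), instantiated as SHA-256 (b = 256)."""
    return hashlib.sha256(data).digest()

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Counter-mode keystream derived from F."""
    out, counter = b"", 0
    while len(out) < length:
        out += F(key, nonce + counter.to_bytes(8, "big"))
        counter += 1
    return out[:length]

def Enc(key: bytes, msg: bytes) -> bytes:
    """Symmetric encryption: fresh random nonce, then msg XORed with the keystream."""
    nonce = os.urandom(16)
    ks = _keystream(key, nonce, len(msg))
    return nonce + bytes(m ^ k for m, k in zip(msg, ks))

def Dec(key: bytes, ct: bytes) -> bytes:
    """Inverse of Enc."""
    nonce, body = ct[:16], ct[16:]
    ks = _keystream(key, nonce, len(body))
    return bytes(c ^ k for c, k in zip(body, ks))

def gen_keys():
    """K = (k1, k2, k3), each k_i sampled uniformly from {0,1}^lambda."""
    return tuple(os.urandom(LAMBDA_BYTES) for _ in range(3))

k1, k2, k3 = gen_keys()
assert Dec(k3, Enc(k3, b"00030000")) == b"00030000"
```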
To the above notation I add the concept of the _order_ of an index, which gives us a useful way to organize a collection of various-size SE indexes. For this research, I chose to assume the order \(o\) of an index to be \(o=\ell=\lceil\log_{2}N\rceil\) with the intention that it would yield a reasonable pattern of varying sizes of indexes to avoid expensive large-index merge operations as long as reasonably possible.
## 3 Basic Principles of SE
Here, and throughout the rest of this paper, the term _client_ shall refer to the system a user of the SE system employs to initiate searches or add new documents to the SE index. It is a trusted system under the control of an authorized user. Encryption keys may be employed on it, and plain-text search terms and results may be known to it.
The term _server_ shall refer to the remote system on which the encrypted documents and the SE indexes are stored. This system is not allowed to see any decryption keys nor to see the plaintext search terms nor results.
The essential principle on which SE is based is that, given an index \(\mathcal{I}\) mapping a set \(\Delta\) of interesting keywords from a document repository \(\mathcal{D}\), we must represent \(\mathcal{I}\) in some opaque fashion such that it can be stored on an untrusted server without anyone being able to glean information about \(\mathcal{D}\) by examining \(\mathcal{I}\), even given an arbitrarily large amount of time to analyze \(\mathcal{I}\). This implies the use of a one-way cryptographically strong hash function, since that will provide a way to derive an opaque value to represent a value in the index without a reliable way to reverse the encoding function to obtain the original value again.
If we can then use the same hash function to encode the client-side search terms we can match them on the server to the encoded entries in \(\mathcal{I}\) without revealing the original search terms directly.
To illustrate this concept, consider a document repository which contains five documents, specifically, the first five volumes of Douglas Adams' magnum opus _The Hitchhiker's Guide to the Galaxy_. These volumes are separately encrypted and stored on the server. Each is assigned a document ID as shown in Table 2.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Notation** & **Meaning** \\ \hline \(a\parallel b\) & Concatenation of strings \(a\) and \(b\) \\ \(|X|\) & Cardinality of set \(X\) \\ \(x\oplus y\) & Bitwise exclusive-or of \(x\) and \(y\) \\ \(x\xleftarrow{\$}X\) & Element \(x\) sampled uniformly from set \(X\) \\ \(x\leftarrow{\mathcal{A}}\) or \({\mathcal{A}}\to x\) & Output \(x\) from algorithm or function \({\mathcal{A}}\) \\ \(\Delta=\{w_{1},w_{2},\ldots,w_{d}\}\) & Dictionary of \(d\) words in an index \\ \({\mathcal{D}}=\{D_{1},D_{2},\ldots,D_{n}\}\) & Set of \(n\) documents whose words are indexed \\ \({\mathcal{D}}(w)\) & List of all documents containing word \(w\) \\ \(A_{i}[x]\) & Bucket \(x\) of level \(i\) in index storage array \(A\) \\ \(\lambda\) & Bit length of encryption keys \\ \(L\) & Locality of the index \\ \({\mathcal{L}}=\{i_{1},i_{2},\ldots,i_{s}\}\) & Set of \(s\) storage levels in use for the index \\ \(N\) & Number of stored \((w,\mathsf{id}(D))\) tuples in index \\ \(o\) & Order of a SE index, related to its storage capacity \\ \(s\) & Number of actually stored index levels \\ \(c\leftarrow\mathsf{Enc}(K,m)\) & Encryption function with key \(K\) and plaintext message \(m\) \\ \(m\leftarrow\mathsf{Dec}(K,c)\) & Decryption function with key \(K\) and ciphertext message \(c\) \\ \(y\leftarrow\mathsf{F}(K,x)\) & Pseudo-random function with key \(K\) and data \(x\) \\ \(x^{\prime}\leftarrow\mathsf{H}(x)\) & Collision-resistant hash function taking data \(x\) \\ \(\mathsf{id}(D)\) & Unique identifier for document \(D\) \\ \(\varepsilon\) & Empty string or unused storage location \\
000C & Hexadecimal values are shown in fixed-width type \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of Notation Used in This Paper
We identify a set \(\Delta\) of all the words we find interesting for our purposes. Say, for example, \(\Delta=\{\)Arthur, dolphin, Fenchurch, hooloovoo, krikkit, Zaphod \(\}\). (Obviously, in a full production repository the list of interesting words would be orders of magnitude greater than this trivial example.) If we make a list of all documents in \(\mathcal{D}\) in which each of the words in \(\Delta\) appear, we find the following associations of each keyword \(w\in\Delta\) to a set of document IDs \(\mathcal{D}(w)\):
\[\texttt{Arthur} \rightarrow\{3,5,8,12,15\}\] \[\texttt{dolphin} \rightarrow\{3,12\}\] \[\texttt{Fenchurch} \rightarrow\{12,15\}\] \[\texttt{hooloovoo} \rightarrow\{3\}\] \[\texttt{krikkit} \rightarrow\{8,12\}\] \[\texttt{Zaphod} \rightarrow\{3,5,8,12,15\}\]
From these associations we generate an index \(\mathcal{I}\) which is a collection of tuples \((w,\mathsf{id}(D))\). Specifically, we get: \((\texttt{Arthur},3)\), \((\texttt{Arthur},5)\), \((\texttt{Arthur},8)\), \((\texttt{Arthur},12)\), \((\texttt{Arthur},15)\), \((\texttt{dolphin},3)\), \((\texttt{dolphin},12)\), \((\texttt{Fenchurch},12)\), \((\texttt{Fenchurch},15)\), \((\texttt{hooloovoo},3)\), \((\texttt{krikkit},8)\), \((\texttt{krikkit},12)\), \((\texttt{Zaphod},3)\), \((\texttt{Zaphod},5)\), \((\texttt{Zaphod},8)\), \((\texttt{Zaphod},12)\), and \((\texttt{Zaphod},15)\).
We store \(\mathcal{I}\) on disk in two parts: a storage array which holds the actual tuples, and a hash table which associates each search term \(w\) with the location in storage holding its set of tuples. Setting aside for the moment the finer points of storage optimization so that we may focus just on the encryption aspect, let us visualize the storage arrangement of our index \(\mathcal{I}\) as shown in Figure 1.
With such a storage arrangement, if the client wishes to search for keyword \(w=\texttt{dolphin}\), the server looks that up in the hash table, finding that the tuples to satisfy the search are contained in storage array level \(1\), bucket \(0\) (which we will designate \(A_{1}[0]\)). Looking in that bucket, we find (among other things that happen to be stored there as well) the tuples \((\texttt{dolphin},3)\) and \((\texttt{dolphin},12)\). From this the server reports the result set \(\{3,12\}\) as the set of document IDs where the word "dolphin" is found.
### Encrypting the Index
To the above trivial storage arrangement we now need to add a layer of encryption to obscure the meaning of the information in \(\mathcal{I}\) beyond the ability of the server to understand, but in such a way that the client can use it to get the same search results.
For this encryption, we generate a secret key known only to authorized clients. This key \(K=(k_{1},k_{2},k_{3})\) has three parts, each of which is created from \(\lambda\) random bits (i.e., \(k_{i}\xleftarrow{8}\{0,1\}^{\lambda}\)).
First, given a cryptographically strong one-way hash function \(\mathsf{H}\), pseudo-random function \(\mathsf{F}\), and encryption function \(\mathsf{Enc}\) as described above, we encode the tuples stored in the array \(A\) by encrypting the value \(\mathsf{id}(D)\parallel 0^{\lambda}\) using the encryption key \(\mathsf{F}(k_{3},w)\). In our example, assuming for simplicity that document IDs are \(16\) bits and \(\lambda=16\), the tuple \((\texttt{dolphin},3)\) is encoded by calculating \(\mathsf{Enc}(\mathsf{F}(k_{3},\texttt{dolphin}),00030000)\). Likewise, the tuple \((\texttt{dolphin},12)\) is encoded by calculating \(\mathsf{Enc}(\mathsf{F}(k_{3},\texttt{dolphin}),000C0000)\). Assuming these two calculations produce the hex values A462910E and 07B422A7, and that we carried out corresponding encodings with the other tuples, we would now have the encrypted storage array shown in Figure 2. Note that we also filled the empty storage locations with random bits to further obfuscate the index.
It is important to note that each tuple is encrypted with a key that is based on the search term to which it belongs, so the data there is only recoverable if one is in possession of that secret key \(k_{3}\) and the search term \(w\).
Now that the tuples are encoded, we must encrypt the hash table's keys and values in a similar fashion. The keys (the search terms) are simply replaced with the results of hashing them with another secret key: \(\mathsf{H}(\mathsf{F}(k_{1},w))\). Thus, search term "dolphin" would be replaced by \(\mathsf{H}(\mathsf{F}(k_{1},\texttt{dolphin}))\), say 38A9039C.
\begin{table}
\begin{tabular}{r l} \hline \hline
**ID** & **Document Title** \\ \hline
3 & _The Hitchhiker’s Guide to the Galaxy_ \\
5 & _The Restaurant at the End of the Universe_ \\
8 & _Life, the Universe, and Everything_ \\
12 & _So Long, and Thanks for All the Fish_ \\
15 & _Mostly Harmless_ \\ \hline \hline \end{tabular}
\end{table}
Table 2: Example Document Repository \(\mathcal{D}\)
Figure 1: Example Index \(\mathcal{I}\) Storage (unencrypted)
Figure 2: Example Index \(\mathcal{I}\) Storage (encrypted)
The value associated with "dolphin" is the tuple \((1,0)\), which means that the entries for that keyword are to be found in \(A_{1}[0]\) (storage level 1, bucket 0). We represent location \(A_{i}[x]\) as a single numeric value \(i\,\|\,x\) (in this case the hex value 00010000). This is encoded in the hash table as \([i\,\|\,x]\oplus\mathsf{H}(\mathsf{F}(k_{2},w))\). Again, note that the search term \(w\) and a secret key are part of this encryption scheme. Supposing this gives the result 6BF86758, and continuing this for the rest of the table, we get the completely encrypted index shown in Figure 2. The values where the entries for our example term "dolphin" are encoded in \(\mathcal{I}\) are highlighted in red.
Now if we wish to search for a word like "dolphin", we generate a _search token_\(T=(t_{1},t_{2},t_{3})\) by providing the portion of the encoding operations requiring knowledge of the secret values, sending to the server only the output from \(\mathsf{F}\) which it can use to complete the hashing and decryption without divulging the actual keys or search terms: \(t_{1}=\mathsf{F}(k_{1},w)\), \(t_{2}=\mathsf{F}(k_{2},w)\), and \(t_{3}=\mathsf{F}(k_{3},w)\).
The server, upon receiving the client's search token \(T\), calculates \(\mathsf{H}(t_{1})\) and gets 38A9039C. Looking at the hash table in Figure 2 we see that this is a key stored there, associated with value 6BF86758. The server then calculates 6BF86758 \(\oplus\,\mathsf{H}(t_{2})\) to get the result 00010000. Although the server never knew the search term \(w\), it was given just enough information in \(T\) to determine that the answer to that query is to be found in storage location \(A_{1}[0]\). \(T\) does not provide any information to decode any other hash table entries since they were encoded using different values of \(w\).
Now the server knows that some of the values stored in \(A_{1}[0]\) can be decrypted using the key \(\mathsf{H}(t_{3})\). Running the contents of \(A_{1}[0]\) through this decryption, it gets the results 00030000, 1AED5898, EF00F293, 000C0000, and 923BF508. Since any valid entry has \(0^{\lambda}\) bits appended, the server knows that only the first and fourth values were correctly decrypted by the key it was given, so the result reported to the client is the set of document IDs \(\{3,12\}\).
Note that when we set locality \(L>1\), we must allow for multiple buckets to hold the tuple lists for any given keyword, so the actual calculations for the hash table keys and values includes a counter \(c\in\{0,1,\ldots,L-1\}\). The key is actually encoded as \(\mathsf{H}(\mathsf{F}(k_{1},w)\|\,c)\) and the value as \([i\|x]\oplus\mathsf{H}(\mathsf{F}(k_{2},w)\|\,c)\).
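The pieces of this worked example can be tied together in a small end-to-end sketch (Python standard library only). It is a toy: a single storage level, locality \(L=1\), the bucket number standing in for the \(i\,\|\,x\) encoding, and entries encrypted under \(\mathsf{H}(\mathsf{F}(k_{3},w))\) so that the server can decrypt them with \(\mathsf{H}(t_{3})\) as in the walk-through above. All names and sizes are illustrative; this is not the implementation described later in this paper.

```python
import hashlib
import hmac
import os

LAM = 16  # lambda, in bytes here (an assumption for this toy sketch)

def F(key, data):                      # PRF: HMAC-SHA256 (same choices as the Section 2 sketch)
    return hmac.new(key, data, hashlib.sha256).digest()

def H(data):                           # collision-resistant hash
    return hashlib.sha256(data).digest()

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def _enc(key, msg):                    # toy one-block stream encryption from F (messages <= 32 bytes)
    nonce = os.urandom(16)
    return nonce + _xor(msg, F(key, nonce)[:len(msg)])

def _dec(key, ct):
    return _xor(ct[16:], F(key, ct[:16])[:len(ct) - 16])

def build_index(keys, postings):
    """postings: dict word -> set of doc ids.  One storage level, locality L = 1 (counter c = 0)."""
    k1, k2, k3 = keys
    buckets, table = [], {}
    for w, ids in postings.items():
        word, c = w.encode(), b"\x00"
        loc = len(buckets).to_bytes(4, "big")                       # bucket number stands in for i || x
        table[H(F(k1, word) + c)] = _xor(loc, H(F(k2, word) + c)[:4])
        buckets.append([_enc(H(F(k3, word)), d.to_bytes(2, "big") + b"\x00" * LAM) for d in ids])
    return table, buckets

def make_token(keys, w):
    k1, k2, k3 = keys
    return F(k1, w.encode()), F(k2, w.encode()), F(k3, w.encode())  # T = (t1, t2, t3)

def server_search(table, buckets, token):
    t1, t2, t3 = token
    c = b"\x00"
    masked = table.get(H(t1 + c))
    if masked is None:
        return set()
    loc = int.from_bytes(_xor(masked, H(t2 + c)[:4]), "big")
    hits = set()
    for ct in buckets[loc]:
        pt = _dec(H(t3), ct)
        if pt.endswith(b"\x00" * LAM):                              # valid entries end in 0^lambda
            hits.add(int.from_bytes(pt[:2], "big"))
    return hits

keys = tuple(os.urandom(LAM) for _ in range(3))                     # K = (k1, k2, k3)
postings = {"Arthur": {3, 5, 8, 12, 15}, "dolphin": {3, 12}, "Fenchurch": {12, 15}}
table, buckets = build_index(keys, postings)
print(server_search(table, buckets, make_token(keys, "dolphin")))   # {3, 12}
```

Running it reports \(\{3,12\}\) for the "dolphin" query while the server only ever handles hashed tokens, masked locations, and ciphertexts.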
## 4 Prior Work
In their seminal work on the subject, Song, Wagner, and Perrig [3] laid out the essential idea for SE indexing for the first time. From this beginning almost two decades ago, other researchers have further developed and extended these initial concepts in order to improve functionality, security, and performance.
One approach explored by Pinkas and Reinman [4] as an alternative to SE was to leverage the concept of oblivious RAM (ORAM)--a specially-arranged memory system originally proposed by Goldreich and Ostrovsky [5] which has the property that "the sequence of memory accesses... reveals no information about the input..., beyond the running-time for the input." Pinkas and Reinman sought to use this aspect of ORAM to hide the nature of the calculations used to search through an encrypted index to thwart attempts at cryptanalysis or other means of obtaining confidential details of the index. Unfortunately, this approach is very expensive compared to more practical software-only solutions described here.
As these software SE systems were developed, they were primarily implemented as in-memory schemes. Cash, et al. [6] note that this approach did not scale effectively as the repository size expanded into the near-terabyte range and beyond. As index entries may be scattered throughout the index, the amount of data transmitted back to the user for local decryption multiplies with the database size. Cash and his co-authors proposed refinements which resulted in greater locality of the encrypted index entries, guaranteeing that entries matching a given search term cluster near each other in the index, thus reducing the number of encrypted index blocks which must be sent.
Cash and Tessaro [7] continued improving their previous SE schemes, working on maximizing data locality--the number of non-contiguous storage groups from which the server reads data to satisfy a given search query. They note--and go on to formally prove--how this optimization runs counter to the need to reduce the size of the index storage on the server.
Building further on that research, Asharov, et al. [8] created SE schemes with improved read efficiency (they report \(O(\log n)\) and \(O(\log\log n)\)) and demonstrated that it will always be the case that achieving maximal read efficiency or maximal locality requires sacrificing the other.
Finally, Demertzis and Papamanthou [2] improved on these earlier efforts by developing a scheme which provides reasonable locality, including controls the repository owner may adjust to "tune" the storage to be maximally efficient for the type of data being indexed.
My research is directly based on the work of Demertzis and Papamanthou, whose scheme I extended to include multiple index collections and dynamic updates.
### Security of SE Systems
The observation above that SE systems will "leak" information over time from which an observer can infer confidential information raises the question of how much information leakage is acceptable. This issue has been explored at length by previous researchers. Song, Demertzis, and their colleagues who developed their respective SE implementation models (e.g., [3, 2]) provided with them formal proofs of the security of their encryption schemes. This was important to establish the trustworthiness of the SE concept in general.
Following on from this foundation, Naveed, Kamara, and Wright [9] along with Zhang, Katz, and Papamanthou [10] studied various attack scenarios and found that it was possible for a determined observer to eventually decrypt a significant amount of information out of an encrypted database stored on an untrusted server. These findings helped drive Demertzis and Papamanthou to develop more cryptographically robust encryption schemes which I also used for my work, and prompted me to seek a means of periodically invalidating accumulated inferences an observer may have gleaned as part of my update procedure.
### Locality Optimization
As noted above, early SE research posited in-memory solutions for the sake of theoretical exploration, but this presented a roadblock to adapting SE systems for real-world applications since it did not allow the indexes to scale up to the required data sizes. To address this, a number of storage strategies were proposed by Cash, et al., [6, 7] but these often ran into difficulties. For example, the practice of obfuscating the layout of the index by permuting index data throughout the storage area came at the expense of having related data clustered to make reads more efficient.
Demertzis and Papamanthou [2] proposed one improvement which I found advantageous enough to base my own work upon. Given some array \(A\) of storage locations on the server, this is organized into tiered _levels_\(A_{0},A_{1},\ldots,A_{\ell}\), where each level \(A_{i}\) consists of a number of _buckets_\(A_{i}[0],A_{i}[1],\ldots,A_{i}[q_{i}]\) in which we will store document IDs.
At each level, the bucket sizes increase exponentially. For example, one level would hold buckets containing 2 references, the next level would hold buckets of size 4, the next 8, the next 16, and so forth. The documents themselves are stored as encoded (\(w,\mathsf{id}(D)\)) tuples as described above on p. 4.
This arrangement nicely facilitates our need to populate the index with document IDs where the number of IDs matching any given keyword varies widely from the number matching another keyword, while allowing us to co-locate these tuples within \(L\) buckets for efficiency. By adjusting the value of \(L\) at index creation time, the SE administrator can reduce the locality of the tuple storage but gain the ability to split up searches into parallel tasks.
They also introduced the optimization parameter \(s\) which allows an index to be built with only a subset of levels actually used. Specifically, for \(s=\ell\), all levels are utilized, with each level \(i\) containing buckets sized to hold \(2^{i}\) tuples. If \(s\) is reduced to some value \(1\leq s\leq\ell\), however, the set of actual levels utilized will be
\[\mathcal{L}=\{\ell,\ell-p,\ldots,\ell-(s-1)p\}\]
and each tuple will be stored in the nearest actual level to the one it would have been assigned to if all levels were allocated.
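A short sketch of this level selection follows; it assumes base-2 logarithms and a simple nearest-level rule, and the helper names are mine rather than the scheme's.

```python
import math

def level_set(N: int, s: int) -> list[int]:
    """Compute the set of actually-allocated levels L = {l, l-p, ..., l-(s-1)p}."""
    l = math.ceil(math.log2(N))       # top level
    p = math.ceil(l / s)              # spacing between allocated levels
    return [l - k * p for k in range(s)]

def assign_level(n_tuples: int, levels: list[int]) -> int:
    """Pick the nearest allocated level for a keyword with n_tuples postings."""
    ideal = math.ceil(math.log2(max(n_tuples, 1)))
    return min(levels, key=lambda lv: abs(lv - ideal))

# Example: N = 1,000,000 tuples, s = 4 allocated levels.
levels = level_set(1_000_000, 4)      # -> [20, 15, 10, 5]
print(levels, assign_level(300, levels))
```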
## 5 My Contributions
I focused my work in two specific areas: to create a working production-scale SE implementation based on Demertzis and Papamanthou's model, [2] and then to develop a system to add more information to the SE index over time. Their procedure for building a new SE index is summarized in Algorithm 1.
### Real-World Implementation
I investigated two avenues for implementing a remotely hosted SE indexing system. The first was to implement an indexing scheme that maintained its indexes in local files. This was done with some straightforward Python programs:
* genkeys generates a set of cryptographic keys \(K=(k_{1},k_{2},k_{3})\) for use by the client to encrypt and decrypt index information.
* buildindex reads a collection of documents, extracting a list of words from each. These words are then encoded into an encrypted index stored as a simple dbm database.
* search takes a search query from the user, looks that up in the SE index, and reports the matching list of document IDs to the user.
Since these operate on local files, all scripts are considered "client-side" in terms of having access to secret keys. In this model, I had in mind an implementation where another operating system layer--transparent to the SE code--handles remote hosting of the files. My choice for this was the InterPlanetary File System (ipfs), [11] which provides a distributed filesystem between clients, so each user sees a copy of the same underlying files, allowing a purely localized operation.
However, while that provides for simplicity of SE implementation, it comes at too high a cost for the widest audience since it requires substantial local data storage to be available on every client. It did, however, serve to demonstrate the correctness of the basic SE operations themselves before adding the extra complexity of network operations.
From there I switched to a traditional client-server model, defining a hard separation of duties between the data host (which may be remote and untrusted) and the local client. This is implemented in a new set of Python programs:
* fseserver runs on the server system to carry out requests on behalf of clients. This manages the dbm databases which comprise the SE index.
* buildindex_client works as buildindex does but rather than building local database files, it encrypts a new index and sends it to the server for remote storage.
* search_client takes a search query from the user, computes a search token \(T\) as described on p. 6, and passes that to the server. It relays the search results from the server back to the local user.
When designing the client-server protocol for my implementation, one of my significant design goals was to allow large blocks of raw binary data, since so much of the index is encrypted and needs to be sent as whole buckets at a time. This way I would not waste resources encoding and decoding the binary blocks as, e.g., base-64.
### Dynamic SE
The key intention of my work on SE systems was to find a practical implementation of _dynamic_ SE indexing. In today's world it seems quite likely that an SE indexing system would be gainfully employed to help users search through a fluid stream of communication, such as an email inbox or the conversation history of an official chat service such as those used for customer support by some companies.
Most of the work to this point has referenced the idea of building an index \(\mathcal{I}\) from a set of documents \(\mathcal{D}\) and set of search terms \(\Delta\) in a single indexing operation. Once \(\mathcal{I}\) is built, it is then searched any number of times but there is no notion of \(\mathcal{I}\) changing over time. Indeed, the way \(\mathcal{I}\) is constructed in the
first place depends on values such as \(N\) (the number of \((w,\mathsf{id}(D))\) tuples stored) and the distribution of words \(w\in\Delta\) throughout the data set and the number of documents \(\mathcal{D}(w)\) for each. If those values change, the internal arrangement of the whole index may be different.
This implies that updating \(\mathcal{I}\) over time is necessarily a matter of rebuilding it again from scratch. However, this is obviously untenable for large indexes (e.g., when adding a single email message to an existing index holding 10 million messages already).
Cash, et al. [6] discuss their own approach to dynamic SE schemes and the limitations they and others encountered. They note that prior schemes "had an impractically large index or leaked [information allowing the server to learn] the pattern of which keywords appear in which documents... which is a severe form of leakage." They go on to improve on that but make the assumption that full index rebuilds will be performed periodically and that deletions from the index will be rare. However, the approach they took does not lend itself to the tiered architecture I am working with. This prompted me to implement a new dynamic update system which is compatible with the tiered organization so I can retain the optimizations afforded by that structure.
Demertzis and Papamanthou [2] do discuss dynamic updates to SE systems, but only to a limited extent. While acknowledging the shortcomings of previous attempts, their proposal was sketched out in basic terms: "The main idea is that we organize \(n\) sequential updates to a collection of... independent encrypted indexes.... [For each \((w,\mathcal{D}(w))\) tuple mapping a search word to a document ID,] the data owner initializes a new SE scheme by creating a new SE index that contains only the specific tuple, [that] is subsequently uploaded to the untrusted server. Whenever two indexes of the same size \(t\) are detected there [sic] are downloaded by the data owner, decrypted and merged to form a sew SE index of size \(2t\), again with a fresh secret key. The new index is then used to replace the two indexes of size \(t\)."
To provide real-world practicality to the scheme, I chose to modify this to avoid unnecessarily creating many tiny indexes which will trigger many rapid successions of cascading merges with existing indexes. My design introduced the notion of an SE index _order_\(o=\lceil\log N\rceil\). Rather than storing a single index \(\mathcal{I}\), the server will maintain a collection of indexes. Let \(\mathcal{O}\) be the set of orders of indexes currently stored on a server. Then \(\mathcal{C}=\{\mathcal{I}_{o}\ \forall o\in\mathcal{O}\}\) is the collection of all SE indexes which may logically appear to the client as "the SE index."
When asked to perform a search, the server will perform the same operation on all indexes in \(\mathcal{C}\), returning the union of their results.
For this scheme to work, I further impose the restriction that no two indexes of the same order may exist at the same time in a collection (i.e., all elements of \(\mathcal{O}\) are unique). When new data are to be added to the SE index, a new index is built for the new data, resulting in a new index \(\mathcal{I}_{o}\). If no other index of order \(o\) exists in \(\mathcal{C}\), then \(\mathcal{I}_{o}\) is added to \(\mathcal{C}\). Otherwise, the contents of the existing \(\mathcal{I}_{o}\) are merged with the new data to create a new index \(\mathcal{I}_{p}\). Then \(\mathcal{I}_{p}\)_replaces_ the old \(\mathcal{I}_{o}\) in \(\mathcal{C}\). It may or may not be the case that \(o=p\). (If, at this point, there is an existing \(p\)-order index in \(\mathcal{C}\), then this process is repeated until all the cascading merges have resulted in an index of an order not currently on the server.)
By using this exponential progression in index sizes, I seek to minimize the amount of index data rebuilt at any given time. The larger-order indexes (which will have more data in them) are merged less frequently than the smaller, lower-order ones.
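The cascading-merge rule can be summarized with a small simulation. This is only an illustration: plaintext tuple lists stand in for the encrypted indexes, and the function names are mine.

```python
import math

def index_order(n_tuples: int) -> int:
    """Order of an index holding n_tuples tuples: o = ceil(log2 N)."""
    return math.ceil(math.log2(max(n_tuples, 2)))

def add_batch(collection: dict[int, list], new_tuples: list) -> None:
    """Insert a new batch, cascading merges until every order is unique.

    `collection` maps order -> list of (word, doc_id) tuples; in the real
    system each entry would be a separately encrypted index on the server.
    """
    data = list(new_tuples)
    o = index_order(len(data))
    while o in collection:
        data += collection.pop(o)    # download, decrypt, and merge (simulated)
        o = index_order(len(data))   # merged index may land on a higher order
    collection[o] = data             # re-encrypt and upload (simulated)
```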
Implementing this feature required a compromise to be added to the original scheme proposed by Demertzis and Papamanthou--I added an encrypted list \(\Delta\) of all indexed search terms. Only the client can decrypt this list, and it is never referenced during normal operations. It is only accessed and decrypted by the client during merge operations. Doing this is necessary because the SE index is set up to be useful only if a client already knows what search terms they are looking for, so there was no previous reason to store \(\Delta\) inside the index. Thus, no means were provided to reconstruct the original set \(\Delta\) of search terms that was used. Without that information, the rest of the index cannot be decoded back into the original set of tuples.
My updated index-building process (replacing Algorithm 1) is summarized in Algorithm 2.
```
procedure IndexGen(\(k,\mathcal{D}\))  \(\triangleright\) Modifies and extends original Setup(\(k,\mathcal{D}\))
    Let \(\Delta\) be the list of all "interesting" keywords in document set \(\mathcal{D}\).
    \(N\leftarrow\sum_{w\in\Delta}|\mathcal{D}(w)|\)
    \(\ell\leftarrow\lceil\log N\rceil\)
    \(p\leftarrow\lceil\ell/s\rceil\)
    \(\mathcal{L}\leftarrow\{\ell,\ell-p,\ldots,\ell-(s-1)p\}\). Let order \(o=\ell\). We will now build a new order-\(o\) index \(\mathcal{I}\).
    if \(L>1\) then \(\mathcal{L}\leftarrow\mathcal{L}\cup\{0\}\) end if
    if an order-\(o\) index already exists on the server then
        Retrieve and decrypt \(\Delta_{o}\) from the existing order-\(o\) index \(\mathcal{I}_{o}\)
        Retrieve and decrypt all storage buckets holding actual data from \(\mathcal{I}_{o}\) into \(\mathcal{D}_{o}\)
        \(\mathcal{D}^{\prime}\leftarrow\mathcal{D}\cup\mathcal{D}_{o}\)
        Delete old index \(\mathcal{I}_{o}\)
        return IndexGen(\(k,\mathcal{D}^{\prime}\))
    end if
    \(\forall i\in\mathcal{L}\) organize storage level \(A_{i}\), divided into buckets \(A_{i}[x]\).
    for each keyword \(w\in\Delta\) in random order do
        Find adjacent \(i,j\in\mathcal{L}:L2^{j}<|\mathcal{D}(w)|\leq L2^{i}\).
        Split \(\mathcal{D}(w)\) into a set of chunks \(C_{w}\). Set \(c=0\).
        for each chunk \(v\in C_{w}\) do
            \(c\gets c+1\).
            Let \(A\) be the buckets in \(A_{i}\) able to hold chunk \(v\).
            Pick one bucket \(A_{i}[x]\) from \(A\) randomly; store \(v\) in it.
            Add \(H(F(k_{1},w)\,\|\,c)\Rightarrow[i\,\|\,x]\oplus H(F(k_{2},w)\,\|\,c)\) to the hash table.
        end for
    end for
    Encrypt \(\Delta\) and store in the hash table in blocks of 100 words.
    Permute and encrypt entries in \(\mathcal{L}\); fill HT with random data.
    Upload \(\mathcal{I}\) to the server.
end procedure
```
**Algorithm 2** Create or Add to a Dynamic Index
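The core placement loop of Algorithm 2 can also be sketched in Python. This is an unencrypted, illustrative simulation (the bucket representation and helper names are mine), not the actual implementation.

```python
import random

def place_keyword(doc_ids: list, levels: dict, L: int):
    """Split a keyword's posting list into chunks and scatter them over level i.

    `levels` maps a level number i to its list of buckets, each bucket holding
    up to 2**i tuples.  Returns the (level, bucket, counter) addresses used.
    """
    n = len(doc_ids)
    # Smallest allocated level i with L * 2**i >= n (adjacent-level rule).
    i = min(lv for lv in levels if L * 2**lv >= n)
    chunk_size = 2**i
    addresses = []
    for c, start in enumerate(range(0, n, chunk_size), start=1):
        chunk = doc_ids[start:start + chunk_size]
        candidates = [x for x, b in enumerate(levels[i])
                      if len(b) + len(chunk) <= chunk_size]
        x = random.choice(candidates)    # random bucket with enough room
        levels[i][x].extend(chunk)
        addresses.append((i, x, c))
    return addresses
```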
## 6 Evaluation

The first part of the evaluation checked the correctness of the index-building and search operations. The remainder of the evaluation is focused on performance characteristics of the update function itself.
### Experimental Methodology
To evaluate the efficiency of my implementation, I set up test cases based on real-world data samples representative of highly dynamic document sets. I specifically chose email to approximate casual conversations which include both message text and metadata headers. Indexing archives of chat rooms, text messages, and other similar communication would be analogous.
My dataset was taken from the Enron email archive, [12] which is a collection of 517,401 actual email messages belonging to 150 users.3 Since this is a collection of actual communication between users, it provides a valuable test case to simulate how my SE implementation would fare in a real application. As noted by Klimt and Yang, "It can be used both as a large sample of real life email users and as a standard corpus for comparison of results using different methods." [13]
Footnote 3: I used a privacy-cleaned list which is slightly smaller than the original release of the dataset.
For the purposes of the evaluation, each email message is stored in an individual disk file (in Maildir format). The document ID (\(\mathsf{id}(D)\)) for each is the system's inode number for the message file. This provides a guaranteed unique ID for each file without the overhead of assigning custom IDs.
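For illustration, obtaining such an ID on a POSIX system is a one-liner; the Maildir path in the comment is hypothetical.

```python
import os

def doc_id(path: str) -> int:
    """Use the file's inode number as a stable, unique document ID (id(D))."""
    return os.stat(path).st_ino

# Example (hypothetical Maildir path):
# doc_id("Maildir/cur/1234567890.M1P2.host")  ->  e.g. 5646233
```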
One experimental run of my dynamic SE implementation looked at the case of maintaining a comprehensive index of all messages. To simulate a realistic flow of new messages coming into the repository, I batched up the messages according to the original Date: header lines.
I ran three separate experiments, using batch sizes of 1, 7, and 30 days, to measure the performance of the implementation if it were to be re-indexing on a daily, weekly, or monthly basis respectively. Each of these operated on five subsets of the Enron data corpus, organized as shown in Table 3 by the recipient's initials. This is meant to simulate an arbitrary partitioning of a workforce into "departments" which may have slightly different input patterns. Figure 3 shows the day-by-day intake of \((w,\mathsf{id}(D))\) tuples for each of the departments. We see from this that although there are differences in activity day-to-day, the overall pattern of activity was similar, giving us five sample sets to compare and contrast. I observed consistent behavior among all five departments as I examined the server resource usage as each of their document indexes expanded over time.
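The batching by Date: header can be sketched as follows; this is an illustrative grouping, assuming messages are available as (date, path) pairs, and the names are mine.

```python
from datetime import date, timedelta

def batches(messages: list, days: int):
    """Group (date, path) pairs into consecutive batches covering `days` days."""
    msgs = sorted(messages)
    if not msgs:
        return
    start = msgs[0][0]
    current = []
    for d, path in msgs:
        if (d - start) >= timedelta(days=days):
            yield current
            current, start = [], d
        current.append(path)
    if current:
        yield current

# Batch sizes used in the experiments: 1 (daily), 7 (weekly), 30 (monthly).
```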
With the input data thus broken into batches of various sizes, I ran each set of updates while measuring the following performance characteristics after each update operation:
* Size of each index \(\mathcal{I}_{o}\), measured as the number of tuples \(N\)
* Disk storage on the server for each \(\mathcal{I}_{o}\)
* Full set of words \(\Delta\) added in that update
* Contents of all storage locations in \(A_{i}\)
* Histogram of distribution of each keyword \(w\) within the input document set \(\mathcal{D}\)
* Wall-clock time in seconds taken to build the new index (including any needed merges with existing ones)
* Number of network transactions required to perform the update operation
* Number of bytes exchanged between client and server during the update operation
* Number of input documents added during that update
* If merging, how many words and tuples from previous indexes were downloaded to be merged into the new index
* Which orders of indexes were merged at each update
### Results
Using Department 1 as a representative example of the results obtained, we see in Figure 4 how the processing time varied as a function of the frequency of updates. At first glance, it is apparent that we can get an overall savings in processing work by doing updates less frequently. For example, in the close-up view in Figure 5 we see that over the same period of time the daily and weekly updates saw spikes of activity as they had to merge multiple indexes but the monthly updates (blue line) did not, since it had the advantage of performing the update operation with more data at a time locally in a single step rather than making more incremental updates which needed to be downloaded again to be merged later.
This prompted me to seek a predictor for these merge events, as a potential avenue to optimize the merge by anticipating when merges would be necessary. While there are some straightforward (if naive) indicators we could use, such as the number of incoming documents \(|\mathcal{D}|\) or the number of input words
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Department** & **Initial** & **People** & **Messages** & **Data (kB)** \\ \hline
1 & A–F & 33 & 119,477 & 667,632 \\
2 & G–K & 29 & 143,652 & 760,388 \\
3 & L–P & 32 & 103,354 & 504,668 \\
4 & Q–S & 38 & 105,930 & 526,084 \\
5 & T–Z & 18 & 44,988 & 225,600 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Arrangement of Users into Departments
Figure 3: Incoming Keyword Tuples For All Departments
Figure 4: Processing Time (Dept. 1) by Batch Size
Figure 5: Processing Time (Dept. 1) by Batch Size, Detail View
\(|\Delta|\) or even the number \(N\) of incoming tuples, none of these is a completely accurate predictor of either the time a merge will happen, or of the magnitude of each merge event.
The reason for this is that the conditions which trigger a merge event depend on the exact distribution of keywords (\(w\)) in each existing index \(\mathcal{I}_{o}\in\mathcal{C}\) as well as how the specific incoming data will alter that distribution. As we see in the data sample in Figure 6, a given input batch may have a few messages with a high diversity of input words (driving \(N\) significantly higher than \(|\mathcal{D}|\)) or vice versa.
If we want a loose correlation to track with the performance over time of the SE updates, \(N\) would still seem the best reasonable value that is easily at hand, which may be useful for long-range statistical analysis including the prediction of incoming message volume to the system overall, from which merge probability may be inferred.
### Issues
I discovered a pathological condition as an index hit certain sizes requiring large-scale cascading re-indexing operations. Most updates were completing in seconds, but these exceptional cases were taking many hours, and in some cases days, to complete. For example, among the daily updates for Dept. 1 there were 3 out of the total 995 update batches (0.3% of the total) which took more than one day to complete. These took 1 day, 1 hour; 1 day, 14 hours; and 4 days, 15 hours.
Further investigation led to an initial diagnosis that this was likely caused by a combination of resource scarcity on the server and inefficiencies in the Python implementation which still need to be optimized further. For example, at some points an index collection included an order-24 index. This would contain a small number of buckets of size 33,554,432 words. For 64-bit words (as my implementation uses to store the \(i\|\,x\) encoded values), that's approximately a quarter-gigabyte string value to be assembled, copied, transmitted, received, buffered, copied, and decoded. In a runtime system such as Python's, that is not necessarily going to be handled as efficiently as it might be.
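As a quick check of that size estimate, using the bucket size and 64-bit word width quoted above:

```python
words_per_bucket = 33_554_432          # bucket size quoted above (2**25 words)
bytes_per_word = 8                     # 64-bit encoded [i || x] values
print(words_per_bucket * bytes_per_word / 2**30)   # 0.25, i.e. a quarter GiB per bucket
```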
### Summary of Results
Overall, the results indicate that even this experimental Python implementation performed well enough to be of practical use. Updates finished in time for the index to be used for a reasonable time before the next update batch needed to start. The rare exceptions to this were due to the pathological case described in the previous section. Table 4 summarizes the runtime results of each of the experiments.
## 7 Future Work
Given the success of the experimental Python implementation, it makes sense to continue optimizing this design by coding it in a more efficient runtime system based on a language such as C or Go, as well as to continue looking for ways to optimize the server protocols and possibly the storage system itself. Specifically, the cause of the occasional very-long update operations should be investigated more.
This system also needs to be expanded to include the concept of deletion of messages from the index.
Finally, a formal evaluation of the cryptographic strength is needed, including the likelihood and impact of potential information leakage over time when using this design compared to other SE schemes.
I did not examine the effects of changing the locality and storage variables \(L\) and \(s\) since that was already thoroughly treated by Demertzis and Papamanthou [2] in their proposal for this SE architecture initially. However, it would be interesting to come back to that once my dynamic changes to their design have matured and evolved into a fully workable model with deletion support, to see if then the effect of adjusting those variables is different from what was found previously.
## 8 Conclusions
I have expanded on the work of previous SE researchers to implement an experimental yet functional dynamic SE indexing service. With that in place, I have analyzed the runtime performance with the update batch size as the independent variable for my experiments. I concluded from those experiments that the scheme as described in this paper is practical for real-world applications. Further, I identified how more efficient updates (requiring less overall work) are achieved by delaying updates for longer periods of time (e.g., weekly or monthly rather than hourly or daily). However, this comes at the cost of not having that new data available for users of the index. It is necessary for a server administrator to determine what update frequency serves the needs of their users best.
This work contributes to the future implementation of real-world encrypted document indexing systems which can be employed to organize repositories
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Dept. 1**} & \multicolumn{2}{c}{**Dept. 2**} & \multicolumn{2}{c}{**Dept. 3**} & \multicolumn{2}{c}{**Dept. 4**} & \multicolumn{2}{c}{**Dept. 5**} \\
**Sample** & **Avg** & **SD** & **Avg** & **SD** & **Avg** & **SD** & **Avg** & **SD** & **Avg** & **SD** \\ \hline Daily & 21.14 & 240.75 & 4.75 & 47.87 & 11.07 & 111.57 & 12.49 & 119.33 & 0.99 & 11.20 \\ Weekly & 110.92 & 576.32 & 25.28 & 117.14 & 53.82 & 242.18 & 72.53 & 301.46 & — & — \\ Monthly & 322.31 & 855.13 & — & — & 197.31 & 504.38 & 254.61 & 583.68 & 96.26 & 341.74 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Summary of Experimental Results (batch wall-clock time in minutes)
Figure 6: Incoming Tuple Count vs. Message Count (Dept. 1)
of chat logs, emails, discussion forums, or other dynamic document collections while protecting the confidentiality of the content being served.
## 9 Acknowledgements
I wish to express gratitude to the guidance provided by Dr. Charles V. Wright, my Ph.D. advisor, who supervised this research and provided valuable feedback along the way. Also to Dr. David Maier, for his advice and instruction on writing styles for the earliest drafts of this paper.
Searchable encryption (SE) index systems are a useful tool for storing and managing confidential information in cloud services. However, research on SE systems has so far concentrated mainly on theoretical aspects. Putting them into practice requires the further development of suitable protocols and operational models, in particular an operational model for maintaining an encrypted index of a dynamic document set (e.g., a mailbox). I created a practical end-to-end SE implementation that satisfies these requirements. It includes the first empirical performance evaluation of dynamic SE update behavior, shows a path from the theoretical concepts of previous researchers to a workable implementation, and identifies problems for further study.
2307.03010 | On the Upper Bound of Near Potential Differential Games | This letter presents an extended analysis and a novel upper bound of the
subclass of Linear Quadratic Near Potential Differential Games (LQ NPDG). LQ
NPDGs are a subclass of potential differential games, for which a distance
between an LQ exact potential differential game and the LQ NPDG is defined. LQ NPDGs
exhibit a unique characteristic: the smaller the distance from an LQ exact
potential differential game, the closer their dynamic trajectories. This letter
introduces a novel upper bound for this distance. Moreover, a linear relation
between this distance and the resulting trajectory errors is established,
opening the possibility for further application of LQ NPDGs. | Balint Varga | 2023-07-06T14:19:48 | http://arxiv.org/abs/2307.03010v3 | # On the Upper Bound of Near Potential Differential Games
###### Abstract
This paper presents a novel analysis of the subclass of near potential differential games (NPDG). NPDGs are a subclass of potential differential games, for which a distance between an exact potential differential game and the NPDG is defined. An appealing property of NPDGs is that the smaller the distance between an exact potential differential game and the NPDG, the closer their behaviors are. In this paper, a novel upper estimate for this distance is given. Moreover, a relationship between this distance and the resulting trajectory errors is established, opening the possibility for further application of NPDGs.
keywords: Differential Games; Potential Games; Near Potential Differential Games; Upper Bound
## 1 Introduction
Game theory is a widely used mathematical tool to model interaction between multiple agents. In a _game_, different _players_ interact with each other in order to optimize their own _cost function_. Due to the interaction between them, the optimal solution has to be computed in a coupled manner. One of the solution concepts is the so-called _Nash equilibrium_ (NE), which provides a solution to a _non-cooperative game_. Non-cooperative means that the players do not come to an agreement. Thus, the NE of an \(N\)-player game has to be computed by \(N\) coupled optimizations.
In the case of the so-called _potential games_, the game can be characterized by one single cost (potential) function instead of \(N\) coupled optimizations. Thus, the computation of the NE is obtained by optimizing this potential function. Furthermore, in the case of a convex potential function, the NE of the game is also unique, which makes this characterization of games attractive for practical applications like motion planning [1], communication network management [2], or modeling human-robot interactions [3].
In this paper, a specific subclass, the _near potential differential games_ (NPDG), is discussed. In contrast to static games, NPDGs are suitable for modeling games with an underlying dynamic system, which are called _differential games_ and are commonly used to model engineering applications like cyber-physical systems [4] or human-machine interactions [5]. The contribution of this paper is a novel upper estimate for NPDGs: the proofs and the analysis are extended compared to [6].
## 2 Preliminaries
In the following, the focus of this paper lies on the linear quadratic (LQ) differential games. LQ differential games are useful for modeling a wide range of engineering problems since they provide a simple and effective way to trade off conflicting objectives and make optimal decisions across dynamic systems.
### Exact Potential Differential Games
**Definition 1** (LQ Differential Game [7]).: _An **LQ Differential Game**\(\Gamma_{\mathrm{d}}\) is defined as a tuple of_
* _a set of players_ \(i\in\mathcal{P}\)_,_
* _a dynamic system_ \[\dot{\mathbf{x}}(t)=\mathbf{A}\mathbf{x}(t)+\sum_{i\in\mathcal{P}}\mathbf{B}^{(i)}\bm {u}^{(i)}(t),\] (1)
* _the joint set of control strategies of the players_ \(\mathcal{U}=\mathcal{U}^{(1)}\times...\times\mathcal{U}^{(N)}\) _and_
* _the set of the players' cost functions_ \(\mathcal{J}=\{J^{(1)},\,...\,,\,J^{(N)}\}\)_, where_ \[J^{(i)}=\frac{1}{2}\int_{0}^{\tau_{\text{end}}}\boldsymbol{x}(t)^{\mathsf{T}} \mathbf{Q}^{(i)}\boldsymbol{x}(t)+\sum_{j\in\mathcal{P}}\boldsymbol{u}^{(j)}( t)^{\mathsf{T}}\mathbf{R}^{(ij)}\boldsymbol{u}^{(j)}(t)\;dt,\;i\in\mathcal{P},\] (2) _where_ \(\mathbf{Q}^{(i)}\) _and_ \(\mathbf{R}^{(ij)}\) _represent the penalty matrices for the system states and system inputs of the player i. The end of the game is_ \(\tau_{\text{end}}\)_. It is assumed that the matrices of the cost functions have a diagonal structure_ \(\mathbf{Q}^{(i)}=\operatorname{diag}\left(q_{1}^{(i)},q_{2}^{(i)},...,q_{n}^ {(i)}\right)\) _and_ \(\mathbf{R}^{(ij)}=\operatorname{diag}\left(r_{1}^{(ij)},r_{2}^{(ij)},...,r_{ p_{i}}^{(ij)}\right)\)_, are positive semi-definite and positive definite, respectively._
**Definition 2** (Nash Equilibrium [8]).: _The game is in a Nash equilibrium (NE) if the players cannot deviate from their actual strategies without increasing their costs_
\[J^{(i)}\left(\boldsymbol{u}^{(i)^{\ast}},\boldsymbol{u}^{(-i)^{\ast}}\right) \leq J^{(i)}\left(\boldsymbol{u}^{(i)},\boldsymbol{u}^{(-i)^{\ast}}\right)\; \;\forall i\in\mathcal{P}.\]
In order to compute the NE of a differential game, the so-called coupled Riccati equations are set up [9, Chapter 7], for which the Hamiltonians of the players are computed such as
\[H^{(i)}=\frac{1}{2}\boldsymbol{x}(t)^{\mathsf{T}}\mathbf{Q}^{(i)}\boldsymbol {x}(t)+\frac{1}{2}\sum_{j\in\mathcal{P}}\boldsymbol{u}^{(j)}(t)^{\mathsf{T}} \mathbf{R}^{(ij)}\boldsymbol{u}^{(j)}(t)+\lambda^{(i)T}(t)\dot{\boldsymbol{x }}(t). \tag{3}\]
For further details on the solution to the coupled Riccati equation, it is referred to [10, Chapter 3].
**Definition 3** (LQ Exact Potential Differential Games [11]).: _Let an LQ differential game \(\Gamma_{\text{ed}}\) with system dynamics (1) be given. Furthermore, let the quadratic cost functions (2) and Hamiltonian functions (3) of the players be given. Assume that the aggregated inputs of the players and the aggregated input matrices are defined such that_
\[\boldsymbol{u}^{(p)}(t)=\left[\boldsymbol{u}^{(1)^{\mathsf{T}}}(t),\, \boldsymbol{u}^{(2)^{\mathsf{T}}}(t),\,...\,\boldsymbol{u}^{(N)^{\mathsf{T}}}( t)\right]^{\mathsf{T}},\]
\[\mathbf{B}^{(p)}=\left[\mathbf{B}^{(1)},\mathbf{B}^{(2)},...,\mathbf{B}^{(N)} \right],\]
_respectively. Furthermore, consider an LQ optimal control problem over an infinite time horizon \(\tau_{\text{end}}\rightarrow\infty\) with the cost function_
\[J^{(p)}=\frac{1}{2}\int_{0}^{\tau_{\text{end}}}\boldsymbol{x}^{\mathsf{T}}(t) \mathbf{Q}^{(p)}\boldsymbol{x}(t)+\boldsymbol{u}^{(p)^{\mathsf{T}}}(t) \mathbf{R}^{(p)}\boldsymbol{u}^{(p)}(t)\mathrm{d}t \tag{4}\]
_as well as the Hamilton function_
\[H^{(p)}(t)=\frac{1}{2}\boldsymbol{x}(t)^{\mathsf{T}}\mathbf{Q}^{(p)} \boldsymbol{x}(t)+\frac{1}{2}\boldsymbol{u}^{(p)^{\mathsf{T}}}(t)\mathbf{R}^{ (p)}\boldsymbol{u}^{(p)}(t)+\lambda^{(p)T}\dot{\boldsymbol{x}}(t), \tag{5}\]
_where the matrices \(\mathbf{Q}^{(p)}\) and \(\mathbf{R}^{(p)}\) are positive semi-definite and positive definite, respectively. If_
\[\frac{\partial H^{(p)}(t)}{\partial\boldsymbol{u}^{(i)}(t)}=\frac{\partial H^ {(i)}(t)}{\partial\boldsymbol{u}^{(i)}(t)} \tag{6}\]
_holds for \(\forall i\in\mathcal{P}\), the LQ differential game \(\Gamma_{\text{de}}\) is an LQ **exact potential differential game**, which has the potential function \(J^{(p)}\)._
Definition 3 reveals that, in the case of an exact potential differential game, the NE can be computed from the optimal control problem given by (1) and (4), as long as (6) holds. For further discussions and examples, the reader is referred to [12] and [13].
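A minimal numerical sketch of this observation follows, using a hand-picked two-player scalar example and illustrative potential weights (all values are my assumptions). It merely shows that, when (6) holds, the NE feedback follows from a single algebraic Riccati equation for the potential function instead of the coupled Riccati equations.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy 2-player LQ game on a scalar integrator: x' = A x + B1 u1 + B2 u2.
A = np.array([[0.0]])
B1 = np.array([[1.0]])
B2 = np.array([[1.0]])

# Potential-function weights (illustrative choices).
Qp = np.array([[2.0]])
Rp = np.diag([1.0, 1.0])            # block-diagonal over the aggregated input

Bp = np.hstack([B1, B2])            # aggregated input matrix B^(p)
Pp = solve_continuous_are(A, Bp, Qp, Rp)

Kp = np.linalg.solve(Rp, Bp.T @ Pp)  # K^(p) = R^(p)^-1 B^(p)^T P^(p)
Ac = A - Bp @ Kp                     # closed-loop dynamics under the NE feedback
print(Kp, Ac)
```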
### Near Potential Differential Games
The core idea of near potential games is the usage of a distance metric between two differential games. In that way, the required exactness of the exact potential differential games, cf. Definition 3, is transformed into a less restrictive condition, which permits a small remaining difference between the two games. The concept of near potential static games is introduced in [14; 15], based on the intuitive idea that if two games are "close" in terms of the properties of the players' strategy sets, their properties in terms of NE should be somehow similar. A systematic framework for static games was developed in [14]. It was shown that a near potential static game has a similar convergence of the strategies1 compared to an exact potential static game. A similar convergence of the strategies means that similar changes in the input strategies lead to similar changes in the payoffs in the game. Furthermore, it is also shown that the meaning of "close" can be quantified in the developed framework, see [14]. Note that the novelty of this paper is the upper bound with a new proof compared to [6]. In the following, the extension of the concept of near potential static games to NPDGs is presented.
Footnote 1: Note that the convergence of static games means the convergence of the decision-making process, which leads to one of the NEs of the game. The term _dynamics_ has no relation to the dynamics of the system states in the context of differential games.
### Distance between two LQ Differential Games
Similar to the static case [15], a distance measure between two differential games is introduced.
**Definition 4** (Differential Distance).: _Let an exact potential differential game \(\Gamma_{\mathrm{ed}}^{(p)}\) with the potential function \(J^{(p)}\) be given. Furthermore, let an arbitrary LQ differential game \(\Gamma_{\mathrm{nd}}\) according to Definition 1 be given. The differential distance (DD) between \(\Gamma_{\mathrm{ed}}^{(p)}\) and \(\Gamma_{\mathrm{nd}}\) is defined as_
\[\sigma_{d}^{(i)}(t):=\left\|\frac{\partial H^{(p)}(t)}{\partial\mathbf{u}^{(i)}(t )}-\frac{\partial H^{(i)}(t)}{\partial\mathbf{u}^{(i)}(t)}\right\|_{2},\ i\in \mathcal{P}. \tag{7}\]
**Note 4:** Definition 4 defines a vector space in which two games can be compared and their "closeness" can be quantified.
**Definition 5** (Near Potential Differential Game).: _A differential game \(\Gamma_{\mathrm{nd}}\) is said to be an NPDG if the DD between \(\Gamma_{\mathrm{nd}}\) and an arbitrary exact potential differential game \(\Gamma_{\mathrm{ed}}^{(p)}\) is_
\[\max_{i}\left\|\sigma_{d}^{(i)}(t)\right\|_{2}<\Delta,\ \ i\in\mathcal{P}, \tag{8}\]
_where \(\Delta\geq 0\) is a small constant._
**Note 5.1:** Definition 5 does not exclude the subclass of exact potential differential games as \(\Delta=0\) is possible. Thus, exact potential differential games are a subset of NPDGs.
**Note 5.2:** The maximum DD is a measure of the likeness between the games. As the maximum DD increases, the deviations between the state and input trajectories of the NPDG and those of the exact potential differential game gradually become larger. Thus, the main question is, for a given upper bound \(\Delta\), how large a perturbation of the state and input dynamics between \(\Gamma_{\mathrm{nd}}\) and \(\Gamma_{\mathrm{ed}}^{(p)}\) is admissible. Therefore, this perturbation is quantitatively characterized for LQ differential games in the following.
## 3 Upper Bound of NPDGs
The main results of this paper are presented in this section: The novel upper bound of the DD and a further analysis of the boundness of an NPDG.
### Properties of an NPDG
**Theorem 1** (Lq Npdg).: _Let an LQ exact potential differential game \(\Gamma_{\text{ed}}^{(p)}\) with its state trajectories \(\mathbf{x}^{(p)}(t)\) in its NE be given. Furthermore, let an arbitrary LQ differential game \(\Gamma_{\text{nd}}\) according to Definition 1 with its state trajectories \(\mathbf{x}^{*}(t)\) in the NE of \(\Gamma_{\text{nd}}\) be given. If_
\[\max_{i}\left\|\mathbf{B}^{(i)^{\mathsf{T}}}\mathbf{P}^{(p)}-\mathbf{B}^{(i)^{\mathsf{T}}}\mathbf{P}^{(i)}\right\|_{2}<\Delta^{*} \tag{9}\]
_holds, then \(\Gamma_{\text{nd}}\) is an LQ NPDG in accordance with Definition 5._
Proof.: The derivative of \(H^{(i)}\) is expressed as
\[\frac{\partial H^{(i)}(t)}{\partial\mathbf{u}^{(i)}(t)}=\mathbf{R}^{(i)}\mathbf{u}^{(i)}(t)+\mathbf{B}^{(i)T}\mathbf{\lambda}^{(i)}(t), \tag{10}\]
which holds for \(i\in\mathcal{P}\). Since (10) is zero along the optimal control laws of the players, a small perturbation around the optimal solution is sought. Based on [3], the derivatives of the Hamiltonian of player \(i\) can be rewritten as
\[\frac{\partial H^{(i)}(t)}{\partial\mathbf{u}^{(i)}(t)}=-\varepsilon_{c}^{(i)}( \mathbf{x})\mathbf{B}^{(i)T}\mathbf{P}^{(i)}\mathbf{x}^{*}(t), \tag{11}\]
and for the derivatives of the Hamiltonian of the potential function
\[\frac{\partial H^{(p)}(t)}{\partial\mathbf{u}^{(i)}(t)}=-\varepsilon_{c}^{(p)}(\mathbf{x})\mathbf{B}^{(i)T}\mathbf{P}^{(p)}\mathbf{x}^{(p)}(t) \tag{12}\]
are obtained, where \(\varepsilon_{c}^{(p)}(\mathbf{x})\ll 1\) and \(\varepsilon_{c}^{(i)}(\mathbf{x})\ll 1\) are scalar perturbation functions. Substituting the derivatives into (7), the DD is stated as
\[\sigma_{d}^{(i)}(t)=\left\|\varepsilon_{c}^{(p)}(\mathbf{x})\mathbf{B}^{(i)T}\mathbf{P}^{(p)}\mathbf{x}^{(p)}(t)-\varepsilon_{c}^{(i)}(\mathbf{x})\mathbf{B}^{(i)T}\mathbf{P}^{(i)}\mathbf{x}^{*}(t)\right\|_{2}.\]
Introducing an upper bound of the variation \(\varepsilon_{c}:=\max\left(\varepsilon_{c}^{(p)}(\mathbf{x}),\varepsilon_{c}^{(i)} (\mathbf{x})\right)\), the DD is rewritten as
\[\sigma_{d}^{(i)}(t) =\left\|\varepsilon_{c}\mathbf{B}^{(i)T}\mathbf{P}^{(p)}\mathbf{x}^{(p)}(t)-\varepsilon_{c}\mathbf{B}^{(i)T}\mathbf{P}^{(i)}\mathbf{x}^{*}(t)\right\|_{2} \tag{13}\] \[\leq|\varepsilon_{c}|\left\|\mathbf{B}^{(i)T}\mathbf{P}^{(p)}\mathbf{x}^{(p)}(t)-\mathbf{B}^{(i)T}\mathbf{P}^{(i)}\mathbf{x}^{*}(t)\right\|_{2}. \tag{14}\]
In the following, it is assumed that there is a \(\Delta\mathbf{x}^{(p)}(t)\geq 0\) such that
\[\mathbf{x}^{(p)}(t) =\mathbf{x}^{*}(t)+\Delta\mathbf{x}^{(p)}(t)\text{ or } \tag{15}\] \[\mathbf{x}^{(p)}(t) =\mathbf{x}^{*}(t)-\Delta\mathbf{x}^{(p)}(t) \tag{16}\]
hold \(\forall t\in[0,\tau_{\text{end}}]\). On one hand, if (15) holds, the upper bound of \(\sigma_{d}^{(i)}(t)\) is rewritten to
\[\sigma_{d}^{(i)}(t) =|\varepsilon_{c}|\left\|\mathbf{B}^{(i)T}\mathbf{P}^{(p)}\mathbf{x}^{(p)}(t)-\mathbf{B}^{(i)T}\mathbf{P}^{(i)}\mathbf{x}^{(p)}(t)+\mathbf{B}^{(i)T}\mathbf{P}^{(i)}\Delta\mathbf{x}^{(p)}(t)\right\|_{2}\] \[\leq|\varepsilon_{c}|\left\|\left(\mathbf{B}^{(i)T}\mathbf{P}^{(p)}-\mathbf{B}^{(i)T}\mathbf{P}^{(i)}\right)\mathbf{x}^{(p)}(t)\right\|_{2}+\underbrace{|\varepsilon_{c}|\left\|\mathbf{B}^{(i)T}\mathbf{P}^{(i)}\Delta\mathbf{x}^{(p)}(t)\right\|_{2}}_{\approx 0\text{ as }\varepsilon_{c}\Delta\mathbf{x}^{(p)}\to 0} \tag{17}\] \[\leq|\varepsilon_{c}|\left\|\mathbf{B}^{(i)T}\mathbf{P}^{(p)}-\mathbf{B}^{(i)T}\mathbf{P}^{(i)}\right\|_{2}\left\|\mathbf{x}^{(p)}(t)\right\|_{2},\ i\in\mathcal{P}. \tag{18}\]
On the other hand, if (16) holds, the upper bound of \(\sigma_{d}^{(i)}(t)\) is
\[\sigma_{d}^{(i)}(t)\leq\left\|\varepsilon_{c}\right\|\left\|{\bf B}^{(i)}{}^{ \mathsf{T}}{\bf P}^{(i)}-{\bf B}^{(i)T}{\bf P}^{(p)}\right\|_{2}\left\|{\bf x}^ {*}(t)\right\|_{2}\ i\in\mathcal{P}. \tag{19}\]
Introducing the notation for the maximum magnitude of the state vectors \(x_{\max}:=\max\left(\left\|{\bf x}^{*}(t)\right\|_{2},\,\left\|{\bf x}^{(p)}(t)\right\|_{2}\right),\) the estimates (18) and (19) can be combined into
\[\sigma_{d}^{(i)}(t)\leq\left|\varepsilon_{c}\right|\left\|{\bf B}^{(i)}{}^{ \mathsf{T}}{\bf P}^{(p)}-{\bf B}^{(i)T}{\bf P}^{(i)}\right\|_{2}\,x_{\max}\ i\in \mathcal{P}.\]
Introducing \(\Delta^{*}=\frac{\Delta}{\left|\varepsilon_{c}\right|\cdot x_{\max}}\) leads to the upper bound of \(\sigma_{d}^{(i)},\)
\[\max_{i}\left\|{\bf B}^{(i)}{}^{\mathsf{T}}{\bf P}^{(p)}-{{\bf B}^{(i)}}^{ \mathsf{T}}{\bf P}^{(i)}\right\|_{2}<\Delta^{*}\]
proving that \(\Gamma_{\mathrm{nd}}\) is an NPDG with an upper bound of \(\Delta^{*}.\)
If the upper bound of the DD \(\boldsymbol{\sigma}_{d}\) between the NPDG and the exact potential differential game is sufficiently _small_, similar conclusions about the closed-loop characteristics can be drawn. In the case of differential games, the system state trajectories are analyzed2. The terms _small_ and _similar_ are made more precise in the next subsection.
Footnote 2: In the static case, the decision procedure to find the NE is the focus of the analysis. For a given distance between two static games, an approximate NE with an \(\epsilon\) limit is obtained, which is called the \(\epsilon\)-NE of the game. For more information on the near potential static game and the concept of \(\epsilon\)-Nash Equilibrium, it is referred to [15] or [16, Chapter 19].
### Dynamics of LQ NPDGs
The dynamics of the system and input trajectories are analyzed in order to provide an estimation of the differences between two LQ differential games. Let it be assumed for the LQ differential game \(\Gamma_{\mathrm{nd}}\) that the control laws of the players \(i\in\mathcal{P}\) are obtained from the solution to the coupled Riccati equations over an infinite time horizon, which leads to the closed-loop system dynamics
\[\dot{\boldsymbol{x}}(t)={\bf A}_{c}^{*}\boldsymbol{x}(t),\ \ \boldsymbol{x}(t_{0})= \boldsymbol{x}_{0},\ \text{where}\ {\bf A}_{c}^{*}={\bf A}-\sum_{i\in\mathcal{P}}{\bf B}^{(i)}{\bf R}^{(i)}{}^{ -1}{\bf B}^{(i)}{}^{\mathsf{T}}{\bf P}^{(i)} \tag{20}\]
and that the unique solution to (20) is
\[\boldsymbol{x}^{*}(t)=e^{{\bf A}_{c}^{*}t}\boldsymbol{x}_{0}. \tag{21}\]
For the LQ exact potential differential games \(\Gamma_{\mathrm{ed}}^{(p)}\), the control law \({\bf K}^{(p)}={\bf R}^{(p)-1}{\bf B}^{(p)}{}^{\mathsf{T}}{\bf P}^{(p)}\) is obtained from the optimization of the potential function (4), which is used to compute the feedback system dynamics
\[\dot{\boldsymbol{x}}^{(p)}(t)={\bf A}_{c}^{(p)}\boldsymbol{x}^{(p)}(t),\ \ \boldsymbol{x}^{(p)}(t_{0})= \boldsymbol{x}_{0}^{(p)},\ \text{where}\ {\bf A}_{c}^{(p)}={\bf A}-{\bf B}^{(p)}{\bf R}^{(p)-1}{\bf B}^{(p)}{}^{ \mathsf{T}}{\bf P}^{(p)}. \tag{22}\]
The solution to (22) is
\[\boldsymbol{x}^{(p)}(t)=e^{{\bf A}_{c}^{(p)}t}\boldsymbol{x}_{0}^{(p)}. \tag{23}\]
From the state trajectories \(\boldsymbol{x}^{(p)}(t)\) and \(\boldsymbol{x}^{*}(t)\), an upper bound (\(\eta\)) of the errors is provided for a given \(\Delta\) between two games. For this, a notion of the difference between two closed-loop system behaviors is introduced in Definition 6.
**Definition 6** (Closed-Loop System Matrix Error).: _Consider an LQ exact potential differential game \(\Gamma_{\mathrm{ed}}^{(p)}\) with the system trajectories (23). Furthermore, assume that an arbitrary LQ differential game \(\Gamma_{\mathrm{nd}}\) is an NPDG with the system trajectories (21). Then, the **closed-loop system matrix error** between \(\Gamma_{\mathrm{ed}}^{(p)}\) and \(\Gamma_{\mathrm{nd}}\) is defined as_
\[\Delta{\bf K}:={\bf A}_{c}^{*}-{\bf A}_{c}^{(p)}. \tag{24}\]
**Note 6:** Two differential games are _similar_ if the closed-loop system matrix error is small and, consequently, the system trajectories of these two games \(\mathbf{x}^{*}(t)\) and \(\mathbf{x}^{(p)}(t)\) are _close_ to each other. In this case, \(\Gamma_{\text{nd}}\) is an NPDG. This closeness between an NPDG and an LQ exact potential differential game is quantified in Theorem 2.
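The closed-loop system matrix error and the trajectory bound used below can be illustrated numerically; the matrices in the following sketch are hand-picked examples of mine, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative closed-loop matrices of the NPDG and the exact potential game.
Ac_star = np.array([[0.0, 1.0], [-2.0, -3.0]])
Ac_p    = np.array([[0.0, 1.0], [-2.1, -3.1]])
dK = Ac_star - Ac_p                      # closed-loop system matrix error (24)

x0 = np.array([1.0, 0.0])
for t in (0.5, 1.0, 2.0):
    err = np.linalg.norm(expm(Ac_p * t) @ x0 - expm(Ac_star * t) @ x0)
    bound = (np.linalg.norm(dK, 2) * t
             * np.exp(max(np.linalg.norm(Ac_star, 2), np.linalg.norm(Ac_p, 2)) * t)
             * np.linalg.norm(x0))
    print(f"t={t}: trajectory error {err:.4f} <= bound {bound:.4f}")
```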
**Theorem 2** (Boundedness of NPDGs).: _Let an LQ NPDG \(\Gamma_{\text{nd}}\) and an exact potential differential game \(\Gamma_{\text{ed}}^{(p)}\) be given. Let the system state trajectories of the two games \(\Gamma_{\text{ed}}^{(p)}\) and \(\Gamma_{\text{nd}}\) be \(\mathbf{x}^{(p)}(t)\) and \(\mathbf{x}^{*}(t)\), respectively. Moreover,_
\[\mathbf{x}^{(p)}(t_{0})=\mathbf{x}^{*}(t_{0})=\mathbf{x}_{0} \tag{25}\]
_hold for the initial values. Then, the error between the system state trajectories of \(\Gamma_{\text{nd}}\) and \(\Gamma_{\text{ed}}^{(p)}\) are bounded by the function \(\eta(\Delta)\) over an arbitrary time interval \([t_{0},t_{1}]\), such that_
\[\left\|\mathbf{x}^{(p)}(t)-\mathbf{x}^{*}(t)\right\|_{2}\leq\eta(\Delta),\ \ \forall t\in[t_{0},t_{1}]. \tag{26}\]
Proof.: From the solution to the differential equations (20) and (22),
\[\left\|\mathbf{x}^{*}(t)-\mathbf{x}^{(p)}(t)\right\|_{2}=\left\|e^{\mathbf{A}_{c}^{*}t}\mathbf{x}_{0}-e^{\mathbf{A}_{c}^{(p)}t}\mathbf{x}_{0}\right\|_{2}\]
is obtained. As (25) holds, using Definition 6 and [17, Theorem 11.16.7] leads to
\[\left\|\mathbf{x}^{*}(t)-\mathbf{x}^{(p)}(t)\right\|_{2}\leq\left\|\Delta\mathbf{K}\right\|_{2}\,t\,e^{\max\left(\left\|\mathbf{A}_{c}^{*}\right\|_{2},\,\left\|\mathbf{A}_{c}^{(p)}\right\|_{2}\right)t}\left\|\mathbf{x}_{0}\right\|_{2}. \tag{27}\]
Due to the well-known scaling ambiguity, there is a manifold of the potential functions (4) that result in an identical feedback gain matrix, thus a scaling factor \(\kappa^{p}>0\in\,\mathbb{R}\) can be chosen such that \(\bar{J}^{(p)}=\kappa^{p}\cdot J^{(p)}\) and \(\left\|\mathbf{R}^{(p)}\right\|_{2}>1\) holds. Assuming a suitable scaling, (29) leads to
\[\left\|\Delta\mathbf{K}\right\|_{2}\leq\left\|\mathbf{B}^{(p)}\right\|_{2} \left\|\mathbf{B}^{(p)^{\mathsf{T}}}\mathbf{P}^{(p)}-\sum_{i\in\mathcal{P}} \mathbf{R}_{i}^{(p)}\mathbf{P}_{\sum\mathcal{P}}^{(i)}\right\|_{2}.\]
Then, let the following matrix be introduced
\[\tilde{\mathbf{F}}=\begin{bmatrix}\mathbf{B}^{(1)^{\mathsf{T}}} \mathbf{P}^{(p)}-\mathbf{R}_{1}^{(p)}\mathbf{P}_{\sum\mathcal{P}}^{(1)}\\ \vdots\\ \mathbf{B}^{(i)^{\mathsf{T}}}\mathbf{P}^{(p)}-\mathbf{R}_{i}^{(p)}\mathbf{P}_ {\sum\mathcal{P}}^{(i)}\\ \vdots\\ \mathbf{B}^{(N)^{\mathsf{T}}}\mathbf{P}^{(p)}-\mathbf{R}_{N}^{(p)}\mathbf{P}_ {\sum\mathcal{P}}^{(N)}\end{bmatrix}=\mathbf{B}^{(p)^{\mathsf{T}}}\mathbf{P}^ {(p)}-\sum_{i\in\mathcal{P}}\mathbf{R}_{i}^{(p)}\mathbf{P}_{\sum\mathcal{P}}^ {(i)}. \tag{34}\]
The so-called Frobenius norm is defined as the entry-wise Euclidean norm of a matrix (see [18]), for which
\[\left\|\tilde{\mathbf{F}}\right\|_{2}\leq\left\|\tilde{\mathbf{F}}\right\|_ {F} \tag{35}\]
holds (see [19, Chapter 5] or [17, Section 9.8.12]). Applying the definition of the Frobenius norm to (34),
\[\left\|\tilde{\mathbf{F}}\right\|_{F}\leq N\cdot\max_{i}\left(\left\|\mathbf{B}^{ (i)^{\mathsf{T}}}\mathbf{P}^{(p)}-\mathbf{R}_{i}^{(p)}\mathbf{P}_{\sum \mathcal{P}}^{(i)}\right\|_{2}\right),\;i\in\mathcal{P} \tag{36}\]
is obtained. Using property (35) and (36) leads to an upper bound
\[\left\|\Delta\mathbf{K}\right\|_{2}\leq \left\|\mathbf{B}^{(p)}\right\|_{2}\left\|\mathbf{B}^{(p)^{\mathsf{T}}}\mathbf{P}^{(p)}-\sum_{i\in\mathcal{P}}\mathbf{R}_{i}^{(p)}\mathbf{P}_{\sum\mathcal{P}}^{(i)}\right\|_{2} \tag{37}\] \[\leq \left\|\mathbf{B}^{(p)}\right\|_{2}N\cdot\max_{i}\left\|\mathbf{B}^{(i)^{\mathsf{T}}}\mathbf{P}^{(p)}-\mathbf{R}_{i}^{(p)}\mathbf{P}_{\sum\mathcal{P}}^{(i)}\right\|_{2}. \tag{38}\]
Due to the scaling ambiguity, \(\bar{J}^{(i)}=\kappa^{i}\cdot J^{(i)}\), \(\kappa^{i}>0\in\,\mathbb{R}\) holds and \(\kappa^{i}\) and \(\kappa^{p}\) can be modified to obtain \(\mathbf{R}^{(i)}\) and \(\mathbf{R}^{(p)}\), such that
\[\left\|\mathbf{B}^{(i)^{\mathsf{T}}}\mathbf{P}^{(p)}-\mathbf{R}_{i}^{(p)} \mathbf{R}^{(i)^{-1}}\mathbf{B}^{(i)^{\mathsf{T}}}\mathbf{P}^{(i)}\right\|_{2 }\leq\left\|\mathbf{B}^{(i)^{\mathsf{T}}}\mathbf{P}^{(p)}-\mathbf{B}^{(i)^{ \mathsf{T}}}\mathbf{P}^{(i)}\right\|_{2}\]
holds, for which
\[\left\|\mathbf{R}^{(p)}\mathbf{R}^{(i)^{-1}}\mathbf{B}^{(i)^{\mathsf{T}}} \mathbf{P}^{(i)}\right\|_{2}\geq\left\|\mathbf{B}^{(i)^{\mathsf{T}}}\mathbf{P} ^{(i)}\right\|_{2} \tag{39}\]
is sufficient (see [17, Section 9.9.42]). This leads to
\[\left\|\Delta\mathbf{K}\right\|_{2}\leq\left\|\mathbf{B}^{(p)}\right\|_{2}N \cdot\max_{i}\left\|\mathbf{B}^{(i)^{\mathsf{T}}}\mathbf{P}^{(p)}-\mathbf{B}^{( i)^{\mathsf{T}}}\mathbf{P}^{(i)}\right\|_{2}=\left\|\mathbf{B}^{(p)}\right\|_{2}N \cdot\Delta. \tag{40}\]
The substitution of the upper bound of \(\Delta\mathbf{K}\) in (27) by (40) leads to an upper bound for the trajectory error
\[\eta(\Delta)=\left\|\mathbf{B}^{(p)}\right\|_{2}N\cdot\Delta\cdot t\cdot e^{\max\left(\left\|\mathbf{A}_{c}^{*}\right\|_{2},\,\left\|\mathbf{A}_{c}^{(p)}\right\|_{2}\right)t}\left\|\mathbf{x}_{0}\right\|_{2}, \tag{41}\]
which implies
\[\left\|\mathbf{x}^{(p)}(t)-\mathbf{x}^{*}(t)\right\|_{2}\leq\eta(\Delta) \tag{42}\]
and completes the proof.
**Remark 1:**
From (42), it can be seen that the upper bound of the DD governs the maximal admissible error between the trajectories, where the function \(\eta(\Delta)\) depends only on the initial value, the system structure and the time interval \([t_{0},t_{1}]\).
**Remark 2:**
In (41), the time-dependent factor \(C_{\eta,\text{NPDG}}(t):=t\cdot e^{\max\left(\left\|\mathbf{A}_{c}^{*}\right\|_{2},\,\left\|\mathbf{A}_{c}^{(p)}\right\|_{2}\right)t}\) is bounded in the time interval \([t_{0},t_{1}]\). Thus, Theorem 2 holds \(\forall t\in[t_{0},t_{1}]\) only. However, \(\Delta\) can be defined as
\[\Delta:=\begin{cases}\Delta_{1}&\forall t\in[t_{0},t_{1}]\\ \Delta_{2}&\forall t\in[t_{1},t_{2}]\\ \vdots\\ \Delta_{N}&\forall t\in[t_{N-1},t_{N}]\\ \vdots\end{cases}\]
In case of asymptotically stable system state trajectories \(\mathbf{x}^{(p)}(t)\) and \(\mathbf{x}^{*}(t)\), a monotonically decreasing sequence, \(\Delta_{N}\leq\Delta_{N-1}\), can be assumed to prevent \(C_{\eta,\text{NPDG}}(t)\) from growing exponentially for \(t\to\infty\). Consequently, Theorem 2 also holds for \(t\to\infty\).
**Remark 3:**
Note that Theorem 2 differs from the estimation of the distance between solutions of two general initial value problems of differential equations: the upper bound between two general initial value problems is given as a function of the Lipschitz constant and is usually proved with the Gronwall-Bellman inequality, see e.g. [20, Theorem 3.4]. On the other hand, Theorem 2 provides the link between the upper bound \(\eta(\Delta)\) and the DD of the two games \(\Delta\), which differs from general initial value problems. Thus, Theorem 2 is a special case of Theorem 3.4 in [20].
## 4 Discussion
Potential games provide a more compact representation of strategic games, making them practical for engineering applications. Moreover, the computation of the NE is simpler since a single optimization problem has to be solved instead of the coupled Riccati equations. A further advantage of the proposed NPDGs is that the strictness of exact potential differential games is softened, extending the applicability of the concept of potential games.
Illustrative engineering examples include human-human or robot-human interactions, for which NPDGs are suitable models. Such interactions are modeled by differential games in literature [21; 22] and studies have demonstrated that the resulting motions of human-human or robot-human interactions can be characterized by NE of this differential game [23; 24]. Nevertheless, the assumption of NE can be violated due to the _bounded rationality_ of humans in some cases (cf. [25; 26]). In case of such violations, the proposed upper bound of the DD is a helpful tool to quantify the deviation from the NE. Thus, the concept can be used to analyze and design human-machine interactions.
## 5 Summary and Outlook
In this paper, the analyses of the novel subclass of near potential differential games are provided. It has been shown that the DD between an NPDG and an exact potential differential game admits a novel upper bound. Moreover, it has been shown that the resulting trajectory error is governed by the defined upper bound of the DD, which enables a prediction of the maximal trajectory error between an NPDG and an exact potential differential game. In the future, the proposed NPDG will be applied to model human-machine interactions.
## 6 Acknowledgment
This work was supported by the Federal Ministry for Economic Affairs and Climate Action, in the New Vehicle and System Technologies research initiative with Project number 19A21008D. | This letter presents an extended analysis and a novel upper bound for the subclass of linear quadratic near potential differential games (LQ NPDG). LQ NPDGs are a subclass of potential differential games for which a distance between an LQ exact potential differential game and the LQ NPDG is defined. LQ NPDGs exhibit a distinctive property: the smaller the distance from an LQ exact potential differential game, the closer their dynamic trajectories. This letter introduces a novel upper bound for this distance. Moreover, a linear relation between this distance and the resulting trajectory errors is established, opening the possibility for further applications of LQ NPDGs.
2306.03339 | An inequality related to the sieve of Eratosthenes | Let $\Phi(x,y)$ denote the number of integers $n\in[1,x]$ free of prime
factors $\le y$. We show that but for a few small cases, $\Phi(x,y)<.6x/\log y$
when $y\le\sqrt{x}$. | Steve Fan, Carl Pomerance | 2023-06-06T01:21:00 | http://arxiv.org/abs/2306.03339v3 | # An inequality related to
###### Abstract.
Let \(\Phi(x,y)\) denote the number of integers \(n\in[1,x]\) free of prime factors \(\leq y\). We show that but for a few small cases, \(\Phi(x,y)<.6x/\log y\) when \(y\leq\sqrt{x}\).
Key words and phrases: Buchstab's function, Selberg's sieve. 2010 Mathematics Subject Classification: 11N25.
## 1. Introduction
The sieve of Eratosthenes removes the multiples of the primes \(p\leq y\) from the set of positive integers \(n\leq x\). Let \(\Phi(x,y)\) denote the number of integers remaining. Answering a question of Ford, the first-named author [7] recently proved the following theorem.
**Theorem A**. _When \(2\leq y\leq x\), \(\Phi(x,y)<x/\log y\)._
If \(y>\sqrt{x}\), then \(\Phi(x,y)=\pi(x)-\pi(y)+1\) (where \(\pi(t)\) is the number of primes in \([1,t]\)), and so by the prime number theorem, Theorem A is essentially best possible when \(x^{1-\epsilon}<y<\epsilon x\). When \(y\leq\sqrt{x}\), there is a long history in estimating \(\Phi(x,y)\), and in particular, we have the following theorem, essentially due to Buchstab (see [13, Theorem III.6.4]).
**Theorem B**. _For \(\omega(u)\) the Buchstab function and \(u=\log x/\log y\geq 2\) and \(y\geq 2\),_
\[\Phi(x,y)=\frac{x}{\log y}\left(\omega(u)+O\left(\frac{1}{\log y}\right)\right).\]
The Buchstab function \(\omega(u)\) is defined as the unique continuous function on \([1,\infty)\) such that
\[u\omega(u)=1\text{ on }[1,2],\quad(u\omega(u))^{\prime}=\omega(u-1)\text{ on }(2,\infty).\]
Below is a graph of \(\omega(u)\) for \(u\in[1,8]\) generated by Mathematica.
It is known that \(\lim_{u\to\infty}\omega(u)=e^{-\gamma}=0.561459483566885\dots\) and that \(\omega(u)\) oscillates above and below its limiting value infinitely often. The minimum value of \(\omega(u)\) on \([2,\infty)\) is \(1/2\) at \(u=2\) and the maximum value \(M_{0}\) is \(0.567143290409783\dots\), occurring at \(u=2.76322283417162\dots\). In particular, it follows from Theorem B that if \(c>M_{0}\) and \(y\leq\sqrt{x}\) with \(y\) sufficiently large depending on the choice of \(c\), that \(\Phi(x,y)<cx/\log y\). In addition, using an inclusion-exclusion argument plus the fact that the Mertens product \(\prod_{p\leq y}(1-1/p)<M_{0}/\log y\) for all \(y\geq 2\), the inequality \(\Phi(x,y)<cx/\log y\) can be extended to all \(2\leq y\leq\sqrt{x}\), but now with \(x\) sufficiently large depending on \(c\).
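The quoted values can be reproduced with a simple numerical integration of the delay differential equation \((u\omega(u))'=\omega(u-1)\); the step size and the forward scheme below are arbitrary choices of mine, and the printed values are approximate.

```python
import math

h = 1e-4                        # step size for a simple forward (Euler-type) scheme
U = 8.0
n = int(U / h)
delay = round(1.0 / h)          # number of steps corresponding to the delay u - 1
omega = [0.0] * (n + 1)         # omega[k] approximates w(k*h); values for u < 1 unused

for k in range(n + 1):
    u = k * h
    if 1.0 <= u <= 2.0:
        omega[k] = 1.0 / u                      # u*w(u) = 1 on [1,2]
    elif u > 2.0:
        # (u*w(u))' = w(u-1): advance u*w(u) by one step of size h.
        prev_u = u - h
        omega[k] = (prev_u * omega[k - 1] + h * omega[k - delay]) / u

tail = [(k * h, w) for k, w in enumerate(omega) if k * h >= 2.0]
u_max, w_max = max(tail, key=lambda t: t[1])
print(w_max, u_max)                              # about 0.56714 near u ~ 2.763
print(omega[-1], math.exp(-0.5772156649015329))  # w(8) versus e^{-gamma} ~ 0.56146
```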
In light of Theorem A and given that \(\Phi(x,y)\) is a fundamental (and ancient) function, it seems interesting to try and make these consequences of Theorem B numerically explicit. We prove the following theorem.
**Theorem 1**.: _For \(3\leq y\leq\sqrt{x}\), we have \(\Phi(x,y)<.6x/\log y\). The same inequality holds when \(2\leq y\leq\sqrt{x}\) and \(x\geq 10\)._
To prove this we use some numerically explicit estimates of primes due to Rosser-Schoenfeld, Buthe, and others. In addition we use a numerically explicit version of the upper bound in Selberg's sieve.
Theorem B itself is also appealing. It provides a simple asymptotic formula for \(\Phi(x,y)\) as \(y\to\infty\) which is applicable in a wide range.
Writing
\[\Phi(x,y)=\frac{x}{\log y}\left(\omega(u)+\frac{\Delta(x,y)}{\log y}\right),\]
one may attempt to establish numerically explicit lower and upper bounds for \(\Delta(x,y)\) in the range \(y\leq\sqrt{x}\) for suitably large \(y\geq y_{0}\), where \(y_{0}\geq 2\) is some numerically computable constant. More precisely, de Bruijn [3] essentially showed that for any given \(\epsilon>0\), one has
\[\Phi(x,y)=\mu_{y}(u)e^{\gamma}x\log y\prod_{p\leq y}\left(1-\frac{1}{p}\right) +O(x\exp(-(\log y)^{3/5-\epsilon}))\]
for all \(x\geq y\geq 2\), where
\[\mu_{y}(u):=\int_{0}^{u-1}\omega(u-v)y^{-v}\,dv.\]
Recently, the first-named author [8] proved numerically explicit versions of this result applicable for \(y\) in wide ranges.
## 2. A prime lemma
Let \(\pi(x)\) denote, as usual, the number of primes \(p\leq x\). Let
\[\mathrm{li}(x)=\int_{0}^{x}\frac{dt}{\log t},\]
where the principal value is taken for the singularity at \(t=1\). There is a long history in trying to find the first point when \(\pi(x)\geq\mathrm{li}(x)\), which we now know is beyond \(10^{19}\). We prove a lemma based on this research.
**Lemma 1**.: _Let \(\beta_{0}=2.3\times 10^{-8}\). For \(x\geq 2\), we have \(\pi(x)<(1+\beta_{0})\mathrm{li}(x)\)._
Proof.: The result is true for \(x\leq 10\), so assume \(x\geq 10\). Consider the Chebyshev function
\[\theta(x)=\sum_{p\leq x}\log p.\]
We use [10, Prop. 2.1], which depends strongly on extensive calculations of Büthe [4, 5] and Platt [11]. This result asserts in part that \(\theta(x)\leq x-.05\sqrt{x}\) for \(1427\leq x\leq 10^{19}\) and for larger \(x\), \(\theta(x)<(1+\beta_{0})x\). One easily checks that \(\theta(x)<x\) for \(x<1427\), so we have
\[\theta(x)<(1+\beta_{0})x,\quad x>0.\]
By partial summation, we have
\[\pi(x) =\frac{\theta(x)}{\log x}+\int_{2}^{x}\frac{\theta(t)}{t(\log t)^{2} }\,dt\] \[<\frac{(1+\beta_{0})x}{\log x}+\int_{2}^{10}\frac{\theta(t)}{t( \log t)^{2}}\,dt+(1+\beta_{0})\int_{10}^{x}\frac{dt}{(\log t)^{2}}.\]
Since \(\int dt/(\log t)^{2}=-t/\log t+\mathrm{li}(t)\), we have
\[\pi(x) <(1+\beta_{0})\mathrm{li}(x)+\int_{2}^{10}\frac{\theta(t)}{t(\log t )^{2}}\,dt+(1+\beta_{0})(10/\log 10-\mathrm{li}(10)) \tag{1}\] \[<(1+\beta_{0})\mathrm{li}(x)-.144.\]
This gives the lemma.
After checking for \(x\leq 10\), we remark that an immediate corollary of (1) is the inequality
\[\pi(x)-k<(1+\beta_{0})(\mathrm{li}(x)-k),\quad 2\leq k\leq\pi(x),\ k\leq 10^{7}. \tag{2}\]
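For moderate \(x\) the inequality of Lemma 1 (and the sharper bound (1)) is easy to check directly. A small sketch, using a sieve for \(\pi(x)\) and SciPy's exponential integral for \(\mathrm{li}(x)=\mathrm{Ei}(\log x)\):

```python
import numpy as np
from scipy.special import expi

beta0 = 2.3e-8
N = 10 ** 6

sieve = np.ones(N + 1, dtype=bool)
sieve[:2] = False
for p in range(2, int(N ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p :: p] = False
pi = np.cumsum(sieve)                              # pi[x] = number of primes <= x

xs = np.arange(10, N + 1)
slack = (1 + beta0) * expi(np.log(xs)) - pi[xs]    # (1 + beta0)*li(x) - pi(x)
print("minimum slack on [10, 10^6]: %.3f" % slack.min())
# Lemma 1 asserts the slack is positive; by (1) it even exceeds .144
```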
## 3. Inclusion-exclusion
For small values of \(y\geq 2\) we can do a complete inclusion-exclusion to compute \(\Phi(x,y)\). Let \(P(y)\) denote the product of the primes \(p\leq y\). We have
\[\Phi(x,y)=\sum_{d|P(y)}\mu(d)\left\lfloor\frac{x}{d}\right\rfloor. \tag{3}\]
As a consequence, we have
\[\Phi(x,y)\leq\sum_{d|P(y)}\mu(d)\frac{x}{d}+\sum_{\begin{subarray}{c}d|P(y) \\ \mu(d)=1\end{subarray}}1=x\prod_{p\leq y}\left(1-\frac{1}{p}\right)+2^{\pi(y)-1}. \tag{4}\]
We illustrate how this elementary inequality can be used in the case when \(\pi(y)=5\), that is, \(11\leq y<13\). Then the product in (4) is \(16/77<.207793\). The remainder term in (4) is \(16\). And we have
\[\Phi(x,y)<.207793x+16<.6x/\log 13\]
when \(x\geq 613\). There remains the problem of dealing with smaller values of \(x\), which we address momentarily. We apply this method for \(y<71\).
For each row of Table 1, the max statistic is computed by listing the integers up to the \(x\) bound having a prime factor \(\leq y\), taking the complement of this set in the set of all integers up to the \(x\) bound, and then computing \((j\log p)/n\) for those members \(n\geq p^{2}\) of this complement, where \(n\) is the \(j\)th member of the set (counting from \(n=1\)) and \(p\) is the upper bound of the \(y\) interval. The max of these numbers is recorded as the max statistic.
As one can see, for \(y\geq 3\) the max statistic in Table 1 is below .6. However, for the interval \([2,3)\) it is above .6. One can compute that it is \(<.6\) once \(x\geq 10\).
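As an illustration, the row for the interval \([11,13)\) can be reproduced with the following sketch (not the code used for Table 1). The ratios are evaluated only for members \(n\geq 13^{2}\), reflecting the requirement \(y\leq\sqrt{x}\) in Theorem 1; with this restriction the tabulated values reappear.

```python
import math

primes = [2, 3, 5, 7, 11]          # the primes <= y for the interval [11, 13)
p_hi = 13                          # upper bound of the y interval

# x bound from (4): smallest x with  x*prod(1-1/p) + 2^(pi(y)-1) < .6*x/log(p_hi)
prod = 1.0
for p in primes:
    prod *= 1.0 - 1.0 / p          # = 16/77
x_bound = next(x for x in range(1, 10 ** 4)
               if prod * x + 2 ** (len(primes) - 1) < 0.6 * x / math.log(p_hi))
print("x bound:", x_bound)         # 613, as in Table 1

# max statistic: the j-th integer n <= x bound free of prime factors <= y,
# with the ratio (j log p)/n recorded for n >= p_hi^2
survivors = [n for n in range(1, x_bound + 1) if all(n % p for p in primes)]
max_stat = max(j * math.log(p_hi) / n
               for j, n in enumerate(survivors, start=1) if n >= p_hi ** 2)
print("max statistic: %.5f" % max_stat)   # .55424, as in Table 1
```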
This method can be extended to larger values of \(y\), but the \(x\) bound becomes prohibitively large. With a goal of keeping the \(x\) bound smaller than \(3\times 10^{7}\), we can extend a version of inclusion-exclusion to \(y<241\) as follows.
First, we "pre-sieve" with the primes 2, 3, and 5. For any \(x\geq 0\) the number of integers \(n\leq x\) with \(\gcd(n,30)=1\) is \((4/15)x+r\), where \(|r|\leq 14/15\), as can be easily verified by looking at values of \(x\in[0,30]\). We change the definition of \(P(y)\) to be the product of the primes in \((5,y]\). Then for \(y\geq 5\), we have
\[\Phi(x,y)\leq\frac{4}{15}\sum_{d|P(y)}\mu(d)\frac{x}{d}+\frac{14}{15}2^{\pi(y) -3}.\]
\begin{table}
\begin{tabular}{|l|l|l|} \hline \(y\) interval & \(x\) bound & max \\ \hline \([2,3)\) & 22 &.61035 \\ \([3,5)\) & 51 &.57940 \\ \([5,7)\) & 96 &.55598 \\ \([7,11)\) & 370 &.56634 \\ \([11,13)\) & 613 &.55424 \\ \([13,17)\) & 1603 &.56085 \\ \([17,19)\) & 2753 &.54854 \\ \([19,23)\) & 6296 &.55124 \\ \([23,29)\) & 17539 &.55806 \\ \([29,31)\) & 30519 &.55253 \\ \([31,37)\) & 76932 &.55707 \\ \([37,41)\) & \(1.6\times 10^{5}\) &.55955 \\ \([41,43)\) & \(2.9\times 10^{5}\) &.55648 \\ \([43,47)\) & \(5.9\times 10^{5}\) &.55369 \\ \([47,53)\) & \(1.4\times 10^{6}\) &.55972 \\ \([53,59)\) & \(3.0\times 10^{6}\) &.55650 \\ \([59,61)\) & \(5.4\times 10^{6}\) &.55743 \\ \([61,67)\) & \(1.2\times 10^{7}\) &.55685 \\ \([67,71)\) & \(2.4\times 10^{7}\) &.55641 \\ \hline \end{tabular}
\end{table}
Table 1. Small \(y\).
However, it is better to use the Bonferroni inequalities in the form
\[\Phi(x,y)\leq\frac{4}{15}\sum_{j\leq 4}\sum_{\begin{subarray}{c}d\mid P(y)\\ \nu(d)=j\end{subarray}}(-1)^{j}\frac{x}{d}+\sum_{i=0}^{4}\binom{\pi(y)-3}{i}=xs( y)+b(y),\]
say, where \(\nu(d)\) is the number of distinct prime factors of \(d\). (We remark that the expression \(b(y)\) could be replaced with \(\frac{14}{15}b(y)\).) The inner sums in \(s(y)\) can be computed easily using Newton's identities, and we see that
\[\Phi(x,y)\leq.6x/\log y\text{ for }x>b(y)/(.6/\log y-s(y)).\]
We have verified that this \(x\) bound is smaller than 30,000,000 for \(y<241\) and we have verified that \(\Phi(x,y)<.6x/\log y\) for \(x\) up to this bound and \(y<241\).
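The computation of \(s(y)\), \(b(y)\), and the resulting \(x\) bound can be organised as in the following sketch, shown here for the arbitrary illustrative value \(y=100\); the verification reported above runs over all \(y<241\).

```python
from math import comb, log

def primes_upto(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytes(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

def x_bound(y):
    """b(y)/(.6/log y - s(y)) with s and b as in the Bonferroni estimate."""
    ps = [p for p in primes_upto(y) if p > 5]                 # primes dividing P(y)
    pw = [sum(p ** (-k) for p in ps) for k in range(1, 5)]    # power sums of 1/p
    e = [1.0, pw[0]]                                          # elementary symmetric sums
    for j in range(2, 5):                                     # via Newton's identities
        e.append(sum((-1) ** (k - 1) * e[j - k] * pw[k - 1]
                     for k in range(1, j + 1)) / j)
    s = (4.0 / 15.0) * sum((-1) ** j * e[j] for j in range(5))
    b = sum(comb(len(ps), i) for i in range(5))               # pi(y) - 3 = #ps
    return b / (0.6 / log(y) - s)

print("x bound for y = 100: %d" % round(x_bound(100)))
```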
This completes the proof of Theorem 1 for \(y<241\).
## 4. When \(u\) is large: Selberg's sieve
In this section we prove Theorem 1 in the case that \(u=\log x/\log y\geq 7.5\) and \(y\geq 241\). Our principal tool is a numerically explicit form of Selberg's sieve.
Let \(\mathcal{A}\) be a set of positive integers \(a\leq x\) and with \(|\mathcal{A}|\approx X\). Let \(\mathcal{P}=\mathcal{P}(y)\) be a set of primes \(p\leq y\). For each \(p\in\mathcal{P}\) we have a collection of \(\alpha(p)\) residue classes mod \(p\), where \(\alpha(p)<p\). Let \(P=P(y)\) denote the product of the members of \(\mathcal{P}\). Let \(g\) be the multiplicative function defined for numbers \(d\mid P\) where \(g(p)=\alpha(p)/p\) when \(p\in\mathcal{P}\). We let
\[V:=\prod_{p\in\mathcal{P}}(1-g(p))=\prod_{p\in\mathcal{P}}\left(1-\frac{\alpha (p)}{p}\right).\]
We define \(r_{d}(\mathcal{A})\) via the equation
\[\sum_{\begin{subarray}{c}a\in\mathcal{A}\\ d\mid a\end{subarray}}1=g(d)X+r_{d}(\mathcal{A}).\]
The thought is that \(r_{d}(\mathcal{A})\) should be small. We are interested in \(S(\mathcal{A},\mathcal{P})\), the number of those \(a\in\mathcal{A}\) such that \(a\) is coprime to \(P\).
We will use Selberg's sieve as given in [9, Theorem 7.1]. This involves an auxiliary parameter \(D<X\) which can be freely chosen. Let \(h\) be the multiplicative function supported on divisors of \(P\) such that \(h(p)=g(p)/(1-g(p))\). In particular if each \(\alpha(p)=1\), then each \(g(p)=1/p\) and \(h(p)=1/(p-1)\), so \(h(d)=1/\varphi(d)\) for \(d\mid P\), where
\(\varphi\) is Euler's function. Henceforth we will make this assumption (that each \(\alpha(p)=1\)). Let
\[J=J_{D}=\sum_{\begin{subarray}{c}d|P\\ d<\sqrt{D}\end{subarray}}h(d),\quad R=R_{D}=\sum_{\begin{subarray}{c}d|P\\ d<D\end{subarray}}\tau_{3}(d)|r_{d}(\mathcal{A})|,\]
where \(\tau_{3}(d)\) is the number of ordered factorizations \(d=abc\), where \(a,b,c\) are positive integers. Selberg's sieve gives in this situation that
\[S(\mathcal{A},\mathcal{P})\leq X/J+R. \tag{5}\]
Note that if \(D\geq P^{2}\), then
\[J=\sum_{d|P}h(d)=\prod_{p\in\mathcal{P}}(1+h(p))=\prod_{p\in\mathcal{P}}(1-g(p ))^{-1}=V^{-1},\]
so that \(X/J=XV\). This is terrific, but if \(D\) is so large, the remainder term \(R\) in (5) is also large, making the estimate useless. So, the trick is to choose \(D\) judiciously so that \(R\) is under control with \(J\) being near to \(V^{-1}\).
Consider the case when each \(|r_{d}(\mathcal{A})|\leq r\), for a constant \(r\). In this situation the following lemma is useful.
**Lemma 2**.: _For \(y\geq 241\), we have_
\[R\leq r\sum_{\begin{subarray}{c}d<D\\ d|P(y)\end{subarray}}\tau_{3}(d)\leq rD(\log y)^{2}\prod_{\begin{subarray}{c}p \leq y\\ p\notin\mathcal{P}\end{subarray}}\left(1+\frac{2}{p}\right)^{-1}.\]
Proof.: Let \(\tau(d)=\tau_{2}(d)\) denote the number of positive divisors of \(d\). Note that
\[\sum_{d|P(y)}\frac{\tau(d)}{d}=\prod_{p\in\mathcal{P}}\left(1+\frac{2}{p} \right)=\prod_{p\leq y}\left(1+\frac{2}{p}\right)\prod_{\begin{subarray}{c}p \leq y\\ p\notin\mathcal{P}\end{subarray}}\left(1+\frac{2}{p}\right)^{-1}.\]
One can show that for \(y\geq 241\) the first product on the right is smaller than \(.95(\log y)^{2}\), but we will only use the "cleaner" bound \((\log y)^{2}\) (which holds when \(y\geq 53\)). Thus,
\[\sum_{\begin{subarray}{c}d<D\\ d|P(y)\end{subarray}}\tau_{3}(d) =\sum_{\begin{subarray}{c}d<D\\ d|P(y)\end{subarray}}\sum_{j|d}\tau(j)\leq\sum_{\begin{subarray}{c}j<D\\ j|P(y)\end{subarray}}\tau(j)\sum_{\begin{subarray}{c}d<D/j\\ d|P(y)\end{subarray}}1\] \[<D\sum_{\begin{subarray}{c}j<D\\ j|P(y)\end{subarray}}\frac{\tau(j)}{j}<D(\log y)^{2}\prod_{\begin{subarray}{c}p \leq y\\ p\notin\mathcal{P}\end{subarray}}\left(1+\frac{2}{p}\right)^{-1}.\]
This completes the proof.
To get a lower bound for \(J\) in (5) we proceed as in [9, Section 7.4]. Recall that we are assuming each \(\alpha(p)=1\) and so \(h(d)=1/\varphi(d)\) for \(d\mid P\).
Let
\[I=\sum_{\begin{subarray}{c}d\geq\sqrt{D}\\ d\mid P\end{subarray}}\frac{1}{\varphi(d)},\]
so that \(I+J=V^{-1}\). Hence
\[J=V^{-1}-I=V^{-1}(1-IV), \tag{6}\]
so we want an upper bound for \(IV\). Let \(\varepsilon\) be arbitrary with \(\varepsilon>0\). We have
\[I<D^{-\varepsilon}\sum_{d\mid P}\frac{d^{2\varepsilon}}{\varphi(d)}=D^{- \varepsilon}\prod_{p\leq y}\left(1+\frac{p^{2\varepsilon}}{p-1}\right),\]
and so, assuming each \(\alpha(p)=1\),
\[IV<D^{-\varepsilon}\prod_{p\in\mathcal{P}}\left(1+\frac{p^{2\varepsilon}-1}{p} \right)=:f(D,\mathcal{P},\varepsilon). \tag{7}\]
In particular, if \(y\geq 241\) and each \(r_{d}(\mathcal{A})\leq r\), then
\[S(\mathcal{A},\mathcal{P})\leq XV\big{(}1-f(D,\mathcal{P},\varepsilon)\big{)} ^{-1}+rD(\log y)^{2}\prod_{\begin{subarray}{c}p\leq y\\ p\not\in\mathcal{P}\end{subarray}}\left(1+\frac{2}{p}\right)^{-1}. \tag{8}\]
We shall choose \(D\) so that the remainder term is small in comparison to \(XV\), and once \(D\) is chosen, we shall choose \(\varepsilon\) so as to minimize \(f(D,\mathcal{P},\varepsilon)\).
**4.1. The case when \(y\leq 500{,}000\) and \(u\geq 7.5\).**
We wish to apply (8) to estimate \(\Phi(x,y)\) when \(u\geq 7.5\), that is, when \(x\geq y^{7.5}\). We have a few choices for \(\mathcal{A}\) and \(\mathcal{P}\). The most natural choice is that \(\mathcal{A}\) is the set of all integers \(\leq x\), \(X=x\), and \(\mathcal{P}\) is the set of all primes \(\leq y\). In this case, each \(r_{d}(\mathcal{A})\leq 1\), so that we can take \(r=1\) in (8) (since \(r_{d}(\mathcal{A})\geq 0\) in this case). Instead we choose (as in the last section) \(\mathcal{A}\) as the set of all integers \(\leq x\) that are coprime to \(30\) and we choose \(\mathcal{P}\) as the set of primes \(p\) with \(7\leq p\leq y\). Then \(X=4x/15\) and one can check that each \(|r_{d}(\mathcal{A})|\leq 14/15\), so we can take \(r=14/15\) in (8). Also,
\[\prod_{\begin{subarray}{c}p\leq y\\ p\not\in\mathcal{P}\end{subarray}}\left(1+\frac{2}{p}\right)^{-1}=\frac{3}{14},\]
when \(y\geq 5\). With this choice of \(\mathcal{A}\) and \(\mathcal{P}\), (8) becomes
\[\Phi(x,y)\leq XV\left(1-D^{-\varepsilon}\prod_{7\leq p\leq y}\left(1+\frac{p^{2 \varepsilon}-1}{p}\right)\right)^{-1}+\frac{1}{5}D(\log y)^{2}, \tag{9}\]
when \(y\geq 241\).
Our "target" for \(\Phi(x,y)\) is \(.6x/\log y\). We choose \(D\) here so that our estimate for the remainder term is \(1\%\) of the target, namely \(.006x/\log y\). Thus, in light of Lemma 2, we choose
\[D=.03x/(\log y)^{3}.\]
We have verified that for every value of \(y\leq 500{,}000\) and \(x\geq y^{7.5}\) that the right side of (9) is smaller than \(.6x/\log y\). Note that to verify this, if \(p,q\) are consecutive primes with \(241\leq p<q\), then \(S(\mathcal{A},\mathcal{P})\) is constant for \(p\leq y<q\), and so it suffices to show the right side of (9) is smaller than \(.6x/\log q\). Further, it suffices to take \(x=p^{7.5}\), since as \(x\) increases beyond this point with \(\mathcal{P}\) and \(\varepsilon\) fixed, the expression \(f(D,\mathcal{P},\varepsilon)\) decreases. For smaller values of \(y\) in the range, we used Mathematica to choose the optimal choice of \(\varepsilon\). For larger values, we let \(\varepsilon\) be a judicious constant over a long interval. As an example, we chose \(\varepsilon=.085\) in the top half of the range.
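For a single \(y\) the verification of (9) is straightforward once the primes up to \(y\) are available. The sketch below evaluates the right-hand side of (9) relative to \(x/\log y\) for the arbitrary illustrative value \(y=1000\), with a crude grid search over \(\varepsilon\) in place of the Mathematica optimization used above.

```python
import math

def primes_upto(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytes(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

y = 1000                                   # illustrative value; the verification runs y up to 500,000
x = y ** 7.5
D = 0.03 * x / math.log(y) ** 3
P = [p for p in primes_upto(y) if p >= 7]  # sieving primes after pre-sieving with 2, 3, 5

XV = (4.0 / 15.0) * x * math.prod(1.0 - 1.0 / p for p in P)
remainder = 0.2 * D * math.log(y) ** 2     # equals .006 x / log y by the choice of D

def f(eps):                                # the quantity f(D, P, eps) from (7)
    return D ** (-eps) * math.prod(1.0 + (p ** (2 * eps) - 1.0) / p for p in P)

eps_best = min((0.02 + 0.005 * k for k in range(80)), key=f)
rhs = XV / (1.0 - f(eps_best)) + remainder
print("eps = %.3f,  rhs/(x/log y) = %.4f  (target .6)"
      % (eps_best, rhs * math.log(y) / x))
```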
**4.2. When \(y\geq 500{,}000\) and \(u\geq 7.5\).**
As in the discussion above we have a few choices to make, namely for the quantities \(D\) and \(\varepsilon\). First, we choose \(x=y^{7.5}\), since the case \(x\geq y^{7.5}\) follows from the proof of the case of equality. We choose \(D\) as before, namely \(.03x/(\log y)^{3}\). We also choose
\[\varepsilon=1/\log y.\]
Our goal is to prove a small upper bound for \(f(D,\mathcal{P},\varepsilon)\) given in (7). We have
\[f(D,\mathcal{P},\varepsilon)<D^{-\varepsilon}\exp\left(\sum_{7\leq p\leq y} \frac{p^{2\varepsilon}-1}{p}\right).\]
We treat the two sums separately. First, by Rosser-Schoenfeld [12, Theorems 9, 20], one can show that
\[-\sum_{p\leq y}\frac{1}{p}<-\log\log y-.26\]
for all \(y\geq 2\), so that
\[-\sum_{7\leq p\leq y}\frac{1}{p}<-\log\log y-.26+31/30 \tag{10}\]
for \(y\geq 7\). For the second sum we have
\[\sum_{7\leq p\leq y}p^{2\varepsilon-1}=7^{2\varepsilon-1}+(\pi(y)-4)y^{2 \varepsilon-1}+\int_{11}^{y}(1-2\varepsilon)(\pi(t)-4)t^{2\varepsilon-2}\,dt.\]
At this point we use (2), so that
\[\frac{1}{1+\beta_{0}} \sum_{11\leq p\leq y}p^{2\varepsilon-1}<(\mathrm{li}(y)-4)y^{2 \varepsilon-1}+\int_{11}^{y}(1-2\varepsilon)(\mathrm{li}(t)-4)t^{2\varepsilon- 2}\,dt\] \[=(\mathrm{li}(y)-4)y^{2\varepsilon-1}-(\mathrm{li}(t)-4)t^{2 \varepsilon-1}\Big{|}_{11}^{y}+\int_{11}^{y}\frac{t^{2\varepsilon-1}}{\log t}\,dt\] \[=(\mathrm{li}(11)-4)11^{2\varepsilon-1}+\mathrm{li}(t^{2 \varepsilon})\Big{|}_{11}^{y}\] \[=(\mathrm{li}(11)-4)11^{2\varepsilon-1}+\mathrm{li}(y^{2 \varepsilon})-\mathrm{li}(11^{2\varepsilon}),\]
and so
\[\frac{1}{1+\beta_{0}}\sum_{7\leq p\leq y}p^{2\varepsilon-1}<7^{2 \varepsilon-1}+(\mathrm{li}(11)-4)11^{2\varepsilon-1}+\mathrm{li}(y^{2 \varepsilon})-\mathrm{li}(11^{2\varepsilon}). \tag{11}\]
There are a few things to notice, but we will not need them. For example, \(\mathrm{li}(y^{2\varepsilon})=\mathrm{li}(e^{2})\) and \(\mathrm{li}(11^{2\varepsilon})\approx\log(11^{2\varepsilon}-1)+\gamma\).
Let \(S(y)\) be the sum of the right side of (10) and \(1+\beta_{0}\) times the right side of (11). Then
\[f(D,\mathcal{P},\varepsilon)<D^{-\varepsilon}e^{S(y)}.\]
The expression \(XV\) in (9) is
\[x\prod_{p\leq y}\left(1-\frac{1}{p}\right).\]
We know from [10] that this product is \(<e^{-\gamma}/\log y\) for \(y\leq 2\times 10^{9}\), and for larger values of \(y\), it follows from [6, Theorem 5.9] (which proof follows from [6, Theorem 4.2] or [2, Corollary 11.2]) that it is \(<(1+2.1\times 10^{-5})e^{-\gamma}/\log y\). We have
\[\Phi(x,y) \leq XV\big{(}1-f(D,\mathcal{P},\varepsilon)\big{)}^{-1}+\frac{1} {5}D(\log y)^{2}\] \[<(1+2.1\times 10^{-5})\frac{x}{e^{\gamma}\log y}\big{(}1-D^{- \varepsilon}e^{S(y)}\big{)}^{-1}+\frac{.006x}{\log y}. \tag{12}\]
We have verified that \((1-D^{-\varepsilon}e^{S(y)})^{-1}\) is decreasing in \(y\), and that at \(y=500{,}000\) it is smaller than \(1.057\). Thus, (12) implies that
\[\Phi(x,y)<(1+2.1\times 10^{-5})\frac{1.057x}{e^{\gamma}\log y}+\frac{.006x}{ \log y}<\frac{.5995x}{\log y}.\]
This concludes the case of \(u\geq 7.5\).
## 5. Small \(u\)
In this section we prove that \(\Phi(x,y)<.57163x/\log y\) when \(u\in[2,3)\), that is, when \(y^{2}\leq x<y^{3}\).
For small values of \(y\), we calculate the maximum of \(\Phi(x,y)/(x/\log y)\) for \(y^{2}\leq x<y^{3}\) directly, as we did in Section 3 when we checked below the \(x\) bounds in Table 1 and the bound \(3\times 10^{7}\). We have done this for \(241\leq y\leq 1100\), and in this range we have
\[\Phi(x,y)<.56404\frac{x}{\log y},\quad y^{2}\leq x<y^{3},\quad 241\leq y\leq 1100.\]
Suppose now that \(y>1100\) and \(y^{2}\leq x<y^{3}\). We have
\[\Phi(x,y)=\pi(x)-\pi(y)+1+\sum_{y<p\leq x^{1/2}}(\pi(x/p)-\pi(p)+1). \tag{13}\]
Indeed, if \(n\) is counted by \(\Phi(x,y)\), then \(n\) has at most \(2\) prime factors (counted with multiplicity), so \(n=1\), \(n\) is a prime in \((y,x]\) or \(n=pq\), where \(p,q\) are primes with \(y<p\leq q\leq x/p\).
Let \(p_{j}\) denote the \(j\)th prime. Note that
\[\sum_{p\leq t}\pi(p)=\sum_{j\leq\pi(t)}j=\frac{1}{2}\pi(t)^{2}+\frac{1}{2}\pi( t).\]
Thus,
\[\sum_{y<p\leq x^{1/2}}(\pi(p)-1)=\frac{1}{2}\pi(x^{1/2})^{2}-\frac{1}{2}\pi(x^ {1/2})-\frac{1}{2}\pi(y)^{2}+\frac{1}{2}\pi(y),\]
and so
\[\Phi(x,y)=\pi(x)-M(x,y)+\sum_{y<p\leq x^{1/2}}\pi(x/p), \tag{14}\]
where
\[M(x,y)=\frac{1}{2}\pi(x^{1/2})^{2}-\frac{1}{2}\pi(x^{1/2})-\frac{1}{2}\pi(y)^ {2}+\frac{3}{2}\pi(y)-1.\]
We use Lemma 1 on various terms in (14). In particular, we have (assuming \(y\geq 5\))
\[\Phi(x,y)<(1+\beta_{0})\text{li}(x)+\sum_{y<p\leq x^{1/2}}(1+\beta_{0})\text{ li}(x/p)-M(x,y). \tag{15}\]
Via partial summation, we have
\[\begin{split}\sum_{y<p\leq x^{1/2}}\mathrm{li}(x/p)=& \ x^{1/2}\mathrm{li}(x^{1/2})\sum_{y<p\leq x^{1/2}}\frac{1}{p}\\ &-\int_{y}^{x^{1/2}}\left(\mathrm{li}(x/t)-\frac{x/t}{\log(x/t)} \right)\sum_{y<p\leq t}\frac{1}{p}\,dt.\end{split} \tag{16}\]
For \(1100\leq t\leq 10^{4}\) we have checked numerically that
\[0<\sum_{p\leq t}\frac{1}{p}-\log\log t-B<.00624,\]
where \(B=.261497\dots\) is the Meissel-Mertens constant. Further, for \(10^{4}\leq t\leq 10^{6}\),
\[0<\sum_{p\leq t}\frac{1}{p}-\log\log t-B<.00161.\]
(The lower bounds here follow as well from [12, Theorem 20].) It thus follows for \(1100\leq y\leq 10^{4}\) that
\[\sum_{y<p\leq x^{1/2}}\frac{1}{p}<\log\frac{\log(x^{1/2})}{\log y}+\beta_{1},\quad\sum_{y<p\leq t}\frac{1}{p}>\log\frac{\log t}{\log y}-\beta_{1}, \tag{17}\]
where \(\beta_{1}=.00624\). Now suppose that \(y\geq 10^{4}\). Using [6, Eq. (5.7)] and the value \(4.4916\) for "\(\eta_{3}\)" from [2, Table 15], we have that
\[\Big{|}\sum_{p\leq t}\frac{1}{p}-\log\log t-B\Big{|}<1.9036/(\log t)^{3},\ t \geq 10^{6}.\]
Thus, (17) continues to hold for \(y\geq 10^{4}\) with \(.00624\) improved to \(.00322\). We thus have from (16)
\[\begin{split}\sum_{y<p\leq x^{1/2}}\mathrm{li}(x/p)<& \ x^{1/2}\mathrm{li}(x^{1/2})\left(\log\frac{\log(x^{1/2})}{\log y }+\beta_{1}\right)\\ &-\int_{y}^{x^{1/2}}\left(\mathrm{li}(x/t)-\frac{x/t}{\log(x/t)} \right)\left(\log\frac{\log t}{\log y}-\beta_{1}\right)\,dt.\end{split} \tag{18}\]
Let \(R(t)=(1+\beta_{0})\mathrm{li}(t)/(t/\log t)\), so that \(R(t)\to 1+\beta_{0}\) as \(t\to\infty\). We write the first term on the right side of (15) as
\[\frac{x}{u\log y}R(x)=\frac{R(y^{u})}{u}\frac{x}{\log y},\]
and note that the first term on the right of (18) is less than
\[R(y^{u/2})\frac{2}{u}(\log(u/2)+\beta_{1})\frac{x}{\log y}.\]
For the expression \(\frac{1}{2}\pi(x^{1/2})^{2}-\frac{1}{2}\pi(x^{1/2})\) in \(M(x,y)\) we use the inequality \(\pi(t)>t/\log t+t/(\log t)^{2}\) when \(t\geq 599\), which follows from [1, Lemma 3.4] and a calculation (also see [6, Corollary 5.2]). Further, we use \(\pi(y)\leq R(y)y/\log y\) for the rest of \(M(x,y)\).
Using these estimates and numerical integration for the integral in (18) we find that
\[\Phi(x,y)<.57163\frac{x}{\log y},\quad y\geq 1100,\quad y^{2}\leq x<y^{3}.\]
## 6. Iteration
Suppose \(k\) is a positive integer and we have shown that
\[\Phi(x,y)\leq c_{k}\frac{x}{\log y} \tag{19}\]
for all \(y\geq 241\) and \(u=\log x/\log y\in[2,k)\). We can try to find some \(c_{k+1}\) not much larger than \(c_{k}\) such that
\[\Phi(x,y)\leq c_{k+1}\frac{x}{\log y}\]
for \(y\geq 241\) and \(u<k+1\). We start with \(c_{3}\), which by the results of the previous section we can take as.57163. In this section we attempt to find \(c_{k}\) for \(k\leq 8\) such that \(c_{8}<.6\). It would then follow from Section 4 that \(\Phi(x,y)<.6x/\log y\) for all \(u\geq 2\) and \(y\geq 241\).
Suppose that (19) holds and that \(y\) is such that \(x^{1/(k+1)}<y\leq x^{1/k}\). We have
\[\Phi(x,y)=\Phi(x,x^{1/k})+\sum_{y<p\leq x^{1/k}}\Phi(x/p,p^{-}). \tag{20}\]
Indeed the sum counts all \(n\leq x\) with least prime factor \(p\in(y,x^{1/k}]\), and \(\Phi(x,x^{1/k})\) counts all \(n\leq x\) with least prime factor \(>x^{1/k}\). As we have seen, it suffices to deal with the case when \(y=q_{0}^{-}\) for some prime \(q_{0}\).
Note that if (19) holds, then it also holds for \(y=x^{1/k}\). Indeed, if \(y\) is a prime, then \(\Phi(x,y)=\Phi(x,y+\epsilon)\) for all \(0<\epsilon<1\), and in this case \(\Phi(x,y)\leq c_{k}x/\log(y+\epsilon)\), by hypothesis. Letting \(\epsilon\to 0\) shows we have \(\Phi(x,y)\leq c_{k}x/\log y\) as well. If \(y\) is not prime, then for all sufficiently small \(\epsilon>0\), we again have \(\Phi(x,y)=\Phi(x,y+\epsilon)\) and the same proof works.
Thus, we have (19) holding for all of the terms on the right side of (20). This implies that
\[\Phi(x,q_{0}^{-})\leq c_{k}x\Bigg{(}\frac{1}{\log(x^{1/k})}+\sum_{q_{0}\leq p\leq x ^{1/k}}\frac{1}{p\log p}\Bigg{)}. \tag{21}\]
We expect that the parenthetical expression here is about the same as \(1/\log q_{0}\), so let us try to quantify this. Let
\[\epsilon_{k}(q_{0})=\max\Bigg{\{}\frac{-1}{\log q_{0}}+\frac{1}{\log(x^{1/k})} +\sum_{q_{0}\leq p\leq x^{1/k}}\frac{1}{p\log p}:y^{k}<x\leq y^{k+1}\Bigg{\}}.\]
Let \(q_{1}\) be the largest prime \(\leq x^{1/k}\), so that
\[\epsilon_{k}(q_{0})=\max\Bigg{\{}\frac{-1}{\log q_{0}}+\frac{1}{\log q_{1}}+ \sum_{q_{0}\leq p\leq q_{1}}\frac{1}{p\log p}:q_{0}<q_{1}\leq q_{0}^{1+1/k} \Bigg{\}}.\]
It follows from (21) that
\[\Phi(x,y)=\Phi(x,q_{0}^{-})\leq c_{k}x\left(\frac{1}{\log q_{0}}+\epsilon_{k} (q_{0})\right)=\frac{c_{k}x}{\log y}(1+\epsilon_{k}(q_{0})\log q_{0}).\]
Note that as \(k\) grows, \(\epsilon_{k}(q_{0})\) is non-increasing since the max is over a smaller set of primes \(q_{1}\). Thus, we have the inequality
\[\Phi(x,q_{0}^{-})\leq c_{3}(1+\epsilon_{3}(q_{0})\log q_{0})^{j}\frac{x}{\log y },\quad x^{1/3}<q_{0}\leq x^{1/(3+j)}. \tag{22}\]
Thus, we would like
\[c_{3}(1+\epsilon_{3}(q_{0})\log q_{0})^{5}<.6 \tag{23}\]
We have checked (23) numerically for primes \(q_{0}<1000\) and it holds for \(q_{0}\geq 241\).
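The check just mentioned amounts to the following brute-force computation (a sketch of the verification, not the original code):

```python
import math

def primes_upto(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytes(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

c3 = 0.57163
limit = 1000
ps = primes_upto(int(limit ** (4.0 / 3.0)) + 1)    # enough primes for q1 <= q0^{4/3}

worst = None
for q0 in [p for p in ps if 241 <= p < limit]:
    eps, running = -math.inf, 0.0
    for q1 in ps:
        if q1 < q0:
            continue
        if q1 > q0 ** (4.0 / 3.0):
            break
        running += 1.0 / (q1 * math.log(q1))       # sum over q0 <= p <= q1
        if q1 > q0:
            eps = max(eps, -1.0 / math.log(q0) + 1.0 / math.log(q1) + running)
    lhs = c3 * (1.0 + eps * math.log(q0)) ** 5
    if worst is None or lhs > worst[1]:
        worst = (q0, lhs)
print("largest value of c3*(1+eps_3(q0)*log q0)^5 for 241 <= q0 < 1000:")
print("q0 = %d, value = %.5f  (needs to be < .6)" % worst)
```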
This leaves the case of primes \(>1000\). We have the identity
\[\sum_{q_{0}\leq p\leq q_{1}} \frac{1}{p\log p}\] \[= \frac{-\theta(q_{0}^{-})}{q_{0}(\log q_{0})^{2}}+\frac{\theta(q_{ 1})}{q_{1}(\log q_{1})^{2}}+\int_{q_{0}}^{q_{1}}\theta(t)\left(\frac{1}{t^{2}( \log t)^{2}}+\frac{2}{t^{2}(\log t)^{3}}\right)\,dt,\]
via partial summation, where \(\theta\) is again Chebyshev's function. First assume that \(q_{1}<10^{19}\). Then, using [4], [5], we have \(\theta(t)\leq t\), so that
\[\sum_{q_{0}\leq p\leq q_{1}}\frac{1}{p\log p}<\frac{q_{0}-\theta(q_{0}^{-})}{q _{0}(\log q_{0})^{2}}+\frac{1}{\log q_{0}}-\frac{1}{\log q_{1}}.\]
We also have [4], [5] that \(q_{0}-\theta(q_{0}^{-})<1.95\sqrt{q_{0}}\), so that one can verify that
\[\epsilon_{3}(q_{0})<\frac{1.95}{\sqrt{q_{0}}(\log q_{0})^{2}}\]
and so (23) holds for \(q_{0}>1000\). It remains to consider the cases when \(q_{1}>10^{19}\), which implies \(q_{0}>10^{14}\). Here we use \(|\theta(t)-t|<3.965t/(\log t)^{2}\), which is from [6, Theorem 4.2] or [2, Corollary 11.2]. This shows that (23) holds here as well, completing the proof of Theorem 1.
| ```
Φ(x,y) を、区間 [1,x] に含まれる y 以下の素因数を持たない整数 n の個数とする。ごく少数の小さな例外を除き、y ≤ √x のとき Φ(x,y) < .6x/log y が成り立つことを示す。
```
2301.10484 | Minimal residual methods in negative or fractional Sobolev norms | For numerical approximation the reformulation of a PDE as a residual
minimisation problem has the advantages that the resulting linear system is
symmetric positive definite, and that the norm of the residual provides an a
posteriori error estimator. Furthermore, it allows for the treatment of general
inhomogeneous boundary conditions. In many minimal residual formulations,
however, one or more terms of the residual are measured in negative or
fractional Sobolev norms. In this work, we provide a general approach to
replace those norms by efficiently evaluable expressions without sacrificing
quasi-optimality of the resulting numerical solution. We exemplify our approach
by verifying the necessary inf-sup conditions for four formulations of a model
second order elliptic equation with inhomogeneous Dirichlet and/or Neumann
boundary conditions. We report on numerical experiments for the Poisson problem
with mixed inhomogeneous Dirichlet and Neumann boundary conditions in an
ultra-weak first order system formulation. | Harald Monsuur, Rob Stevenson, Johannes Storn | 2023-01-25T09:41:57 | http://arxiv.org/abs/2301.10484v2 | # Minimal residual methods in negative or fractional Sobolev norms
###### Abstract.
For numerical approximation the reformulation of a PDE as a residual minimisation problem has the advantages that the resulting linear system is symmetric positive definite, and that the norm of the residual provides an a posteriori error estimator. Furthermore, it allows for the treatment of general inhomogeneous boundary conditions. In many minimal residual formulations, however, one or more terms of the residual are measured in negative or fractional Sobolev norms. In this work, we provide a general approach to replace those norms by efficiently evaluable expressions without sacrificing quasi-optimality of the resulting numerical solution. We exemplify our approach by verifying the necessary inf-sup conditions for four formulations of a model second order elliptic equation with inhomogeneous Dirichlet and/or Neumann boundary conditions. We report on numerical experiments for the Poisson problem with mixed inhomogeneous Dirichlet and Neumann boundary conditions in an ultra-weak first order system formulation.
Key words and phrases: Least squares methods, Fortin interpolator, a posteriori error estimator, inhomogeneous boundary conditions, quasi-optimal approximation. 2020 Mathematics Subject Classification: 35B35, 35B45, 65N30. This research has been supported by the Netherlands Organization for Scientific Research (NWO) under contract no. SH-208-11, by the NSF Grant DMS ID 1720297, and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - SFB 1283/2 2021 - 317210226.
The bilinear form at the left hand side is bounded, symmetric, and coercive, so that
\[\|u-u^{\delta}\|_{X}\leq\|G\|_{\mathcal{L}(X,V)}\|G^{-1}\|_{\mathcal{L}(V,X)} \inf_{w\in X^{\delta}}\|u-w\|_{X}, \tag{1.2}\]
i.e., \(u^{\delta}\) is a _quasi-optimal_ approximation to \(u\) from \(X^{\delta}\).
Additional advantages of a MINRES discretisation are that the system matrix resulting from (1.1) is always symmetric positive definite, and that the method comes with an efficient and reliable computable a posteriori error estimator
\[\|f-Gu^{\delta}\|_{V}\in\big{[}\|G^{-1}\|_{\mathcal{L}(V,X)}^{-1}\|u-u^{\delta} \|_{X},\|G\|_{\mathcal{L}(X,V)}\|u-u^{\delta}\|_{X}\big{]}.\]
For more information about MINRES discretisations we refer to the monograph [1], where apart from general theory, many applications are discussed, including (but not restricted to) scalar second order elliptic boundary value problems, Stokes equations, and the equations of linear elasticity.
As explained in [1, §2.2.2], for a MINRES discretisation to be competitive it should be 'practical'. With that it is meant that \(V\) should not be a fractional or negative order Sobolev space, or when it is a Cartesian product, neither of its components should be of that kind, and at the same time \(X\) should not be a Sobolev space of order two (or higher) because that would require a globally \(C^{1}\) finite element subspace \(X^{\delta}\). In view of these requirements, a first natural step is to write a 2nd order PDE under consideration as a first order system. It turns out, however, that even then in many applications one or more components of \(V\) are fractional or negative order Sobolev spaces.
First, the imposition of inhomogeneous boundary conditions leads to residual terms that are measured in fractional Sobolev spaces. Although the capacity to handle inhomogeneous boundary conditions is often mentioned as an advantage of MINRES methods, until now a fully satisfactory solution how to deal with fractional Sobolev spaces seems not to be available. Second, if one prefers to avoid an additional regularity condition on the forcing term required for the standard 'practical' first order system formulation, one ends up with a residual that is measured in a negative Sobolev norm. Finally, more than one dual norm occurs with ultra-weak first order formulations, which for example are useful to construct 'robust' discretisations for Helmholtz equations ([1, 2]).
In [1] several possibilities are discussed to find a compromise between having norm equivalence, and so quasi-optimality, and 'practicality', for example by replacing negative or fractional Sobolev norms in the MINRES formulation by mesh-dependent weighted \(L_{2}\)-norms. The topic of the current paper is the replacement of negative or fractional Sobolev norms by computable quantities whilst fully retaining quasi-optimality of the MINRES method.
This paper is organized as follows. In Sect. 2 we give several examples of MINRES formulations of a model scalar second order elliptic boundary value problem, where except for one formulation, one or more terms of the residual are measured in fractional or negative Sobolev spaces. In an abstract setting in Sect. 3 it is shown how such 'impractical' MINRES formulations can be turned into 'practical' ones without compromising quasi-optimality. For the examples from Sect. 2, in Sect. 4 we verify (uniform) inf-sup conditions that are needed for the conversion of the 'impractical' to a 'practical' MINRES formulation. In this section, we also discuss
alternative approaches to handle dual norms ([1]), or to handle singular forcing terms in an already 'practical' MINRES discretisation ([11, 12]). In Sect. 5 we illustrate the theoretical findings with some numerical results, and a conclusion is presented in Sect. 6.
In this paper, by the notation \(C\lesssim D\) we will mean that \(C\) can be bounded by a multiple of \(D\), independently of parameters which \(C\) and \(D\) may depend on, such as the discretisation index \(\delta\). Obviously, \(C\gtrsim D\) is defined as \(D\lesssim C\), and \(C\eqsim D\) as \(C\lesssim D\) and \(C\gtrsim D\).
## 2. Examples of MINRES discretisations
The results from this section that concern well-posedness of MINRES formulations, i.e., boundedly invertibility of the operator \(G\), for the case of essential inhomogeneous boundary conditions are taken from [13]. The key to arrive at those results was a lemma that, in the slightly modified version from [13, Lemma 2.7], is recalled below.
**Lemma 2.1**.: _Let \(X\) and \(V_{2}\) be Banach spaces, and \(V_{1}\) be a normed linear space. Let \(T\in\mathcal{L}(X,V_{2})\) be surjective, and let \(G\in\mathcal{L}(X,V_{1})\) be such that \(G|_{\ker T}\in\mathcal{L}\mathrm{is}(\ker T,V_{1})\). Then \((G,T)\in\mathcal{L}\mathrm{is}\big{(}X,V_{1}\times V_{2}\big{)}\)._
On a bounded Lipschitz domain \(\Omega\subset\mathbb{R}^{d}\), where \(d\geq 2\), and closed \(\Gamma_{D},\Gamma_{N}\subset\partial\Omega\), with \(\Gamma_{D}\cup\Gamma_{N}=\partial\Omega\) and \(|\Gamma_{D}\cap\Gamma_{N}|=0\), we consider the following boundary value problem
\[\left\{\begin{array}{cl}-\mathrm{div}\;A\nabla u+Bu=g&\text{ on }\Omega,\\ u=h_{D}&\text{ on }\Gamma_{D},\\ \vec{n}\cdot A\nabla u=h_{N}&\text{ on }\Gamma_{N},\end{array}\right. \tag{2.1}\]
where \(\vec{n}\) is the outward pointing unit vector normal to the boundary, \(B\) is a bounded linear partial differential operator of at most first order, i.e.,
(C.1) \[B\in\mathcal{L}(H^{1}(\Omega),L_{2}(\Omega)),\]
and \(A(\cdot)\in L_{\infty}(\Omega)^{d\times d}\) is real, symmetric with
\[\xi^{\top}A(\cdot)\xi\eqsim\|\xi\|^{2}\quad(\xi\in\mathbb{R}^{d}).\]
We assume that the standard variational formulation of (2.1) for the case of homogeneous Dirichlet boundary conditions is well-posed, i.e., with \(H^{1}_{0,\Gamma_{D}}(\Omega):=\{v\in H^{1}(\Omega)\colon\gamma_{D}v=0\}\), where \(\gamma_{D}\) is the trace operator on \(\Gamma_{D}\), the operator
(C.3) \[G:=w\mapsto(v\mapsto\int_{\Omega}A\nabla w\cdot\nabla v+Bw\,v\,dx)\in \mathcal{L}\mathrm{is}\big{(}H^{1}_{0,\Gamma_{D}}(\Omega),H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}\big{)}.\]
(In the case that \(\Gamma_{D}=\emptyset\), it can be needed, as when \(B=0\), to replace \(H^{1}_{0,\Gamma_{D}}(\Omega)=H^{1}(\Omega)\) by \(H^{1}(\Omega)/\mathbb{R}\). For simplicity, we do not consider this situation.)
With this standard variational formulation, the Neumann boundary condition is natural, and the Dirichlet boundary condition is essential. We are ready to give the first example of a MINRES discretisation.
**Example 2.2** (2nd order weak formulation).: Let \(g\in H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}\) and \(h_{N}\in H^{-\frac{1}{2}}(\Gamma_{N})=H^{\frac{1}{2}}_{00}(\Gamma_{N})^{\prime}\), where \(H^{\frac{1}{2}}_{00}(\Gamma_{N})=[L_{2}(\Gamma_{N}),H^{1}_{0}(\Gamma_{N})]_{ \frac{1}{2},2}\), so that consequently \(f:=v\mapsto g(v)+\int_{\Gamma_{N}}h_{N}v\,ds\in H^{1}_{0,\Gamma_{D}}(\Omega)^ {\prime}\).
(i). Let \(h_{D}=0\) (or \(\Gamma_{D}=\emptyset\)). For any finite dimensional subspace \(X^{\delta}\subset H^{1}_{0,\Gamma_{D}}(\Omega)\), (C.3) shows that a quasi-optimal MINRES approximation to the solution of (2.1) is
\[u^{\delta}:=\operatorname*{argmin}_{w\in X^{\delta}}\tfrac{1}{2}\|Gw-f\|^{2}_{ H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}}.\]
(ii). Let \(0\neq h_{D}\in H^{\frac{1}{2}}(\Gamma_{D})\). By surjectivity of \(\gamma_{D}\in\mathcal{L}\big{(}H^{1}(\Omega),H^{\frac{1}{2}}(\Gamma_{D})\big{)}\), Lemma 2.1 shows that \((G,\gamma_{D})\in\mathcal{L}\mathrm{is}\big{(}H^{1}(\Omega),H^{1}_{0,\Gamma_{ D}}(\Omega)^{\prime}\times H^{\frac{1}{2}}(\Gamma_{D})\big{)}\), so that for any finite dimensional subspace \(X^{\delta}\subset H^{1}(\Omega)\),
\[u^{\delta}:=\operatorname*{argmin}_{w\in X^{\delta}}\tfrac{1}{2}\big{(}\|Gw-f\|^{2}_{H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}}+\|\gamma_{D}w-h_{D}\|^{2}_{H^{\frac{1}{2}}(\Gamma_{D})}\big{)}\]
is a quasi-optimal MINRES approximation to the solution of (2.1).
Introducing \(\vec{p}=A\nabla u\), for the remaining examples we consider the reformulation of (2.1) as the first order system
\[\left\{\begin{array}{rl}\vec{p}-A\nabla u=0&\text{ on }\Omega,\\ Bu-\operatorname{div}\vec{p}=g&\text{ on }\Omega,\\ u=h_{D}&\text{ on }\Gamma_{D},\\ \vec{p}\cdot\vec{n}=h_{N}&\text{ on }\Gamma_{N}.\end{array}\right. \tag{2.2}\]
By measuring the residuals of the first two equations in (2.2) in the 'mild' \(L_{2}(\Omega)\)-sense, we obtain the following first order system MINRES or FOSLS discretisation. Both Dirichlet and Neumann boundary conditions are essential ones.
**Example 2.3** (mild formulation).: Let \(g\in L_{2}(\Omega)\).
(i). Let \(h_{D}=0\) (or \(\Gamma_{D}=\emptyset\)), and \(h_{N}=0\) (or \(\Gamma_{N}=\emptyset\)). As shown in [16, Thm. 3.1], the operator
\[G:=(\vec{q},w) \mapsto(\vec{q}-A\nabla w,Bw-\operatorname{div}\vec{q})\] \[\in\mathcal{L}\mathrm{is}\big{(}H_{0,\Gamma_{N}}(\operatorname{ div};\Omega)\times H^{1}_{0,\Gamma_{D}}(\Omega),L_{2}(\Omega)^{d}\times L_{2}( \Omega)\big{)},\]
and so for any finite dimensional subspace \(X^{\delta}\subset H_{0,\Gamma_{N}}(\operatorname{div};\Omega)\times H^{1}_{0, \Gamma_{D}}(\Omega)\),
\[(\vec{p}^{\delta},u^{\delta}):=\operatorname*{argmin}_{(\vec{q},w)\in X^{ \delta}}\tfrac{1}{2}\|G(\vec{q},w)-(0,g)\|^{2}_{L_{2}(\Omega)^{d}\times L_{2}( \Omega)}\]
is a quasi-optimal MINRES approximation to the solution of (2.2).
(ii). Let \(0\neq h_{D}\in H^{\frac{1}{2}}(\Gamma_{D})\) and \(0\neq h_{N}\in H^{-\frac{1}{2}}(\Gamma_{N})\).2
Footnote 2: The cases where only one of \(h_{D}\) and \(h_{N}\) is non-zero cause no additional difficulties.
From the surjectivity of the pair of normal trace and trace operators on \(\Gamma_{N}\) and \(\Gamma_{D}\), respectively,
\[(\gamma_{N},\gamma_{D})\in\mathcal{L}(H(\operatorname{div};\Omega)\times H^{1 }(\Omega),H^{-\frac{1}{2}}(\Gamma_{N})\times H^{\frac{1}{2}}(\Gamma_{D})),\]
Lemma 2.1 shows that for any finite dimensional subspace \(X^{\delta}\subset H(\operatorname{div};\Omega)\times H^{1}(\Omega)\),
\[(\vec{p}^{\delta},u^{\delta}):=\operatorname*{argmin}_{(\vec{q},w)\in X^{ \delta}}\tfrac{1}{2}\big{(}\|G(\vec{q},w)-(0,g)\|^{2}_{L_{2}(\Omega)^{d}\times L _{2}(\Omega)}\] \[\qquad\qquad\qquad\qquad\qquad+\|\gamma_{N}\vec{q}-h_{N}\|^{2}_{ H^{-\frac{1}{2}}(\Gamma_{N})}+\|\gamma_{D}w-h_{D}\|^{2}_{H^{\frac{1}{2}}(\Gamma_{D})} \big{)}\]
is a quasi-optimal MINRES approximation to the solution of (2.2).
Among the known MINRES formulations of (2.1), the formulation from Example 2.3(i) (so for homogeneous boundary conditions) is the only one that is 'practical' because the residual is minimized in \(L_{2}\)-norm. A disadvantage of this mild formulation is that it only applies to a forcing term \(g\in L_{2}(\Omega)\), whilst measuring the error in \(\vec{p}=A\nabla u\) in the \(H(\text{div};\Omega)\)-norm instead of the more natural \(L_{2}(\Omega)^{d}\)-norm requires additional smoothness of \(u\) to guarantee a certain convergence rate.
These disadvantages vanish in the following mild-weak formulation, which, however, in unmodified form is impractical. Another approach to overcome the disadvantages of the mild formulation, which is presented in [10, 11], is to replace in the least squares minimization the forcing term \(g\) by a finite element approximation. Later in Remark 4.7, we discuss this idea in detail.
In the following mild-weak formulation the second equation in (2.2) is imposed in an only weak sense. It has the consequence that the Neumann boundary condition is a natural one.
**Example 2.4** (mild-weak formulation).: Let \(g\in H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}\) and \(h_{N}\in H^{-\frac{1}{2}}(\Gamma_{N})\), so that \(f:=v\mapsto g(v)+\int_{\Gamma_{N}}h_{N}v\,ds\in H^{1}_{0,\Gamma_{D}}(\Omega)^ {\prime}\).
(i). Let \(h_{D}=0\) (or \(\Gamma_{D}=\emptyset\)). As shown in [1], the operator
\[G=(G_{1},G_{2}):=(\vec{q},w)\mapsto\big{(}\vec{q}-A\nabla w,v\mapsto\int_{ \Omega}\vec{q}\cdot\nabla v+Bw\,v\,dx\big{)}\]
satisfies
\[\|G(\vec{q},w)\|_{L_{2}(\Omega)^{d}\times H^{1}_{0,\Gamma_{D}}(\Omega)^{ \prime}}\eqsim\|(\vec{q},w)\|_{L_{2}(\Omega)^{d}\times H^{1}(\Omega)} \tag{2.3}\]
\(((\vec{q},w)\in L_{2}(\Omega)^{d}\times H^{1}_{0,\Gamma_{D}}(\Omega))\). It remains to verify surjectivity. Given \((\vec{r},f)\in L_{2}(\Omega)^{d}\times H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}\), (C.3) shows that there exists a \(w\in H^{1}_{0,\Gamma_{D}}(\Omega)\) with
\[\int_{\Omega}A\nabla w\cdot\nabla v+Bw\,v\,dx=f(v)-\int_{\Omega}\vec{r}\cdot \nabla v\,dx\quad(v\in H^{1}_{0,\Gamma_{D}}(\Omega)).\]
With \(\vec{q}:=\vec{r}+A\nabla w\), we conclude that \(G(\vec{q},w)=(\vec{r},f)\). Surjectivity with (2.3) implies that \(G\in\mathcal{L}\text{is}\big{(}L_{2}(\Omega)^{d}\times H^{1}_{0,\Gamma_{D}}( \Omega),L_{2}(\Omega)^{d}\times H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}\big{)}\). So for any finite dimensional subspace \(X^{\delta}\subset L_{2}(\Omega)^{d}\times H^{1}_{0,\Gamma_{D}}(\Omega)\),
\[(\vec{p}^{\delta},u^{\delta}):=\operatorname*{argmin}_{(\vec{q},w)\in X^{ \delta}}\tfrac{1}{2}\big{(}\|G_{1}(\vec{q},w)\|^{2}_{L_{2}(\Omega)^{d}}+\|G_{ 2}(\vec{q},w)-f\|^{2}_{H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}}\big{)}\]
is a quasi-optimal MINRES approximation to the solution of (2.2).
(ii). Let \(0\neq h_{D}\in H^{\frac{1}{2}}(\Gamma_{D})\). From \(L_{2}(\Omega)^{d}\times H^{1}(\Omega)\to H^{\frac{1}{2}}(\Gamma_{D})\colon(\vec{q},w)\mapsto\gamma_{D}w\) being surjective, Lemma 2.1 shows that for any finite dimensional subspace \(X^{\delta}\subset L_{2}(\Omega)^{d}\times H^{1}(\Omega)\),
\[(\vec{p}^{\delta},u^{\delta}):=\operatorname*{argmin}_{(\vec{q},w )\in X^{\delta}}\tfrac{1}{2}\big{(}\|G_{1}(\vec{q},w)\|^{2}_{L_{2}(\Omega)^{d }} +\|G_{2}(\vec{q},w)-f\|^{2}_{H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}}\] \[+\|\gamma_{D}w-h_{D}\|^{2}_{H^{\frac{1}{2}}(\Gamma_{D})}\big{)}\]
is a quasi-optimal MINRES approximation to the solution of (2.2).
Finally, by imposing both the first and second equation in (2.2) in a weak sense we obtain the ultra-weak formulation. In order to do so, first we specify the operator \(B\) from (C.1) to \(B:=w\mapsto\vec{b}\cdot\nabla w+cw\) for some \(\vec{b}\in L_{\infty}(\Omega)^{d}\) and \(c\in L_{\infty}(\Omega)\), and, to avoid additional smoothness conditions on \(\vec{b}\), write the second equation in (2.2) as \(\vec{b}\cdot A^{-1}\vec{p}+cu-\operatorname{div}\vec{p}=g\).
**Example 2.5** (ultra-weak formulation).: Let \(h_{D}\in H^{\frac{1}{2}}(\Gamma_{D})\), so that \(f_{1}:=\vec{z}\mapsto\int_{\Gamma_{D}}h_{D}\vec{z}\cdot\vec{n}\,ds\in H_{0, \Gamma_{N}}(\operatorname{div};\Omega)^{\prime}\), and let \(g\in H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}\), and \(h_{N}\in H^{-\frac{1}{2}}(\Gamma_{N})\), so that \(f_{2}:=v\mapsto g(v)+\int_{\Gamma_{N}}h_{N}v\,ds\in H^{1}_{0,\Gamma_{D}}(\Omega )^{\prime}\). As shown in [12, Thm. 3.3],
\[G:=(\vec{q},w)\mapsto\Big{(}\vec{z}\mapsto\int_{\Omega}A^{-1} \vec{q}\cdot\vec{z}+w\operatorname{div}\vec{z}\,dx,v\mapsto\int_{\Omega}(\vec{ b}\cdot A^{-1}\vec{q}+cw)v+\vec{q}\cdot\nabla v\,dx\Big{)}\\ \in\mathcal{L}\text{is}\big{(}L_{2}(\Omega)^{d}\times L_{2}( \Omega),H_{0,\Gamma_{N}}(\operatorname{div};\Omega)^{\prime}\times H^{1}_{0, \Gamma_{D}}(\Omega)^{\prime}\big{)}.\]
Consequently, for any finite dimensional subspace \(X^{\delta}\subset L_{2}(\Omega)^{d}\times L_{2}(\Omega)\),
\[(\vec{p}^{\delta},u^{\delta}):=\operatorname*{argmin}_{(\vec{q},w)\in X^{ \delta}}\tfrac{1}{2}\|G(\vec{q},w)-(f_{1},f_{2})\|^{2}_{H_{0,\Gamma_{N}}( \operatorname{div};\Omega)^{\prime}\times H^{1}_{0,\Gamma_{D}}(\Omega)^{ \prime}}\]
is a quasi-optimal MINRES approximation to the solution of (2.2).
## 3. Turning an impractical MINRES formulation into a practical one
### Dealing with a dual norm
In our examples, the MINRES discretisations are of the form
\[u^{\delta}:=\operatorname*{argmin}_{z\in X^{\delta}}\tfrac{1}{2}\big{(}\sum_{ i=1}^{k}\|G_{i}z-f_{i}\|_{Y_{i}^{\prime}}^{2}+\sum_{i=k+1}^{m}\|G_{i}z-f_{i}\|_{Y_{i} }^{2}\big{)} \tag{3.1}\]
with \(0\leq k\leq m\), \(m\geq 1\), Hilbert spaces \(X\) and \((Y_{i})_{1\leq i\leq m}\), \(G=(G_{i})_{1\leq i\leq m}\in\mathcal{L}\text{is}(X,Y_{1}^{\prime}\times\cdots \times Y_{k}^{\prime}\times Y_{k+1}\times\cdots\times Y_{m})\), and a finite dimensional subspace \(X^{\delta}\subset X\), and where, for \(1\leq i\leq k\), the spaces \(Y_{i}\) are such that the Riesz map \(Y_{i}^{\prime}\to Y_{i}\) cannot be efficiently evaluated (i.e., \(Y_{i}\) is not an \(L_{2}\)-space).
In Examples 2.2(ii), 2.3(ii), and 2.4(ii), we furthermore encountered a residual component that was measured in \(\|\cdot\|_{H^{\frac{1}{2}}(\Gamma_{D})}\), which norm cannot be efficiently evaluated. By writing \(\|\cdot\|_{H^{\frac{1}{2}}(\Gamma_{D})}=\|\cdot\|_{\vec{H}^{-\frac{1}{2}}( \Gamma_{D})^{\prime}}\), where \(\tilde{H}^{-\frac{1}{2}}(\Gamma_{D}):=H^{\frac{1}{2}}(\Gamma_{D})^{\prime}\), and handling analogously for all Sobolev norms with positive fractional orders, we may assume that all _non-dual_ norms \(\|\cdot\|_{Y_{i}}\) in (3.1) are efficiently evaluable.
_Remark 3.1_.: A previously proposed approach to deal with \(\|\cdot\|_{H^{\frac{1}{2}}(\Gamma_{D})}\) is to replace it by an efficiently evaluable semi-norm that on a selected finite element subspace is equivalent to \(\|\cdot\|_{H^{\frac{1}{2}}(\Gamma_{D})}\) (see [13]). The so modified least squares functional is then only equivalent to the original one modulo a data-oscillation term, so that quasi-optimality is not guaranteed.
The dual norms \(\|\cdot\|_{Y_{i}^{\prime}}\) for \(1\leq i\leq k\) in (3.1) cannot be evaluated, which makes the discretisation (3.1) impractical. To solve this, we will select finite dimensional
subspaces \(Y^{\delta}_{i}=Y^{\delta}_{i}(X^{\delta})\subset Y_{i}\) such that
\[\gamma^{\delta}_{i}:=\inf_{\{z\in X^{\delta}:\,G_{i}z\neq 0\}}\frac{\sup_{0\neq y_{i}\in Y^{\delta}_{i}}\frac{|(G_{i}z)(y_{i})|}{\|y_{i}\|_{Y_{i}}}}{\|G_{i}z\|_{Y_{i}^{\prime}}}>0, \tag{3.2}\]
and _replace_ the MINRES discretisation (3.1) by
\[u^{\delta}:=\operatorname*{argmin}_{z\in X^{\delta}}\tfrac{1}{2}\big{(}\sum_{ i=1}^{k}\sup_{0\neq y_{i}\in Y^{\delta}_{i}}\frac{|(G_{i}z-f_{i})(y_{i})|^{2}}{ \|y_{i}\|_{Y_{i}}^{2}}+\sum_{i=k+1}^{m}\|G_{i}z-f_{i}\|_{Y_{i}}^{2}\big{)}. \tag{3.3}\]
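In matrix terms each term in (3.3) with \(i\leq k\) is an ordinary weighted Euclidean norm of a residual vector: if \(\Phi_{i}\) is a basis of \(Y_{i}^{\delta}\) with Gram matrix \(\mathbf{A}_{i}\) w.r.t. \(\langle\cdot,\cdot\rangle_{Y_{i}}\), and \(\mathbf{r}\) collects the values \((G_{i}z-f_{i})(\phi)\) for \(\phi\in\Phi_{i}\), then the supremum equals \(\mathbf{r}^{\top}\mathbf{A}_{i}^{-1}\mathbf{r}\). The following sketch, with an arbitrary small SPD matrix standing in for the Gram matrix, illustrates this identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # Gram matrix of a basis of Y_i^delta (SPD)
r = rng.standard_normal(n)           # residual vector (G_i z - f_i)(Phi_i)

dual_sq = r @ np.linalg.solve(A, r)  # discretised dual norm squared: r^T A^{-1} r

# |r^T c|^2 / (c^T A c) over random coefficient vectors c never exceeds this value,
# and the maximiser c = A^{-1} r attains it
samples = max((r @ c) ** 2 / (c @ A @ c) for c in rng.standard_normal((2000, n)))
c_star = np.linalg.solve(A, r)
print(dual_sq, (r @ c_star) ** 2 / (c_star @ A @ c_star), samples <= dual_sq + 1e-12)
```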
To analyze (3.3), for notational convenience in the remainder of this subsection for \(k+1\leq i\leq m\) we rewrite \(\|G_{i}z-f_{i}\|_{Y_{i}}^{2}\) as \(\|R_{i}^{-1}(G_{i}z-f_{i})\|_{Y_{i}^{\prime}}^{2}\), where \(R_{i}\in\mathcal{L}\text{is}(Y^{\prime}_{i},Y_{i})\) is the Riesz map defined by \(f(v)=\langle R_{i}f,v\rangle_{Y_{i}}\). Redefining, for \(k+1\leq i\leq m\), \(G_{i}:=R_{i}^{-1}G_{i}\) and \(f_{i}:=R_{i}^{-1}f_{i}\), and setting \(Y^{\delta}_{i}:=Y_{i}\) (so that \(\gamma^{\delta}_{i}=1\)), with \(G:=(G_{1},\ldots,G_{m})\), \(f:=(f_{1},\ldots,f_{m})\), \(Y^{\delta}:=Y^{\delta}_{1}\times\cdots\times Y^{\delta}_{m}\), \(Y:=Y_{1}\times\cdots\times Y_{m}\), the solution of (3.3) is equivalently given by
\[u^{\delta}:=\operatorname*{argmin}_{z\in X^{\delta}}\tfrac{1}{2}\sup_{0\neq y \in Y^{\delta}}\frac{|(Gz-f)(y)|^{2}}{\|y\|_{Y}^{2}}. \tag{3.4}\]
With the newly defined \((G_{i})_{k+1\leq i\leq m}\), we have \(G\in\mathcal{L}\text{is}(X,Y^{\prime})\).
**Lemma 3.2**.: _With \(G\) and \((\gamma^{\delta}_{i})_{1\leq i\leq m}\) defined above, and_
\[\gamma^{\delta}:=\inf_{\{z\in X^{\delta}:\,Gz\neq 0\}}\frac{\sup_{0\neq y \in Y^{\delta}}\frac{|(Gz)(y)|}{\|y\|_{Y}}}{\|Gz\|_{Y^{\prime}}},\]
_it holds that \(\gamma^{\delta}\geq\min_{1\leq i\leq m}\gamma^{\delta}_{i}\)._
Proof.: For each \(z\in X^{\delta}\), for \(1\leq i\leq m\) there exists a \(y_{i}\in Y^{\delta}_{i}\) with \(\|y_{i}\|_{Y_{i}}=\|G_{i}z\|_{Y^{\prime}_{i}}\) and \((G_{i}z)(y_{i})\geq\gamma^{\delta}_{i}\|G_{i}z\|_{Y^{\prime}_{i}}^{2}\). So for \(y:=(y_{i})_{1\leq i\leq m}\in Y^{\delta}\),
\[(Gz)(y)=\sum_{i=1}^{m}G_{i}(z)(y_{i})\geq\min_{1\leq i\leq m}\gamma^{\delta}_ {i}\sum_{i=1}^{m}\|G_{i}z\|_{Y^{\prime}_{i}}^{2}=\min_{1\leq i\leq m}\gamma^{ \delta}_{i}\|Gz\|_{Y^{\prime}}\|y\|_{Y},\]
which completes the proof.
**Theorem 3.3**.: _Let \(\gamma^{\delta}>0\). Setting \(|\!|\!|\cdot|\!|\!|_{X}:=\|G\cdot\|_{Y^{\prime}}\), for \(u=G^{-1}f\) and \(u^{\delta}\) from (3.4), it holds that_
\[\inf_{u\in X\setminus X^{\delta}}\frac{\inf_{w\in X^{\delta}}|\!|\!|u-w|\!|\!|_{X}}{|\!|\!|u-u^{\delta}|\!|\!|_{X}}\ =\ \gamma^{\delta}, \tag{3.5}\]
_and so_
\[\|u-u^{\delta}\|_{X}\leq\frac{\|G\|_{\mathcal{L}(X,Y^{\prime})}\|G^{-1}\|_{ \mathcal{L}(Y^{\prime},X)}}{\gamma^{\delta}}\inf_{w\in X^{\delta}}\|u-w\|_{X}. \tag{3.6}\]
Proof.: First we recall from [1, Prop. 2.2] (building on the seminal work [1]), that the MINRES discretisation (3.4) can equivalently be written as a Petrov-Galerkin discretisation: With \(R^{\delta}\in\mathcal{L}\text{is}(Y^{\delta^{\prime}},Y^{\delta})\) defined by \(f(v)=\langle R^{\delta}f,v\rangle_{Y}\), we have
\[\tfrac{1}{2}\sup_{0\neq y\in Y^{\delta}}\frac{|(Gz-f)(y)|^{2}}{\|y\|_{Y}^{2}}= \tfrac{1}{2}\sup_{0\neq y\in Y^{\delta}}\frac{\langle R^{\delta}(Gz-f),y\rangle _{Y}^{2}}{\|y\|_{Y}^{2}}=\tfrac{1}{2}\|R^{\delta}(Gz-f)\|_{Y}^{2},\]
so that (3.4) is equivalent to finding \(u^{\delta}\in X^{\delta}\) that satisfies
\[0=\langle R^{\delta}(Gu^{\delta}-f),R^{\delta}Gw\rangle=(Gu^{\delta}-f)(R^{ \delta}Gw)\quad(w\in X^{\delta}).\]
Splitting \(Y^{\delta}\) into the test space \(\operatorname{ran}R^{\delta}G|_{X^{\delta}}\) and its orthogonal complement, one infers that for any \(y\) in the latter space and \(z\in X^{\delta}\), it holds that \((Gz)(y)=0\), so that \(\sup_{0\neq y\in Y^{\delta}}\frac{|(Gz)(y)|}{\|y\|_{Y}}=\sup_{0\neq y\in \operatorname{ran}R^{\delta}G|_{X^{\delta}}}\frac{|(Gz)(y)|}{\|y\|_{Y}}\), and thus that the value of \(\gamma^{\delta}\) does not change when the space \(Y^{\delta}\) in its definition is replaced by \(\operatorname{ran}R^{\delta}G|_{X^{\delta}}\).
Using that with \(X\) being equipped with \(|\!|\!|\cdot|\!|\!|_{X}\), \(G\in\mathcal{L}\mathrm{is}(X,Y^{\prime})\) is an isometry, an application of [12, Remark 3.2] or [13, Sect. 2.1] concerning Petrov-Galerkin discretisations shows (3.5). The final result follows easily.
### Saddle-point formulation
Considering (3.1), notice that the solution \(u\in X\) of \(Gu=f\) is equivalently given as
\[u:=\operatorname*{argmin}_{z\in X}\tfrac{1}{2}\big{(}\sum_{i=1}^{k}\|G_{i}z-f_ {i}\|_{Y_{i}^{\prime}}^{2}+\sum_{i=k+1}^{m}\|G_{i}z-f_{i}\|_{Y_{i}}^{2}\big{)}.\]
This \(u\) solves the Euler-Lagrange equations
\[\sum_{i=1}^{k}\langle f_{i}-G_{i}u,G_{i}\underline{u}\rangle_{Y_{i}^{\prime}}+ \sum_{i=k+1}^{m}\langle f_{i}-G_{i}u,G_{i}\underline{u}\rangle_{Y_{i}}=0\quad (\underline{u}\in X).\]
For \(1\leq i\leq k\) we set \(\lambda_{i}:=R_{i}(f_{i}-G_{i}u)\). Using that \(\langle g,h\rangle_{Y_{i}^{\prime}}=\langle R_{i}g,R_{i}h\rangle_{Y_{i}}\), we arrive at the equivalent problem of finding \((\lambda_{1},\ldots,\lambda_{k},u)\in Y_{1}\times\cdots\times Y_{k}\times X\) that solves
\[\sum_{i=1}^{k}\langle\lambda_{i},\underline{\lambda}_{i}\rangle_{Y_{i}}+\sum_{i=1}^{k}(G_{i}u)(\underline{\lambda}_{i}) =\sum_{i=1}^{k}f_{i}(\underline{\lambda}_{i})\quad((\underline{\lambda}_{1},\ldots,\underline{\lambda}_{k})\in Y_{1}\times\cdots\times Y_{k}),\] \[\sum_{i=1}^{k}(G_{i}\underline{u})(\lambda_{i})-\sum_{i=k+1}^{m}\langle G_{i}u,G_{i}\underline{u}\rangle_{Y_{i}} =-\sum_{i=k+1}^{m}\langle f_{i},G_{i}\underline{u}\rangle_{Y_{i}}\quad(\underline{u}\in X).\]
Completely analogously, the MINRES solution \(u^{\delta}\in X^{\delta}\) of (3.3) is the last component of the solution \((\lambda_{1}^{\delta},\ldots,\lambda_{k}^{\delta},u^{\delta})\in Y_{1}^{ \delta}\times\cdots\times Y_{k}^{\delta}\times X^{\delta}\) that solves the finite dimensional saddle-point
\[\begin{split}&\sum_{i=1}^{k}\langle\lambda_{i}^{\delta},\underline{\lambda}_{i}\rangle_{Y_{i}}+\sum_{i=1}^{k}(G_{i}u^{\delta})(\underline{\lambda}_{i})=\sum_{i=1}^{k}f_{i}(\underline{\lambda}_{i})\quad((\underline{\lambda}_{1},\ldots,\underline{\lambda}_{k})\in Y_{1}^{\delta}\times\cdots\times Y_{k}^{\delta}),\\ &\sum_{i=1}^{k}(G_{i}\underline{u})(\lambda_{i}^{\delta})-\sum_{i=k+1}^{m}\langle G_{i}u^{\delta},G_{i}\underline{u}\rangle_{Y_{i}}=-\sum_{i=k+1}^{m}\langle f_{i},G_{i}\underline{u}\rangle_{Y_{i}}\quad(\underline{u}\in X^{\delta}).\end{split} \tag{3.7}\]
Solving this saddle-point can provide a way to determine \(u^{\delta}\) computationally.
### Reduction to a symmetric positive definite system
It may however happen that one or more scalar products \(\langle\cdot,\cdot\rangle_{Y_{i}}\) on the finite dimensional subspaces \(Y_{i}^{\delta}\) for \(1\leq i\leq k\) are not (efficiently) evaluable, as when \(Y_{i}\) is a fractional Sobolev space. Even when all these scalar products are evaluable, solving a saddle point problem as (3.7) is more costly than solving a symmetric positive definite system as with a usual 'practical' MINRES discretisation, where typically all residual components are measured in \(L_{2}\)-norms.
Therefore, for \(1\leq i\leq k\), let \(K_{i}^{\delta}=K_{i}^{\delta^{\prime}}\in\mathcal{L}\mathrm{is}(Y_{i}^{\delta^{ \prime}},Y_{i}^{\delta})\) be an operator whose application can be computed efficiently. Such an operator could be called a preconditioner for \(A_{i}^{\delta}\in\mathcal{L}\mathrm{is}(Y_{i}^{\delta},Y_{i}^{\delta^{\prime}})\) defined by \((A_{i}^{\delta}v)(\underline{v})=\langle v,\underline{v}\rangle_{Y_{i}}\). We use \(K_{i}^{\delta}\) to define the following alternative scalar product on \(Y_{i}^{\delta}\),
\[\langle v,\underline{v}\rangle_{Y_{i}^{\delta}}:=((K_{i}^{\delta})^{-1}v)( \underline{v})\quad(v,\underline{v}\in Y_{i}^{\delta}),\]
whose corresponding norm \(\|\cdot\|_{Y_{i}^{\delta}}\) satisfies
\[\lambda_{\min}(K_{i}^{\delta}A_{i}^{\delta})\|\cdot\|_{Y_{i}^{\delta}}^{2}\leq \|\cdot\|_{Y_{i}}^{2}\leq\lambda_{\max}(K_{i}^{\delta}A_{i}^{\delta})\|\cdot\| _{Y_{i}^{\delta}}^{2}. \tag{3.8}\]
_Remark 3.4_.: Given a basis \(\Phi_{i}\) for \(Y_{i}^{\delta}\), with \(\mathcal{F}_{i}:=g\mapsto g(\Phi_{i})\in\mathcal{L}\mathrm{is}(Y_{i}^{\delta^{ \prime}},\mathbb{R}^{\#\Phi_{i}})\), and so \(\mathcal{F}_{i}^{\prime}\colon\mathbf{w}\mapsto\mathbf{w}^{\top}\Phi_{i}\in \mathcal{L}\mathrm{is}(\mathbb{R}^{\#\Phi_{i}},Y_{i}^{\delta})\), \(\mathbf{A}_{i}:=\mathcal{F}_{i}A_{i}^{\delta}\mathcal{F}_{i}^{\prime}\) is known as a stiffness matrix. Given some symmetric positive definite \(\mathbf{K}_{i}\eqsim\mathbf{A}_{i}^{-1}\), which is more appropriately called a preconditioner, setting \(K_{i}^{\delta}:=\mathcal{F}_{i}^{\prime}\mathbf{K}_{i}\mathcal{F}_{i}\) gives \(\sigma(K_{i}^{\delta}A_{i}^{\delta})=\sigma(\mathbf{K}_{i}\mathbf{A}_{i})\).
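In this notation the constants appearing in (3.8) (and in Theorem 3.5 below) are the extreme eigenvalues of \(\mathbf{K}_{i}\mathbf{A}_{i}\). A sketch with a generic SPD matrix and a Jacobi (diagonal) preconditioner standing in for \(\mathbf{A}_{i}\) and \(\mathbf{K}_{i}\):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # stiffness matrix A_i
K = np.diag(1.0 / np.diag(A))          # a simple (Jacobi) preconditioner ~ A_i^{-1}

# K A is similar to the symmetric matrix K^{1/2} A K^{1/2}, so its spectrum is real positive
lams = np.sort(np.linalg.eigvals(K @ A).real)
lam_min, lam_max = lams[0], lams[-1]
print("lambda_min = %.3f, lambda_max = %.3f, M/m = %.3f"
      % (lam_min, lam_max, lam_max / lam_min))
```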
We now _replace_ (3.3) by
\[u^{\delta}:=\operatorname*{argmin}_{z\in X^{\delta}}\tfrac{1}{2}\big{(}\sum_{ i=1}^{k}\sup_{0\neq y_{i}\in Y_{i}^{\delta}}\frac{|(G_{i}z-f_{i})(y_{i})|^{2}}{ \|y_{i}\|_{Y_{i}^{\delta}}^{2}}+\sum_{i=k+1}^{m}\|G_{i}z-f_{i}\|_{Y_{i}}^{2} \big{)},\]
which is a _fully practical_ MINRES discretisation. Indeed by making the corresponding replacement of \(\langle\lambda_{i}^{\delta},\underline{\lambda}_{i}\rangle_{Y_{i}}\) by \(((K_{i}^{\delta})^{-1}\lambda_{i}^{\delta})(\underline{\lambda}_{i})\) in (3.7), and subsequently eliminating \(\lambda_{1}^{\delta},\ldots,\lambda_{k}^{\delta}\) from the resulting system, one infers that this latter \(u^{\delta}\) can be computed as the solution in \(X^{\delta}\) of the symmetric positive definite system
\[\sum_{i=1}^{k}(G_{i}\underline{u})(K_{i}^{\delta}(G_{i}u^{\delta}-f_{i}))+\sum _{i=k+1}^{m}\langle G_{i}\underline{u},G_{i}u^{\delta}-f_{i}\rangle_{Y_{i}}=0 \quad(\underline{u}\in X^{\delta}). \tag{3.9}\]
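At the algebraic level the passage from (3.7) to (3.9) is the elimination of the Lagrange multipliers by a Schur complement. The sketch below uses one dual-norm block (\(k=1\)) and generic matrices standing in for the Galerkin data, and takes \(K_{1}^{\delta}=(A_{1}^{\delta})^{-1}\) so that both systems have the same solution; replacing \(\mathbf{A}_{1}^{-1}\) by a preconditioner \(\mathbf{K}_{1}\) gives the matrix form of (3.9), which in practice one would solve with the preconditioned conjugate gradient method.

```python
import numpy as np

rng = np.random.default_rng(2)
nY, nX = 8, 5                                    # dim Y_1^delta and dim X^delta

M = rng.standard_normal((nY, nY))
A1 = M @ M.T + nY * np.eye(nY)                   # Gram matrix of Y_1^delta
B1 = rng.standard_normal((nY, nX))               # B1[l, j] = (G_1 x_j)(phi_l)
N = rng.standard_normal((nX, nX))
C = N.T @ N                                      # block from the i > k terms: <G_i x_l, G_i x_j>_{Y_i}
f1 = rng.standard_normal(nY)                     # f_1(phi_l)
g = rng.standard_normal(nX)                      # sum_{i>k} <f_i, G_i x_j>_{Y_i}

# saddle point (3.7):  [A1  B1; B1^T  -C] [lam; u] = [f1; -g]
S = np.block([[A1, B1], [B1.T, -C]])
u_saddle = np.linalg.solve(S, np.concatenate([f1, -g]))[nY:]

# reduced SPD system (3.9) with K1 = A1^{-1}:  (B1^T K1 B1 + C) u = B1^T K1 f1 + g
K1 = np.linalg.inv(A1)
u_spd = np.linalg.solve(B1.T @ K1 @ B1 + C, B1.T @ K1 @ f1 + g)

print(np.allclose(u_saddle, u_spd))              # True
```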
**Theorem 3.5**.: _Let \(\gamma^{\delta}>0\). Then with \(M^{\delta}:=\max\big{(}1,\max_{1\leq i\leq k}\lambda_{\max}(K_{i}^{\delta}A_{ i}^{\delta})\big{)}\), \(m^{\delta}:=\min\big{(}1,\min_{1\leq i\leq k}\lambda_{\min}(K_{i}^{\delta}A_{ i}^{\delta})\big{)}\), \(u^{\delta}\) from (3.9) satisfies_
\[\|u-u^{\delta}\|_{X}\leq\tfrac{M^{\delta}}{m^{\delta}}\tfrac{\|G\|_{\mathcal{L}(X,Y^{\prime})}\|G^{-1}\|_{\mathcal{L}(Y^{\prime},X)}}{\gamma^{\delta}}\inf_{w\in X^{\delta}}\|u-w\|_{X}.\]
Proof.: By writing, for \(1\leq i\leq k\), \(Y_{i}\) as the sum of \(Y_{i}^{\delta}\) and its \(\langle\cdot,\cdot\rangle_{Y_{i}}\)-orthogonal complement, and by replacing the \(\|\cdot\|_{Y_{i}}\)-norm on the \(Y_{i}^{\delta}\)-component by the \(\|\cdot\|_{Y_{i}^{\delta}}\)-norm, and with that modifying the norm on \(Y_{i}\), the MINRES solution \(u^{\delta}\) from (3.9) is of the form of the MINRES solution from (3.3), and so the bound (3.6) from Theorem 3.3 can be applied. Since we have modified the norm on \(Y_{i}\), and thus the one on \(Y\), when doing so we must interpret the factors \(\|G\|_{\mathcal{L}(X,Y^{\prime})}\), \(\|G^{-1}\|_{\mathcal{L}(Y^{\prime},X)}\), and \(\gamma^{\delta}\) in the upper bound in (3.6) w.r.t. the modified norm on \(Y\). Denoting this modified norm by \(|\!|\!|\cdot|\!|\!|_{Y}\), from (3.8) it follows that
\[m^{\delta}|\!|\!|\cdot|\!|\!|_{Y}^{2}\leq\|\cdot\|_{Y}^{2}\leq M^{\delta}|\!|\!|\cdot|\!|\!|_{Y}^{2}.\]
By a straightforward application of these inequalities to derive upper bounds for these modified factors in terms of the original ones, one completes the proof.
Notice that Theorem 3.5 generalizes (3.6) from Theorem 3.3 (indeed, take \(K_{i}^{\delta}=(A_{i}^{\delta})^{-1}\)), which in turn generalized (1.2) (take \(Y_{i}^{\delta}=Y_{i}\)).
The bilinear form \((w,\tilde{w})\mapsto\sum_{i=1}^{k}(G_{i}\tilde{w})(K_{i}^{\delta}G_{i}w)+\sum_{i=k+1}^{m}\langle G_{i}\tilde{w},G_{i}w\rangle_{Y_{i}}\) on \(X\times X\) is symmetric, bounded (with constant \(M^{\delta}\|G\|_{\mathcal{L}(X,Y^{\prime})}^{2}\)), and, restricted to \(X^{\delta}\times X^{\delta}\), coercive (with constant \(\frac{(m^{\delta})^{2}}{M^{\delta}}\|G^{-1}\|_{\mathcal{L}(Y^{\prime},X)}^{-2}(\gamma^{\delta})^{2}\)). The way to solve (3.9) is by the application of the preconditioned conjugate gradient method, for some self-adjoint preconditioner in \(\mathcal{L}\mathrm{is}(X^{\delta^{\prime}},X^{\delta})\).
### Fortin interpolators and a posteriori error estimation
As is well known, validity of the inf-sup condition \(\gamma_{i}^{\delta}>0\) in (3.2) is equivalent to existence of a Fortin interpolator. The following formulation from [11, Prop. 5.1] gives a precise quantitative statement; moreover, it does not require injectivity of \(G_{i}\), which is not guaranteed in our applications.
**Theorem 3.6**.: _Let \(G_{i}\in\mathcal{L}(X,Y_{i}^{\prime})\). Assuming \(G_{i}X^{\delta}\neq\{0\}\) and \(Y_{i}^{\delta}\neq\{0\}\), let_
\[\Pi_{i}^{\delta}\in\mathcal{L}(Y_{i},Y_{i}^{\delta})\text{ with }(G_{i}X^{\delta}) \big{(}(\mathrm{Id}-\Pi_{i}^{\delta})Y_{i}\big{)}=0. \tag{3.10}\]
_Then \(\gamma_{i}^{\delta}\geq\|\Pi_{i}^{\delta}\|_{\mathcal{L}(Y_{i},Y_{i})}^{-1}\)._
_Conversely, when \(\gamma_{i}^{\delta}>0\), then there exists a \(\Pi_{i}^{\delta}\) as in (3.10), being even a projector onto \(Y_{i}^{\delta}\), with \(\|\Pi_{i}^{\delta}\|_{\mathcal{L}(Y_{i},Y_{i})}^{-1}=\gamma_{i}^{\delta}\)._
As mentioned in the introduction, an advantage of a MINRES discretisation is that the norm of the residual is an efficient and reliable a posteriori estimator of the norm of the error. In the setting (3.1), where \(G\in\mathcal{L}\mathrm{is}(X,V)\) with \(V=Y_{1}^{\prime}\times\cdots\times Y_{k}^{\prime}\times Y_{k+1}\times\cdots \times Y_{m}\), and so, when \(k>0\), one or more components of the residual are measured in dual norms, this a posteriori estimator is not computable. To arrive at a practical MINRES discretisation, we have replaced these dual norms by computable discretised dual norms, and nevertheless ended up with quasi-optimal approximations (see Theorem 3.5). When it comes to a posteriori error estimation, however, there is some price to be paid. As we will see below, our computable a posteriori estimator will only be reliable modulo a data-oscillation term. A similar analysis in the context of DPG methods can already be found in [10].
Let \(w\in X^{\delta}\). Then
\[\|u-w\|_{X}\in\big{[}\|G\|_{\mathcal{L}(X,V)}^{-1}\|f-Gw\|_{V},\|G^{-1}\|_{\mathcal{L}(V,X)}\|f-Gw\|_{V}\big{]}, \tag{3.11}\]
where
\[\|f-Gw\|_{V}^{2}=\sum_{i=1}^{k}\|f_{i}-G_{i}w\|_{Y_{i}^{\prime}}^{2}+\sum_{i=k +1}^{m}\|f_{i}-G_{i}w\|_{Y_{i}}^{2}.\]
For \(1\leq i\leq k\), let \(\Pi_{i}^{\delta}\) be a valid Fortin interpolator. Then for \(\tilde{y}_{i}\in Y_{i}\),
\[\begin{split}|(f_{i}-G_{i}w)(\tilde{y}_{i})|&\leq|(f_{i}-G_{i}w)(\Pi_{i}^{\delta}\tilde{y}_{i})|+|f_{i}((\mathrm{Id}-\Pi_{i}^{\delta})\tilde{y}_{i})|\\ &\leq\|\Pi_{i}^{\delta}\tilde{y}_{i}\|_{Y_{i}^{\delta}}\sup_{0\neq y_{i}\in Y_{i}^{\delta}}\frac{|(f_{i}-G_{i}w)(y_{i})|}{\|y_{i}\|_{Y_{i}^{\delta}}}+\|(\mathrm{Id}-\Pi_{i}^{\delta^{\prime}})f_{i}\|_{Y_{i}^{\prime}}\|\tilde{y}_{i}\|_{Y_{i}}\\ &\leq\Big{(}\|\Pi_{i}^{\delta}\|_{\mathcal{L}(Y_{i},Y_{i})}\lambda_{\min}(K_{i}^{\delta}A_{i}^{\delta})^{-\frac{1}{2}}\sup_{0\neq y_{i}\in Y_{i}^{\delta}}\frac{|(f_{i}-G_{i}w)(y_{i})|}{\|y_{i}\|_{Y_{i}^{\delta}}}+\|(\mathrm{Id}-\Pi_{i}^{\delta^{\prime}})f_{i}\|_{Y_{i}^{\prime}}\Big{)}\|\tilde{y}_{i}\|_{Y_{i}}.\end{split} \tag{3.12}\]
From (3.11)-(3.12) one easily infers the upper bound for \(\|u-w\|_{X}^{2}\) given in the following proposition, whereas the derivation of the lower bound is easier.
**Proposition 3.7**.: _For \(w\in X^{\delta}\), the computable (squared) estimator_
\[\mathcal{E}^{\delta}(w,f)^{2}:=\sum_{i=1}^{k}\sup_{0\neq y_{i}\in Y_{i}^{\delta }}\frac{|(f_{i}-G_{i}w)(y_{i})|^{2}}{\|y_{i}\|_{Y_{i}^{\delta}}^{2}}+\sum_{i=k+ 1}^{m}\|f_{i}-G_{i}w\|_{Y_{i}}^{2}\]
_satisfies_
\[\|G\|_{\mathcal{L}(X,V)}^{-2}\min\big{(}1,\min_{1\leq i\leq k} \lambda_{\max}(K_{i}^{\delta}A_{i}^{\delta})^{-1}\big{)}\mathcal{E}^{\delta}( w,f)^{2}\leq\|u-w\|_{X}^{2}\leq\] \[\|G^{-1}\|_{\mathcal{L}(V,X)}^{2}\max\big{(}1,2\max_{1\leq i\leq k }\lambda_{\min}(K_{i}^{\delta}A_{i}^{\delta})^{-1}\|\Pi_{i}^{\delta}\|_{ \mathcal{L}(Y_{i},Y_{i})}^{2}\big{)}\mathcal{E}^{\delta}(w,f)^{2}\] \[+2\|G^{-1}\|_{\mathcal{L}(V,X)}^{2}\sum_{i=1}^{k}\|(\operatorname {Id}-\Pi_{i}^{\delta^{\prime}})f_{i}\|_{Y_{i}^{\prime}}^{2}.\]
_Remark 3.8_ (Bounding the oscillation term).: By taking \(\Pi_{i}^{\delta}\) to be the Fortin projector with \(\|\Pi_{i}^{\delta}\|_{\mathcal{L}(Y_{i},Y_{i})}=1/\gamma_{i}^{\delta}\), for \(\{0\}\subsetneq Y_{i}^{\delta}\subsetneq Y_{i}\) it holds that
\[\|(\operatorname{Id}-\Pi_{i}^{\delta^{\prime}})f_{i}\|_{Y_{i}^{\prime}}=\sup_ {0\neq y_{i}\in Y_{i}}\frac{|f_{i}((\operatorname{Id}-\Pi_{i}^{\delta})y_{i}) |}{\|y_{i}\|_{Y_{i}}}=\]
\[\sup_{0\neq y_{i}\in Y_{i}}\inf_{0\neq w\in X^{\delta}}\frac{|G_{i}(u-w)((\operatorname{Id}-\Pi_{i}^{\delta})y_{i})|}{\|y_{i}\|_{Y_{i}}}\leq\frac{1}{\gamma_{i}^{\delta}}\|G_{i}\|_{\mathcal{L}(X,Y_{i}^{\prime})}\inf_{0\neq w\in X^{\delta}}\|u-w\|_{X},\]
and so
\[\operatorname{osc}^{\delta}(f):=\sqrt{\sum_{i=1}^{k}\|(\operatorname{Id}-\Pi _{i}^{\delta})^{\prime}f_{i}\|_{Y_{i}^{\prime}}^{2}}\leq\|G\|_{\mathcal{L}(X,V )}\sqrt{\sum_{i=1}^{k}\frac{1}{\gamma_{i}^{2}}}\inf_{0\neq w\in X^{\delta}}\|u -w\|_{X}.\]
In other words, the data-oscillation is bounded by a multiple of the best approximation error.
It would be even better when, for \(1\leq i\leq k\), \(Y_{i}^{\delta}\) is chosen such that it allows for the construction of a (uniformly bounded) Fortin interpolator \(\Pi_{i}^{\delta}\) for which, for general, sufficiently smooth \(u\) and \(f\), \(\operatorname{osc}^{\delta}(f)\) is of higher order than \(\inf_{0\neq w\in X^{\delta}}\|u-w\|_{X}\). In that case, besides being efficient, the estimator \(\mathcal{E}^{\delta}(w,f)\) can be expected to be asymptotically reliable as well.
_Remark 3.9_ (Computing \(\mathcal{E}^{\delta}(u^{\delta},f)\)).: If \(w=u^{\delta}\) is the MINRES solution from (3.7), then the term \(\sup_{0\neq y_{i}\in Y_{i}^{\delta}}\frac{|(f_{i}-G_{i}u^{\delta})(y_{i})|^{2} }{\|y_{i}\|_{Y_{i}^{\delta}}^{2}}\) in the expression for \(\mathcal{E}^{\delta}(u^{\delta},f)^{2}\) is equal to \(\|\lambda_{i}^{\delta}\|_{Y_{i}}^{2}\).
If \(w=u^{\delta}\) is the MINRES solution from the symmetric positive definite system (3.9), then \(\sup_{0\neq y_{i}\in Y_{i}^{\delta}}\frac{|(f_{i}-G_{i}u^{\delta})(y_{i})|^{2} }{\|y_{i}\|_{Y_{i}^{\delta}}^{2}}\) is equal to \((G_{i}u^{\delta}-f_{i})(K_{i}^{\delta}(G_{i}u^{\delta}-f_{i}))\).
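In coefficient form, each dual-norm contribution in Remark 3.9 is just \(\mathbf{r}_{i}^{\top}\mathbf{K}_{i}\mathbf{r}_{i}\), with \(\mathbf{r}_{i}\) the vector of the residual tested against the basis of \(Y_{i}^{\delta}\). A sketch (ours, in the notation of the placeholder matrices used above):

```python
import numpy as np

def estimator_squared(u, dual_terms, plain_terms):
    """Computable squared estimator of Proposition 3.7.

    dual_terms : list of (B, K, f) for i = 1..k; the residual r = f - B @ u is
                 measured in the discretised dual norm r^T K r.
    plain_terms: list of (C, M, g) for i = k+1..m; measured in the Y_i Gram norm.
    """
    total = 0.0
    for B, K, f in dual_terms:
        r = f - B @ u
        total += r @ (K @ r)
    for C, M, g in plain_terms:
        r = g - C @ u
        total += r @ (M @ r)
    return total
```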
## 4. Verification of the inf-sup conditions
By constructing Fortin interpolators \(\Pi_{i}\) for the MINRES examples from Sect. 2, we verify the inf-sup conditions \(\gamma_{i}>0\), which, for finite element spaces of given
fixed orders, will hold uniformly over uniformly shape regular, possibly locally refined partitions.
If \((G_{i}X^{\delta})\big{(}(\operatorname{Id}-\Pi_{i}^{\delta})Y_{i}\big{)}=0\), then this obviously also holds when \(X^{\delta}\) is replaced by a subspace. Consequently, for Examples 2.2, 2.3, and 2.4, it suffices to consider Case (ii).
### Inf-sup conditions for Example 2.2(ii) (2nd order formulation)
We assume that \(\Omega\subset\mathbb{R}^{d}\) is a polytope, and let \(\mathcal{T}^{\delta}\) be a conforming, shape regular partition of \(\Omega\) into (closed) \(d\)-simplices. With \(\mathcal{F}(\mathcal{T}^{\delta})\) we denote the set of (closed) facets of \(K\in\mathcal{T}^{\delta}\). We assume that \(\Gamma_{D}\) is the union of some \(e\in\mathcal{F}(\mathcal{T}^{\delta})\). For \(K\in\mathcal{T}^{\delta}\), we set the patches \(\omega_{K,0}(\mathcal{T}^{\delta}):=K\), and \(\omega_{K,i+1}(\mathcal{T}^{\delta}):=\cup\{K^{\prime}\in\mathcal{T}^{\delta} \colon K^{\prime}\cap\omega_{K,i}(\mathcal{T}^{\delta})\neq\varnothing\}\). Let \(h_{\delta}\) be the piecewise constant function on \(\Omega\) defined by \(h_{\delta}|_{K}:=|K|^{1/d}\). Focussing on the case of having inhomogeneous Dirichlet boundary conditions on \(\Gamma_{D}\), i.e., Ex. 2.2(ii), we take
\[X^{\delta}=\mathcal{S}_{p}^{0}(\mathcal{T}^{\delta}):=\mathcal{S}_{p}^{-1}( \mathcal{T}^{\delta})\cap C(\Omega), \tag{4.1}\]
with \(\mathcal{S}_{p}^{-1}(\mathcal{T}^{\delta})\) being the space of \(f\colon\Omega\to\mathbb{R}\) such that for \(K\in\mathcal{T}^{\delta}\), \(f|_{K}\in\mathcal{P}_{p}(K)\), being the space of polynomials of maximal degree \(p\).
We take \(A=\operatorname{Id}\), although the arguments given below apply equally when \(A\) is piecewise constant w.r.t. \(\mathcal{T}^{\delta}\). For convenience, we take \(B=0\), but the case of \(B\) being a PDO of first order with piecewise constant coefficients w.r.t. \(\mathcal{T}^{\delta}\) poses no additional difficulties.3
Footnote 3: It suffices to take \(Y_{1}^{\delta}:=\mathcal{S}_{p+d+1}^{0}(\mathcal{T}^{\delta})\cap H^{1}_{0, \Gamma_{D}}(\Omega)\)
Considering the original 'impractical' MINRES discretisation (2.1), as discussed before we write the term \(\|\gamma_{D}w-h_{D}\|_{H^{\frac{1}{2}}(\Gamma_{D})}^{2}\) as \(\|\gamma_{D}w-h_{D}\|_{\widetilde{H}^{-\frac{1}{2}}(\Gamma_{D})^{\prime}}^{2}\). For constructing a MINRES discretisation of type (3.3) that is quasi-optimal, it therefore suffices to select finite dimensional subspaces
\[Y_{1}^{\delta}\subset Y_{1}=H^{1}_{0,\Gamma_{D}}(\Omega),\quad Y_{2}^{\delta} \subset Y_{2}=\widetilde{H}^{-\frac{1}{2}}(\Gamma_{D})\]
that allow for the construction of Fortin interpolators \(\Pi_{1}^{\delta}\in\mathcal{L}(H^{1}_{0,\Gamma_{D}}(\Omega),Y_{1}^{\delta})\), and \(\Pi_{2}^{\delta}\in\mathcal{L}(\widetilde{H}^{-\frac{1}{2}}(\Gamma_{D}),Y_{2} ^{\delta})\) with
\[\int_{\Omega}\nabla w\cdot\nabla(\operatorname{Id}-\Pi_{1}^{\delta})v\,dx=0\quad(w\in X^{\delta},\,v\in H^{1}_{0,\Gamma_{D}}(\Omega)), \tag{4.2}\] \[\int_{\Gamma_{D}}w(\operatorname{Id}-\Pi_{2}^{\delta})v\,ds=0\quad(w\in X^{\delta},\,v\in\widetilde{H}^{-\frac{1}{2}}(\Gamma_{D})). \tag{4.3}\]
Starting with (4.2), we rewrite it as
\[0=\sum_{K\in\mathcal{T}^{\delta}}\big{\{}\int_{K}-\Delta w(\operatorname{Id}- \Pi_{1}^{\delta})v\,dx+\int_{\partial K}\frac{\partial w}{\partial\widetilde{ H}}(\operatorname{Id}-\Pi_{1}^{\delta})v\,ds\big{\}}\quad(w\in X^{\delta},\,v\in H^{1}_{0, \Gamma_{D}}(\Omega)),\]
and select
\[Y_{1}^{\delta}:=\mathcal{S}_{p+d-1}^{0}(\mathcal{T}^{\delta})\cap H^{1}_{0, \Gamma_{D}}(\Omega). \tag{4.4}\]
It suffices to construct \(\Pi_{1}^{\delta}\in\mathcal{L}(H^{1}_{0,\Gamma_{D}}(\Omega),Y_{1}^{\delta})\) such that both
\[\operatorname{ran}(\operatorname{Id}-\Pi_{1}^{\delta})|_{e}\perp_{L_{2}(e)} \mathcal{P}_{p-1}(e)\quad(e\in\mathcal{F}(\mathcal{T}^{\delta})), \tag{4.5}\]
and, when \(p>1\),
\[\operatorname{ran}(\operatorname{Id}-\Pi^{\delta}_{1})|_{K}\perp_{L_{2}(K)}\mathcal{P}_{p-2}(K)\quad(K\in\mathcal{T}^{\delta}). \tag{4.6}\]
Let \(\hat{\Pi}^{\delta}_{1}\colon H^{1}_{0,\Gamma_{D}}(\Omega)\to S^{0}_{1}(\mathcal{T}^{\delta})\cap H^{1}_{0,\Gamma_{D}}(\Omega)\) denote the familiar Scott-Zhang interpolator ([10]). It satisfies
\[\|h^{-1}_{\delta}(\operatorname{Id}-\hat{\Pi}^{\delta}_{1})v\|_{L_{2}(K)}+|\hat{\Pi}^{\delta}_{1}v|_{H^{1}(K)}\lesssim|v|_{H^{1}(\omega_{K,1}(\mathcal{T}^{\delta}))}\quad(v\in H^{1}_{0,\Gamma_{D}}(\Omega)).\]
In two steps we correct \(\hat{\Pi}^{\delta}_{1}\) to a \(\Pi^{\delta}_{1}\in\mathcal{L}(H^{1}_{0,\Gamma_{D}}(\Omega),Y^{\delta}_{1})\) that satisfies (4.5)-(4.6).
On a facet \(\hat{\varrho}\) of a reference \(d\)-simplex \(\hat{K}\), let \(b_{\hat{\varrho}}\) denote the \(d\)-fold product of its barycentric coordinates. From \(\int_{\hat{\varrho}}b_{\hat{\varrho}}|q|^{2}\,ds\eqsim\int_{\hat{\varrho}}|q|^{2}\,ds\,(q\in\mathcal{P}_{p-1}(\hat{\varrho}))\), and \(b_{\hat{\varrho}}\mathcal{P}_{p-1}(\hat{\varrho})=\mathcal{P}_{p+d-1}(\hat{\varrho})\cap H^{1}_{0}(\hat{\varrho})\), one infers that there exist bases \(\{\hat{\psi}_{i}\}_{i}\) and \(\{\hat{\ell}_{i}\}_{i}\) of \(\mathcal{P}_{p+d-1}(\hat{\varrho})\cap H^{1}_{0}(\hat{\varrho})\) and \(\mathcal{P}_{p-1}(\hat{\varrho})\) that are \(L_{2}(\hat{\varrho})\)-biorthogonal. We extend each \(\hat{\psi}_{i}\) to a function in \(\mathcal{P}_{p+d-1}(\hat{K})\cap H^{1}_{0,\partial\hat{K}\setminus\operatorname{int}(\hat{\varrho})}(\hat{K})\), again denoted by \(\hat{\psi}_{i}\).
By using affine bijections between \(\hat{K}\) and \(K\in\mathcal{T}^{\delta}\), for each \(e\in\mathcal{F}(\mathcal{T}^{\delta})\) we lift \(\{\hat{\ell}_{i}\}_{i}\) to a collection \(\{\ell_{e,i}\}_{i}\) that spans \(\mathcal{P}_{p-1}(e)\), and lift \(\{\hat{\psi}_{i}\}_{i}\) to a collection \(\{\psi_{e,i}\}_{i}\subset Y^{\delta}_{1}\) of functions supported on the union of the two (or one) simplices in \(\mathcal{T}^{\delta}\) of which \(e\) is a facet. We set
\[\check{\Pi}^{\delta}_{1}v:=\hat{\Pi}^{\delta}_{1}v+\sum_{e\in\mathcal{F}(\mathcal{T}^{\delta})}\sum_{i}\frac{\langle v-\hat{\Pi}^{\delta}_{1}v,\ell_{e,i}\rangle_{L_{2}(e)}}{\langle\psi_{e,i},\ell_{e,i}\rangle_{L_{2}(e)}}\psi_{e,i}\quad(v\in H^{1}_{0,\Gamma_{D}}(\Omega)).\]
From \(\langle\psi_{e,i},\ell_{e,j}\rangle_{L_{2}(e)}=0\) when \(i\neq j\), it follows that
\[\operatorname{ran}(\operatorname{Id}-\check{\Pi}^{\delta}_{1})|_{e}\perp_{L_{ 2}(e)}\mathcal{P}_{p-1}(e)\qquad(e\in\mathcal{F}(\mathcal{T}^{\delta})). \tag{4.7}\]
Standard homogeneity arguments and the use of the trace inequality show that
\[\|h^{-1}_{\delta}(\operatorname{Id}-\check{\Pi}^{\delta}_{1})v\|_{L_{2}(K)}+ |\check{\Pi}^{\delta}_{1}v|_{H^{1}(K)}\lesssim|v|_{H^{1}(\omega_{K,2}(\mathcal{ T}^{\delta}))}\quad(v\in H^{1}_{0,\Gamma_{D}}(\Omega)).\]
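The facet correction above only requires, on the reference facet, a pair of \(L_{2}(\hat{\varrho})\)-biorthogonal bases of \(b_{\hat{\varrho}}\mathcal{P}_{p-1}(\hat{\varrho})\) and \(\mathcal{P}_{p-1}(\hat{\varrho})\). For \(d=2\), where \(\hat{\varrho}\) is the unit interval and \(b_{\hat{\varrho}}(x)=x(1-x)\), such a pair can be generated numerically as in the following sketch (ours, starting from monomials and using Gauss-Legendre quadrature):

```python
import numpy as np

p = 3                                    # polynomial degree of the trial space
x, wq = np.polynomial.legendre.leggauss(2 * p + 4)
x, wq = 0.5 * (x + 1.0), 0.5 * wq        # quadrature rule on the reference facet [0, 1]

bubble = x * (1.0 - x)                   # b_varrho: product of the facet's barycentric coordinates
ell = np.array([x**i for i in range(p)])               # basis of P_{p-1}: 1, x, ..., x^{p-1}
psi = bubble * ell                       # basis of b * P_{p-1} = P_{p+1} cap H^1_0(0, 1)

gram = (psi * wq) @ ell.T                # gram[i, j] = <psi_i, ell_j>_{L2(0,1)}
psi_bi = np.linalg.solve(gram, psi)      # psi_bi_i = sum_k (gram^{-1})_{ik} psi_k

assert np.allclose((psi_bi * wq) @ ell.T, np.eye(p)), "biorthogonality failed"
```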
For the case that \(p=1\), we take \(\Pi^{\delta}_{1}=\check{\Pi}^{\delta}_{1}\). Otherwise we proceed as follows. Let \(b_{\hat{K}}\) denote the \((d+1)\)-fold product of the barycentric coordinates of \(\hat{K}\). From \(\int_{\hat{K}}b_{\hat{K}}|q|^{2}\,dx\eqsim\int_{\hat{K}}|q|^{2}\,dx\) (\(q\in\mathcal{P}_{p-2}(\hat{K})\)), and \(b_{\hat{K}}\mathcal{P}_{p-2}(\hat{K})=\mathcal{P}_{p+d-1}(\hat{K})\cap H^{1}_{0}(\hat{K})\), one infers that there exist bases \(\{\hat{\phi}_{k}\}_{k}\) and \(\{\hat{q}_{k}\}_{k}\) of \(\mathcal{P}_{p+d-1}(\hat{K})\cap H^{1}_{0}(\hat{K})\) and \(\mathcal{P}_{p-2}(\hat{K})\) that are \(L_{2}(\hat{K})\)-biorthogonal.
Again using the affine bijections between \(\hat{K}\) and \(K\in\mathcal{T}^{\delta}\), for each \(K\in\mathcal{T}^{\delta}\) we lift \(\{\hat{\phi}_{k}\}_{k}\) and \(\{\hat{q}_{k}\}_{k}\) to collections \(\{\phi_{K,k}\}_{k}\) and \(\{q_{K,k}\}_{k}\) that span \(\mathcal{P}_{p+d-1}(K)\cap H^{1}_{0}(K)\) and \(\mathcal{P}_{p-2}(K)\), respectively. We set
\[\Pi^{\delta}_{1}v:=\check{\Pi}^{\delta}_{1}v+\sum_{K\in\mathcal{T}^{\delta}} \sum_{k}\frac{\langle v-\check{\Pi}^{\delta}_{1}v,q_{K,k}\rangle_{L_{2}(K)}}{ \langle\phi_{K,k},q_{K,k}\rangle_{L_{2}(K)}}\phi_{K,k}\quad(v\in H^{1}_{0, \Gamma_{D}}(\Omega)).\]
Thanks to (4.7), it satisfies (4.5), and from \(\langle\phi_{K,k},q_{K,k^{\prime}}\rangle_{L_{2}(K)}=0\) when \(k\neq k^{\prime}\), one infers that it satisfies (4.6). From
\[\|h^{-1}_{\delta}(\operatorname{Id}-\Pi^{\delta}_{1})v\|_{L_{2}(K)}+|\Pi^{\delta} _{1}v|_{H^{1}(K)}\lesssim|v|_{H^{1}(\omega_{K,2}(\mathcal{T}^{\delta}))}\quad(v \in H^{1}_{0,\Gamma_{D}}(\Omega)),\]
we conclude the following result.
**Proposition 4.1**.: _For \(X^{\delta}\) and \(Y^{\delta}_{1}\) from (4.1) and (4.4), it holds that \(\Pi^{\delta}_{1}\in\mathcal{L}(H^{1}_{0,\Gamma_{D}}(\Omega),Y^{\delta}_{1})\),4 and (4.2) is valid._
Footnote 4: _Uniformly_ in all \(\mathcal{T}^{\delta}\) that satisfy a uniform shape regularity condition.
In view of a posteriori error estimation, we consider the data-oscillation term associated to \(\Pi^{\delta}_{1}\) (actually a slightly modified operator). We show that it is of higher order than \(\inf_{w\in X^{\delta}}\|u-w\|_{H^{1}(\Omega)}\) (cf. Remark 3.8) when we take the larger space \(Y^{\delta}_{1}=\mathcal{S}^{0}_{p+d}(\mathcal{T}^{\delta})\cap H^{1}_{0, \Gamma_{D}}(\Omega)\).
_Remark 4.2_ (data-oscillation).: With \(\check{P}^{\delta}_{1}:=v\mapsto\sum_{e\in\mathcal{F}(\mathcal{T}^{\delta})}\sum_{i}\frac{\langle v,\ell_{e,i}\rangle_{L_{2}(e)}}{\langle\psi_{e,i},\ell_{e,i}\rangle_{L_{2}(e)}}\psi_{e,i}\), and \(P^{\delta}_{1}:=v\mapsto\sum_{K\in\mathcal{T}^{\delta}}\sum_{k}\frac{\langle v,q_{K,k}\rangle_{L_{2}(K)}}{\langle\phi_{K,k},q_{K,k}\rangle_{L_{2}(K)}}\phi_{K,k}\), it holds that \(\check{\Pi}^{\delta}_{1}=\hat{\Pi}^{\delta}_{1}+\check{P}^{\delta}_{1}(\operatorname{Id}-\hat{\Pi}^{\delta}_{1})\), and \(\Pi^{\delta}_{1}=\check{\Pi}^{\delta}_{1}+P^{\delta}_{1}(\operatorname{Id}-\check{\Pi}^{\delta}_{1})\), so that \(\operatorname{Id}-\Pi^{\delta}_{1}=(\operatorname{Id}-P^{\delta}_{1})(\operatorname{Id}-\check{P}^{\delta}_{1})(\operatorname{Id}-\hat{\Pi}^{\delta}_{1})\), and so
\[\operatorname{Id}-\Pi^{\delta^{\prime}}_{1}=(\operatorname{Id}-\hat{\Pi}^{ \delta^{\prime}}_{1})(\operatorname{Id}-\check{P}^{\delta^{\prime}}_{1})( \operatorname{Id}-P^{\delta^{\prime}}_{1}).\]
We now replace the Scott-Zhang interpolator \(\hat{\Pi}^{\delta}_{1}\) by the interpolator onto \(S^{0}_{1}(\mathcal{T}^{\delta})\cap H^{1}_{0,\Gamma_{D}}(\Omega)\) from [13, DST21] (or the adjoint of \(P_{\mathcal{T}}\) from [11, Thm. 3.2]), which does not affect the validity of Proposition 4.1. This new \(\hat{\Pi}^{\delta}_{1}\) additionally satisfies \(\|(\operatorname{Id}-\hat{\Pi}^{\delta^{\prime}}_{1})f_{1}\|_{Y^{\prime}_{1}}\lesssim\|h_{\delta}f_{1}\|_{L_{2}(\Omega)}\) (\(f_{1}\in L_{2}(\Omega)\)). By using this estimate together with the stability and locality of \(\check{P}^{\delta}_{1}\) and \(P^{\delta}_{1}\), and the fact that \(P^{\delta^{\prime}}_{1}\) reproduces \(\mathcal{S}^{-1}_{p-1}(\mathcal{T}^{\delta})\) (instead of \(\mathcal{S}^{-1}_{p-2}(\mathcal{T}^{\delta})\) for \(Y^{\delta}_{1}=\mathcal{S}^{0}_{p+d-1}(\mathcal{T}^{\delta})\cap H^{1}_{0,\Gamma_{D}}(\Omega)\)), one infers that
\[\|(\operatorname{Id}-\Pi^{\delta^{\prime}}_{1})f_{1}\|_{H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}}\lesssim\sqrt{\sum_{K\in\mathcal{T}^{\delta}}(h_{\delta}|_{K})^{2(p+1)}|f_{1}|^{2}_{H^{p}(K)}}\quad(f_{1}\in H^{p}(\Omega)).\qed\]
To construct the Fortin interpolator \(\Pi^{\delta}_{2}\), with \(\mathcal{F}^{\delta}_{\Gamma_{D}}:=\{e\in\mathcal{F}(\mathcal{T}^{\delta})\colon e \subset\Gamma_{D}\}\) we take
\[Y^{\delta}_{2}:=\mathcal{S}^{-1}_{p}(\mathcal{F}^{\delta}_{\Gamma_{D}}). \tag{4.8}\]
With \(\{\phi^{\delta}_{i}\}\) being the nodal basis of \(\mathcal{S}^{0}_{p}(\mathcal{F}^{\delta}_{\Gamma_{D}})\supset\operatorname{ran}\gamma_{D}|_{X^{\delta}}\), it is known that a projector \(P^{\delta}_{2}\) of Scott-Zhang type exists of the form \(P^{\delta}_{2}v=\sum_{i}\langle v,\psi^{\delta}_{i}\rangle_{L_{2}(\Gamma_{D})}\phi^{\delta}_{i}\), where \(\{\psi^{\delta}_{i}\}\subset Y^{\delta}_{2}\) is biorthogonal to \(\{\phi^{\delta}_{i}\}\), \(P^{\delta}_{2}\) is bounded in \(L_{2}(\Gamma_{D})\) and in \(H^{1}(\Gamma_{D})\), and
\[\|(\operatorname{Id}-P^{\delta}_{2})f_{2}\|_{H^{\frac{1}{2}}(\Gamma_{D})}\lesssim\sqrt{\sum_{e\in\mathcal{F}^{\delta}_{\Gamma_{D}}}(h_{\delta}|_{e})^{2p+1}|f_{2}|^{2}_{H^{p+1}(e)}}\quad(f_{2}\in H^{p+1}(\Gamma_{D})). \tag{4.9}\]
Since \(\Pi^{\delta}_{2}:={P^{\delta}_{2}}^{\prime}\) maps into \(Y^{\delta}_{2}\), and \(P^{\delta}_{2}\) reproduces \(\mathcal{S}^{0}_{p}(\mathcal{F}^{\delta}_{\Gamma_{D}})\), we conclude the following result.
**Proposition 4.3**.: _For \(X^{\delta}\) and \(Y^{\delta}_{2}\) from (4.1) and (4.8), it holds that \(\Pi^{\delta}_{2}\in\mathcal{L}(\widetilde{H}^{-\frac{1}{2}}(\Gamma_{D}),Y^{\delta}_{2})\),4 and (4.3) is valid._
Footnote 4: _Uniformly_ in all \(\mathcal{T}^{\delta}\) that satisfy a uniform shape regularity condition.
_Remark 4.4_ (data-oscillation).: Equation (4.9) shows that the data-oscillation term corresponding to \(\Pi^{\delta}_{2}\) is of higher order than the best approximation error.
### Inf-sup conditions for Example 2.3(ii) (mild formulation)
We take
\[X^{\delta}:=RT_{p-1}(\mathcal{T}^{\delta})\times\mathcal{S}^{0}_{p}(\mathcal{T}^{ \delta}), \tag{4.10}\]
where \(RT_{p-1}(\mathcal{T}^{\delta})=RT_{p-1}^{-1}(\mathcal{T}^{\delta})\cap H(\operatorname{div};\Omega)\) and \(RT_{p-1}^{-1}(\mathcal{T}^{\delta})=\{\vec{q}\in L_{2}(\Omega)^{d}\colon\vec{q}|_{K}\in\mathcal{P}_{p-1}(K)^{d}+\vec{x}\mathcal{P}_{p-1}(K)\}\). The term \(\|\gamma_{D}w-h_{D}\|_{H^{\frac{1}{2}}(\Gamma_{D})}^{2}=\|\gamma_{D}w-h_{D}\|_{\widetilde{H}^{-\frac{1}{2}}(\Gamma_{D})^{\prime}}^{2}\) can be handled as in Example 2.2. The dual norm can be discretized by replacing \(\widetilde{H}^{-\frac{1}{2}}(\Gamma_{D})\) by \(\mathcal{S}^{-1}_{p}(\mathcal{F}^{\delta}_{\Gamma_{D}})\).
Considering the term \(\|\gamma_{N}\vec{q}-h_{N}\|_{H^{-\frac{1}{2}}(\Gamma_{N})}^{2}\), using that \(\operatorname{ran}\gamma_{N}|_{RT_{p-1}(\mathcal{T}^{\delta})}=\mathcal{S}^{-1}_{p-1}(\mathcal{F}^{\delta}_{\Gamma_{N}})\), one needs to select a finite dimensional subspace \(Y_{1}^{\delta}\subset Y_{1}=H^{\frac{1}{2}}_{00}(\Gamma_{N})\) that allows for the construction of a Fortin interpolator \(\Pi_{1}^{\delta}\in\mathcal{L}(H^{\frac{1}{2}}_{00}(\Gamma_{N}),Y_{1}^{\delta})\) with
\[\int_{\Gamma_{N}}w(\operatorname{Id}-\Pi_{1}^{\delta})v\,ds=0\quad(w\in \mathcal{S}^{-1}_{p-1}(\mathcal{F}^{\delta}_{\Gamma_{N}}),\,v\in H^{\frac{1}{ 2}}_{00}(\Gamma_{N})). \tag{4.11}\]
We take
\[Y_{1}^{\delta}:=\mathcal{S}^{0}_{p+d-1}(\mathcal{F}^{\delta}_{\Gamma_{N}})\cap H ^{1}_{0}(\Gamma_{N}), \tag{4.12}\]
and follow a somewhat simplified version of the construction of \(\Pi_{1}^{\delta}\) in Sect. 4.1. Let \(\hat{\Pi}_{1}^{\delta}\) be a modified Scott-Zhang projector onto \(\mathcal{S}^{0}_{1}(\mathcal{F}^{\delta}_{\Gamma_{N}})\cap H^{1}_{0}(\Gamma_{N})\) from [1]. For \(e\in\mathcal{F}^{\delta}_{\Gamma_{N}}\), we can find \(\{\phi_{e,k}\}\) and \(\{q_{e,k}\}\), which up to a scaling are \(L_{2}(e)\)-biorthogonal, and that span \(\mathcal{P}_{d+p-1}(e)\cap H^{1}_{0}(e)\) and \(\mathcal{P}_{p-1}(e)\), respectively, such that for \(\Pi_{1}^{\delta}\) defined by
\[\Pi_{1}^{\delta}v:=\hat{\Pi}_{1}^{\delta}v+\sum_{e\in\mathcal{F}^{\delta}_{\Gamma_{N}}}\sum_{k}\frac{\langle v-\hat{\Pi}_{1}^{\delta}v,q_{e,k}\rangle_{L_{2}(e)}}{\langle\phi_{e,k},q_{e,k}\rangle_{L_{2}(e)}}\phi_{e,k},\]
the following result is valid.
**Proposition 4.5**.: _For \(X^{\delta}\) and \(Y_{1}^{\delta}\) from (4.10) and (4.12), it holds that \(\Pi_{1}^{\delta}\in\mathcal{L}(H^{\frac{1}{2}}_{00}(\Gamma_{N}),Y_{1}^{\delta})\),4 and (4.11) is valid._
_Remark 4.6_ (data-oscillation).: It holds that
\[\|(\operatorname{Id}-\Pi_{1}^{\delta^{\prime}})f_{1}\|_{H^{-\frac{1}{2}}(\Gamma_{N})}\lesssim\sqrt{\sum_{e\in\mathcal{F}^{\delta}_{\Gamma_{N}}}(h_{\delta}|_{e})^{2p+1}|f_{1}|_{H^{p}(e)}^{2}}\quad(f_{1}\in H^{p}(\Gamma_{N})),\]
so the data-oscillation term corresponding to \(\Pi_{1}^{\delta}\) is of higher order than the best approximation error.
_Remark 4.7_ (Avoidance of the condition \(g\in L_{2}(\Omega)\)).: Consider the mild formulation with homogeneous boundary data \(h_{D}=0\) and \(h_{N}=0\) (i.e., Example 2.3(i)), so that \(G(\vec{q},w)=(\vec{q}-A\nabla w,Bw-\operatorname{div}\vec{q})\). As noticed before, a disadvantage of this formulation is that it requires a forcing term \(g\in L_{2}(\Omega)\). As shown in [11, 12], assuming \(B=0\) this condition can be circumvented by replacing a general \(g\in H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}\) by a finite element approximation, resulting in a MINRES method that is quasi-optimal in the weaker \(L_{2}(\Omega)^{d}\times H^{1}(\Omega)\)-norm. The analysis in [12] was restricted to the lowest order case, and below we generalise it to finite element approximation of general degree.
For
\[X^{\delta}:=\big{(}RT_{p-1}(\mathcal{T}^{\delta})\cap H_{0,\Gamma_{N}}(\mathrm{div}; \Omega)\big{)}\times\big{(}\mathcal{S}^{0}_{p}(\mathcal{T}^{\delta})\cap H^{1}_ {0,\Gamma_{D}}(\Omega)\big{)},\]
and \(\tilde{Q}^{\delta}_{p-1}\) being the \(H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}\)-bounded, efficiently applicable projector onto \(\mathcal{S}^{-1}_{p-1}(\mathcal{T}^{\delta})\) defined as the adjoint of the projector "\(P_{\mathcal{T}}\)" from [22, Thm. 5.1], or, alternatively for \(p=1\), the projector "\(Q_{h}\)" from [19, Prop. 8], let
\[(\vec{p}^{\delta},u^{\delta}):=\operatorname*{argmin}_{(\vec{q},w)\in X^{ \delta}}\frac{1}{2}\|G(\vec{q},w)-(0,\tilde{Q}^{\delta}_{p-1}g)\|^{2}_{L_{2}( \Omega)^{d}\times L_{2}(\Omega)}. \tag{4.13}\]
Let \(P^{\delta}_{p-1}\in\mathcal{L}\big{(}H_{0,\Gamma_{N}}(\mathrm{div};\Omega),H_ {0,\Gamma_{N}}(\mathrm{div};\Omega)\big{)}\) be the projector onto \(RT_{p-1}(\mathcal{T}^{\delta})\cap H_{0,\Gamma_{N}}(\mathrm{div};\Omega)\) constructed in [10]. It has a commuting diagram property (being the essence behind this approach), and consequently for \(\vec{q}\in H_{0,\Gamma_{N}}(\mathrm{div};\Omega)\) with \(\mathrm{div}\,\vec{q}\in\mathcal{S}^{-1}_{p-1}(\mathcal{T}^{\delta})\), it satisfies
\[\|\vec{q}-P^{\delta}_{p-1}\vec{q}\|_{H(\mathrm{div};\Omega)}\lesssim\inf_{ \vec{z}\in RT^{-1}_{p-1}(\mathcal{T}^{\delta})}\|\vec{q}-\vec{z}\|_{L_{2}( \Omega)}.\]
Let \((\vec{p},u)\) denote the solution of the mild-weak system \(\vec{p}-A\nabla u=0\), \(\int_{\Omega}\vec{p}\cdot\nabla v\,dx=g(v)\) (\(v\in H^{1}_{0,\Gamma_{D}}(\Omega)\)), and let \((\underline{\vec{p}},\underline{u})\) denote this solution with \(g\) replaced by \(\tilde{Q}^{\delta}_{p-1}g\). Notice that \(G(\underline{\vec{p}},\underline{u})=(0,\tilde{Q}^{\delta}_{p-1}g)\) and so \(\mathrm{div}\,\underline{\vec{p}}\in\mathcal{S}^{-1}_{p-1}(\mathcal{T}^{\delta})\). From \(g\mapsto(\vec{p},u)\in\mathcal{L}\big{(}H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime},L_{2}(\Omega)^{d}\times H^{1}_{0,\Gamma_{D}}(\Omega)\big{)}\), and the quasi-optimality of the MINRES discretization (4.13) in \(H(\mathrm{div};\Omega)\times H^{1}(\Omega)\)-norm, we infer that
\[\begin{split}\|\vec{p}-\vec{p}^{\delta}\|_{L_{2}(\Omega)^{d}}&+\|u-u^{\delta}\|_{H^{1}(\Omega)}\\ &\lesssim\|g-\tilde{Q}^{\delta}_{p-1}g\|_{H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}}+\|\underline{\vec{p}}-\vec{p}^{\delta}\|_{H(\mathrm{div};\Omega)}+\|\underline{u}-u^{\delta}\|_{H^{1}(\Omega)}\\ &\lesssim\|g-\tilde{Q}^{\delta}_{p-1}g\|_{H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}}+\inf_{(\vec{z},w)\in X^{\delta}}\big{(}\|\underline{\vec{p}}-\vec{z}\|_{H(\mathrm{div};\Omega)}+\|\underline{u}-w\|_{H^{1}(\Omega)}\big{)}\\ &\leq\|g-\tilde{Q}^{\delta}_{p-1}g\|_{H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}}+\|\underline{\vec{p}}-P^{\delta}_{p-1}\underline{\vec{p}}\|_{H(\mathrm{div};\Omega)}+\inf_{w\in\mathcal{S}^{0}_{p}(\mathcal{T}^{\delta})\cap H^{1}_{0,\Gamma_{D}}(\Omega)}\|\underline{u}-w\|_{H^{1}(\Omega)}\\ &\lesssim\|g-\tilde{Q}^{\delta}_{p-1}g\|_{H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}}+\inf_{(\vec{z},w)\in X^{\delta}}\big{(}\|\underline{\vec{p}}-\vec{z}\|_{L_{2}(\Omega)^{d}}+\|\underline{u}-w\|_{H^{1}(\Omega)}\big{)}\\ &\lesssim\inf_{\vec{z}\in RT_{p-1}(\mathcal{T}^{\delta})\cap H_{0,\Gamma_{N}}(\mathrm{div};\Omega)}\|g+\mathrm{div}\,\vec{z}\|_{H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}}+\inf_{(\vec{z},w)\in X^{\delta}}\big{(}\|\vec{p}-\vec{z}\|_{L_{2}(\Omega)^{d}}+\|u-w\|_{H^{1}(\Omega)}\big{)}\\ &\lesssim\inf_{(\vec{z},w)\in X^{\delta}}\big{(}\|\vec{p}-\vec{z}\|_{L_{2}(\Omega)^{d}}+\|u-w\|_{H^{1}(\Omega)}\big{)},\end{split}\]
where for the last inequality we have used that for \(\vec{z}\in RT_{p-1}(\mathcal{T}^{\delta})\cap H_{0,\Gamma_{N}}(\mathrm{div};\Omega)\) and \(v\in H^{1}_{0,\Gamma_{D}}(\Omega)\), \(|g(v)+\int_{\Omega}\mathrm{div}\,\vec{z}\,v\,dx|=|\int_{\Omega}(\vec{p}-\vec{z})\cdot\nabla v\,dx|\). We conclude quasi-optimality of \((\vec{p}^{\delta},u^{\delta})\in X^{\delta}\) w.r.t. the \(L_{2}(\Omega)^{d}\times H^{1}(\Omega)\)-norm.
### Inf-sup conditions for Example 2.4(ii) (mild-weak formulation)
We take
\[X^{\delta}:=\mathcal{S}^{-1}_{p-1}(\mathcal{T}^{\delta})^{d}\times\mathcal{S}^{ 0}_{p}(\mathcal{T}^{\delta}).\]
For simplicity we assume that \(A=\mathrm{Id}\) and \(B=0\), so that \(G_{2}(\vec{q},w)=G_{2}(\vec{q})\).
Again the term \(\|\gamma_{D}w-h_{D}\|_{H^{\frac{1}{2}}(\Gamma_{D})}^{2}=\|\gamma_{D}w-h_{D}\|_{ \widetilde{H}^{-\frac{1}{2}}(\Gamma_{D})^{\prime}}^{2}\) can be handled as in Example 2.2. The dual norm can be discretized by replacing \(\widetilde{H}^{-\frac{1}{2}}(\Gamma_{D})\) by \(\mathcal{S}_{p}^{-1}(\mathcal{F}_{\Gamma_{D}}^{\delta})\).
From \(\int_{\Omega}\vec{q}\cdot\nabla v\,dx=\sum_{K\in\mathcal{T}^{\delta}}\{\int_{K }-\operatorname{div}\vec{q}\,v\,dx+\int_{\partial K}\vec{q}\cdot\vec{n}\,v\,ds\}\) where, when \(p\geq 2\), for \(K\in\mathcal{T}^{\delta}\), \(\operatorname{div}\vec{q}\in\mathcal{P}_{p-2}(K)\), and for \(e\in\mathcal{F}(\mathcal{T}^{\delta})\), \(\vec{q}\cdot\vec{n}\in\mathcal{P}_{p-1}(e)\), we conclude that the term \(\|G_{2}(\vec{q})-f_{2}\|_{H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}}\) can be handled as in Example 2.2. The dual norm can be discretized by replacing \(H^{1}_{0,\Gamma_{D}}(\Omega)\) by \(\mathcal{S}_{p+d-1}^{0}(\mathcal{T}^{\delta})\cap H^{1}_{0,\Gamma_{D}}(\Omega)\).
_Remark 4.8_ (Approach from [1]).: Consider the mild-weak formulation with homogeneous essential boundary data \(h_{D}=0\) (i.e., Example 2.4(i)), as well as \(h_{N}=0\), and, for simplicity, \(A=\operatorname{Id}\) and \(B=0\). Our approach was to determine \(Y^{\delta}\subset H^{1}_{0,\Gamma_{D}}(\Omega)\) that allows for the construction of \(\Pi^{\delta}\in\mathcal{L}(H^{1}_{0,\Gamma_{D}}(\Omega),Y^{\delta})\) with \(\int_{\Omega}\vec{q}\cdot\nabla(\operatorname{Id}-\Pi^{\delta})v\,dx=0\) (\(\vec{q}\in\mathcal{S}_{p-1}^{-1}(\mathcal{T}^{\delta})^{d}\), \(v\in H^{1}_{0,\Gamma_{D}}(\Omega)\)). Consequently, we could replace the term \(\|v\mapsto\int_{\Omega}\vec{q}\cdot\nabla v\,dx-g(v)\|_{H^{1}_{0,\Gamma_{D}}( \Omega)^{\prime}}^{2}\) in the least-squares minimization by the computable term \(\|v\mapsto\int_{\Omega}\vec{q}\cdot\nabla v\,dx-g(v)\|_{Y^{\delta}}^{2}\), without compromising quasi-optimality of the resulting least-squares solution \((\vec{p}^{\delta},u^{\delta})\in X^{\delta}\).
Under the additional conditions that \(g\in L_{2}(\Omega)\), and that the finite element space \(X^{\delta}\) w.r.t. \(\mathcal{T}^{\delta}\) is contained in \(H_{0,\Gamma_{N}}(\operatorname{div};\Omega)\times H^{1}_{0,\Gamma_{D}}(\Omega)\), for a finite element space \(Y^{\delta}\) w.r.t. \(\mathcal{T}^{\delta}\) for which there exists a mapping \(\Pi^{\delta}\in\mathcal{L}(H^{1}_{0,\Gamma_{D}}(\Omega),Y^{\delta})\) with \(\|h_{\delta}^{-1}(\operatorname{Id}-\Pi^{\delta})\|_{\mathcal{L}(H^{1}_{0,\Gamma_{D}}(\Omega),L_{2}(\Omega))}\lesssim 1\), the approach from [1] is to compute
\[\operatorname*{argmin}_{(\vec{q},w)\in X^{\delta}}\frac{1}{2}\big{(}\|\vec{q}- \nabla w\|_{L_{2}(\Omega)^{d}}^{2}+\|v\mapsto\int_{\Omega}\vec{q}\cdot\nabla v -gv\,dx\|_{Y^{\delta}}^{2}+\|h_{\delta}(\operatorname{div}\vec{q}+g)\|_{L_{2}( \Omega)}^{2}\big{)}.\]
So compared to our least-squares functional there is the additional term \(\|h_{\delta}(\operatorname{div}\vec{q}+g)\|_{L_{2}(\Omega)}^{2}\), whereas on the other hand the selection of \(Y^{\delta}\) is less demanding. Following [1], it can be shown that the resulting least squares solution denoted by \((\vec{p}^{\delta},u^{\delta})\) satisfies
\[\|\vec{p}-\vec{p}^{\delta}\|_{L_{2}(\Omega)^{d}}+ \|u-u^{\delta}\|_{H^{1}(\Omega)}\] \[\lesssim\inf_{(\vec{q},w)\in X^{\delta}}\|\vec{p}-\vec{q}\|_{L_{ 2}(\Omega)^{d}}+\|u-w\|_{H^{1}(\Omega)}+\|h_{\delta}\operatorname{div}(\vec{p}- \vec{q})\|_{L_{2}(\Omega)}.\]
This estimate does not imply quasi-optimality, but under usual regularity conditions w.r.t. Hilbertian Sobolev spaces optimal rates can be demonstrated. The assumption \(g\in L_{2}(\Omega)\) can be weakened by replacing \(g\) by an approximation from a finite element space w.r.t. \(\mathcal{T}^{\delta}\).
### Inf-sup condition for Example 2.5 (ultra-weak formulation)
We restrict our analysis to the case that \(|\Gamma_{D}|>0\), \(A=\operatorname{Id}\), and \(B=0\). Then for \((\vec{q},w)\in X=L_{2}(\Omega)^{d}\times L_{2}(\Omega)\), and \((\vec{z},v)\in Y=H_{0,\Gamma_{N}}(\operatorname{div};\Omega)\times H^{1}_{0, \Gamma_{D}}(\Omega)\),
\[(G(\vec{q},w))(\vec{z},v)=\int_{\Omega}\vec{q}\cdot\vec{z}+w\operatorname{div }\vec{z}+\vec{q}\cdot\nabla v\,dx. \tag{4.14}\]
So far, for the lowest order case of
\[X^{\delta}=\mathcal{S}_{0}^{-1}(\mathcal{T}^{\delta})^{d}\times\mathcal{S}_{0}^{ -1}(\mathcal{T}^{\delta}),\]
we are able to construct a suitable Fortin interpolator taking
\[Y^{\delta}=\big{(}RT_{0}(\mathcal{T}^{\delta})\times\mathcal{S}_{d}^{0}(\mathcal{ T}^{\delta})\big{)}\cap Y. \tag{4.15}\]
We will utilise the Crouzeix-Raviart finite element space
\[CR_{\Gamma_{D}}(\mathcal{T}^{\delta}):=\{v\in\mathcal{S}_{1}^{-1}(\mathcal{T}^{\delta})\colon\int_{e}[v]_{e}\,ds=0\ (e\in\mathcal{F}(\mathcal{T}^{\delta}),e\not\subset\Gamma_{N})\},\]
where \([v]_{e}\) denotes the jump of \(v\) over \(e\) (with \(v\) extended with zero outside \(\Omega\)). With the abbreviation
\[RT_{\Gamma_{N}}(\operatorname{div}0;\mathcal{T}^{\delta}):=RT_{0}(\mathcal{T}^{\delta})\cap H_{0,\Gamma_{N}}(\operatorname{div}0;\Omega),\]
and with \(\nabla_{\mathcal{T}^{\delta}}\) denoting the piecewise gradient, we have the following generalisation of [1, Thm. 4.1] that was restricted to \(d=2\).
**Lemma 4.9** (discrete Helmholtz decomposition).: _It holds that_
\[\mathcal{S}_{0}^{-1}(\mathcal{T}^{\delta})^{d}=RT_{\Gamma_{N}}(\operatorname{ div}0;\mathcal{T}^{\delta})\oplus^{\perp_{L_{2}(\Omega)^{d}}}\nabla_{ \mathcal{T}^{\delta}}CR_{\Gamma_{D}}(\mathcal{T}^{\delta}).\]
Proof.: For \((\vec{q},w)\in RT_{\Gamma_{N}}(\operatorname{div}0;\mathcal{T}^{\delta}) \times CR_{\Gamma_{D}}(\mathcal{T}^{\delta})\), a piecewise integration-by-parts shows that
\[\int_{\Omega}\vec{q}\cdot\nabla_{\mathcal{T}^{\delta}}w\,dx=\sum_{e\in \mathcal{F}(\mathcal{T}^{\delta})}\int_{e}[w]_{e}\,\vec{q}\cdot\vec{n}\,ds=0.\]
It is known that, besides \(\nabla_{\mathcal{T}^{\delta}}CR_{\Gamma_{D}}(\mathcal{T}^{\delta})\), also \(RT_{\Gamma_{N}}(\operatorname{div}0;\mathcal{T}^{\delta})\) is in \(\mathcal{S}_{0}^{-1}(\mathcal{T}^{\delta})^{d}\).
From \(\operatorname{div}\colon RT_{0}(\mathcal{T}^{\delta})\cap H_{0,\Gamma_{N}}(\operatorname{div};\Omega)\to\mathcal{S}_{0}^{-1}(\mathcal{T}^{\delta})\), and \(\dim\mathcal{S}_{0}^{-1}(\mathcal{T}^{\delta})=\#\mathcal{T}^{\delta}\), one infers
\[\dim RT_{\Gamma_{N}}(\operatorname{div}0;\mathcal{T}^{\delta})\geq\#\mathcal{F}(\mathcal{T}^{\delta})-\#\{e\in\mathcal{F}(\mathcal{T}^{\delta})\colon e\subset\Gamma_{N}\}-\#\mathcal{T}^{\delta}.\]
From \(\dim CR_{\Gamma_{D}}(\mathcal{T}^{\delta})=\#\mathcal{F}(\mathcal{T}^{\delta}) -\#\{e\in\mathcal{F}(\mathcal{T}^{\delta})\colon e\subset\Gamma_{D}\}\) and \(\nabla_{\mathcal{T}^{\delta}}\) being injective on \(CR_{\Gamma_{D}}(\mathcal{T}^{\delta})\), and \((d+1)\#\mathcal{T}^{\delta}=2\#\mathcal{F}(\mathcal{T}^{\delta})-\#\{e\in \mathcal{F}(\mathcal{T}^{\delta})\colon e\subset\partial\Omega\}\), we conclude that
\[\dim\mathcal{S}_{0}^{-1}(\mathcal{T}^{\delta})^{d}\leq\dim\nabla_{\mathcal{T }^{\delta}}CR_{\Gamma_{D}}(\mathcal{T}^{\delta})+\dim RT_{\Gamma_{N}}( \operatorname{div}0;\mathcal{T}^{\delta}),\]
which completes the proof.
**Theorem 4.10**.: _For \(G\), \(X^{\delta}\), and \(Y^{\delta}\) from (4.14)-(4.15), it holds that_
\[\inf_{0\neq(\vec{q},w)\in X^{\delta}}\frac{\sup_{0\neq(\vec{z},v)\in Y^{ \delta}}\frac{|(G(\vec{q},w))(\vec{z},v)|}{\|(\vec{z},v)\|_{Y}}}{\|G(\vec{q},w )\|_{Y^{\prime}}}\gtrsim 1.^{4}\]
Proof.: We construct a Fortin interpolator \(\Pi^{\delta}\colon Y\to Y^{\delta}\) of the form \(\Pi^{\delta}(\vec{z},v)=(\Pi_{1}^{\delta}\vec{z},\Pi_{2}^{\delta}(\vec{z},v))\).
Let \(P_{0}^{\delta}\) denote the \(H(\operatorname{div};\Omega)\)-bounded projector \(H_{0,\Gamma_{N}}(\operatorname{div};\Omega)\to RT_{0}(\mathcal{T}^{\delta}) \cap H_{0,\Gamma_{N}}(\operatorname{div};\Omega)\) from [1], which has the commuting diagram property
\[\operatorname{ran}\operatorname{div}(\operatorname{Id}-P_{0}^{\delta})\perp_{ L_{2}(\Omega)}\mathcal{S}_{0}^{-1}(\mathcal{T}^{\delta}).\]
With \(Q^{\delta}\) being the \(L_{2}(\Omega)^{d}\)-orthogonal projector onto \(RT_{\Gamma_{N}}(\operatorname{div}0;\mathcal{T}^{\delta})\), we set \(\Pi_{1}^{\delta}=P_{0}^{\delta}+Q^{\delta}(\operatorname{Id}-P_{0}^{\delta}) \in\mathcal{L}\big{(}H_{0,\Gamma_{N}}(\operatorname{div};\Omega),RT_{0}( \mathcal{T}^{\delta})\cap H_{0,\Gamma_{N}}(\operatorname{div};\Omega)\big{)}\).
Writing, for \((\vec{q},w)\in X^{\delta}\), \(\vec{q}=\vec{r}+\nabla_{\mathcal{T}^{\delta}}t\), where \((\vec{r},t)\in RT_{\Gamma_{N}}(\operatorname{div}0;\mathcal{T}^{\delta})\times CR _{\Gamma_{D}}(\mathcal{T}^{\delta})\), the definition of \(\Pi_{1}^{\delta}\), Lemma 4.9, and the fact that \(H_{0,\Gamma_{N}}(\operatorname{div}0;\Omega)\perp_{L_{2}(\Omega)^{d}}\nabla H ^{1}_{0,\Gamma_{D}}(\Omega)\) show that for \((\vec{z},v)\in Y\) it holds that
\[(G(\vec{q},w))((\operatorname{Id}-\Pi)(\vec{z},v))\] \[=\int_{\Omega}(\vec{r}+\nabla_{\mathcal{T}^{\delta}}t)\cdot\left( (\operatorname{Id}-\Pi_{1}^{\delta})\vec{z}+\nabla(v-\Pi_{2}^{\delta}(\vec{z},v)\right)+w\operatorname{div}(\operatorname{Id}-\Pi_{1}^{\delta})\vec{z}\,dx \tag{4.16}\] \[=\int_{\Omega}\nabla_{\mathcal{T}^{\delta}}t\cdot\left(( \operatorname{Id}-P_{0}^{\delta})\vec{z}+\nabla(v-\Pi_{2}^{\delta}(\vec{z},v)) \right)dx.\]
It remains to define \(\Pi_{2}^{\delta}(\vec{z},v)\in\mathcal{S}^{0}_{d}(\mathcal{T}^{\delta})\cap H ^{1}_{0,\Gamma_{D}}(\Omega)\) such that the last expression vanishes for all \(t\in CR_{\Gamma_{D}}(\mathcal{T}^{\delta})\) and \((\vec{z},v)\in Y\). Let \(\tilde{v}\in CR_{\Gamma_{D}}(\mathcal{T}^{\delta})\) solve
\[\int_{\Omega}\nabla_{\mathcal{T}^{\delta}}t\cdot\nabla_{\mathcal{T}^{\delta} }\tilde{v}\,dx=\int_{\Omega}\nabla_{\mathcal{T}^{\delta}}t\cdot\left(( \operatorname{Id}-P_{0}^{\delta})\vec{z}+\nabla v\right)\right)dx\quad(t\in CR _{\Gamma_{D}}(\mathcal{T}^{\delta})).\]
It satisfies
\[\|\nabla_{\mathcal{T}^{\delta}}\tilde{v}\|_{L_{2}(\Omega)^{d}}\leq\|( \operatorname{Id}-P_{0}^{\delta})\vec{z}\|_{L_{2}(\Omega)}+|v|_{H^{1}(\Omega) }\lesssim\|\vec{z}\|_{H(\operatorname{div};\Omega)}+|v|_{H^{1}(\Omega)}.\]
There exists a conforming companion operator \(E_{\mathcal{T}^{\delta}}\colon CR_{\Gamma_{D}}(\mathcal{T}^{\delta})\to \mathcal{S}^{0}_{d}(\mathcal{T}^{\delta})\cap H^{1}_{0,\Gamma_{D}}(\Omega)\) with \(\operatorname{ran}(\nabla E_{\mathcal{T}^{\delta}}-\nabla_{\mathcal{T}^{ \delta}})\perp_{L_{2}(\Omega)^{d}}\mathcal{S}^{-1}_{0}(\mathcal{T}^{\delta})\), and \(\|\nabla E_{\mathcal{T}^{\delta}}\cdot\|_{L_{2}(\Omega)^{d}}\lesssim\|\nabla_ {\mathcal{T}^{\delta}}\cdot\|_{L_{2}(\Omega)}\) on \(CR_{\Gamma_{D}}(\mathcal{T}^{\delta})\) (one can take the operator \(J_{2}\) from [13, Proof of Prop. 2.3], see [10] for a generalisation to \(d\geq 2\)). Defining \(\Pi_{2}^{\delta}(\vec{z},v):=E_{\mathcal{T}^{\delta}}\tilde{v}\), we conclude that (4.16) vanishes for all \(t\in CR_{\Gamma_{D}}(\mathcal{T}^{\delta})\), and that \(\|\Pi_{2}^{\delta}(\vec{z},v)\|_{H^{1}(\Omega)}\lesssim\|\vec{z}\|_{H( \operatorname{div};\Omega)}+\|v\|_{H^{1}(\Omega)}\), so that \(\Pi^{\delta}\in\mathcal{L}(Y,Y^{\delta})\) is a valid Fortin interpolator.
_Remark 4.11_.: Although \(G=(G_{1},G_{2})\in\mathcal{L}\mathrm{is}\big{(}X,H_{0,\Gamma_{N}}(\operatorname{div};\Omega)^{\prime}\times H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}\big{)}\), in this subsection we did not verify inf-sup stability for \(G_{1}\) and \(G_{2}\) separately to conclude inf-sup stability for \(G\) by Lemma 3.2. The reason is that we did not manage to verify inf-sup stability for \(G_{1}(\vec{q},w)(\vec{z})=\int_{\Omega}\vec{q}\cdot\vec{z}+w\operatorname{div}\vec{z}\,dx\). We notice that in the context of a DPG method, in [11, Sect. 3] inf-sup stability has been demonstrated separately for \(G_{1}\) and \(G_{2}\), even for trial spaces \(X^{\delta}\) of general polynomial degree.
### Preconditioners
At several places, it was desirable or, in case of fractional norms, even essential to have an efficiently evaluable (uniform) preconditioner \(K^{\delta}=K^{\delta^{\prime}}\in\mathcal{L}\mathrm{is}(Z^{\delta^{\prime}},Z^{\delta})\) available, where \(Z^{\delta}\) was of one of the following types:
1. \(\mathcal{S}^{0}_{p}(\mathcal{T}^{\delta})\) or \(\mathcal{S}^{0}_{p}(\mathcal{T}^{\delta})\cap H^{1}_{0,\Gamma_{D}}(\Omega)\) equipped with \(\|\cdot\|_{H^{1}(\Omega)}\),
2. \(\mathcal{S}^{-1}_{p}(\{e\in\mathcal{F}(\mathcal{T}^{\delta})\colon e\subset\Gamma_{D}\})\) equipped with \(\|\cdot\|_{\widetilde{H}^{-\frac{1}{2}}(\Gamma_{D})}\),
3. \(\mathcal{S}^{0}_{p}(\{e\in\mathcal{F}(\mathcal{T}^{\delta})\colon e\subset\Gamma_{N}\})\) equipped with \(\|\cdot\|_{H^{\frac{1}{2}}_{00}(\Gamma_{N})}\),
4. \(RT_{0}(\mathcal{T}^{\delta})\cap H_{0,\Gamma_{N}}(\operatorname{div};\Omega)\) equipped with \(\|\cdot\|_{H(\operatorname{div};\Omega)}\).
When \(\mathcal{T}^{\delta}\) is constructed from recurrent refinements by a fixed refinement rule starting from a fixed coarse partition, multi-level preconditioners of linear computational complexity are available for all four cases (see [13] for Case (ii), and [16, AFW07] or [15] for Case (iv)). Alternatives for the fractional Sobolev norms are provided by 'operator preconditioners' (see [10, SvV20a, SvV20b]).
## 5. Numerical experiments
On a square domain \(\Omega=(0,1)^{2}\) with Neumann and Dirichlet boundaries \(\Gamma_{N}=\{0\}\times[0,1]\) and \(\Gamma_{D}=\overline{\partial\Omega\setminus\Gamma_{N}}\), for \(g\in H^{1}_{0,\Gamma_{D}}(\Omega)^{\prime}\), \(h_{D}\in H^{\frac{1}{2}}(\Gamma_{D})\), and \(h_{N}\in H^{-\frac{1}{2}}(\Gamma_{N})\) we consider the Poisson problem of finding \(u\in H^{1}(\Omega)\) that satisfies
\[\left\{\begin{array}{rl}-\Delta u\,=&g\quad\text{ on }\Omega,\\ u\,=&h_{D}\quad\text{ on }\Gamma_{D},\\ \nabla u\cdot\vec{n}\,=&h_{N}\quad\text{ on }\Gamma_{N}.\end{array}\right.\]
In particular, we take \(g=0\), \(h_{D}(x,y)=\cos\frac{\pi x}{2}\), and \(h_{N}=1\). Because of the incompatibility of the Dirichlet and Neumann data at \(\Gamma_{D}\cap\Gamma_{N}\), the pair \((\vec{p},u):=(\nabla u,u)\), consisting of the gradient of the solution and the solution itself, has (mild) singularities at the points \((0,0)\) and \((0,1)\).
We consider the above problem in the first order ultra-weak formulation from Example 2.5. We consider a family of conforming triangulations \(\{\mathcal{T}^{\delta}\}_{\delta}\) of \(\Omega\), where each triangulation is created using newest vertex bisections starting from an initial triangulation that consists of \(4\) triangles created by cutting \(\Omega\) along its diagonals (one bisection step is sketched below). The interior vertex of the initial triangulation is labelled as the 'newest vertex' of all four triangles in this initial mesh. Given some polynomial degree \(p\in\mathbb{N}_{0}\), we set
\[X^{\delta}:=\mathcal{S}_{p}^{-1}(\mathcal{T}^{\delta})^{d}\times\mathcal{S}_{ p}^{-1}(\mathcal{T}^{\delta}).\]
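A minimal sketch (ours) of the bisection rule just described: each triangle is stored as a vertex triple whose last entry is its newest vertex, so that the refinement edge connects the first two entries; the completion needed to keep a locally refined partition conforming is omitted.

```python
def bisect(tri, vertices):
    """Newest vertex bisection of one triangle.

    tri      : (a, b, c) vertex indices; c is the newest vertex, (a, b) the refinement edge.
    vertices : list of 2D points, extended in place with the new midpoint.
    Returns the two children, both having the midpoint as their newest vertex.
    """
    a, b, c = tri
    m = len(vertices)
    vertices.append([(vertices[a][0] + vertices[b][0]) / 2.0,
                     (vertices[a][1] + vertices[b][1]) / 2.0])
    return (a, c, m), (c, b, m)

# Initial partition of (0,1)^2: four triangles obtained by cutting along the diagonals,
# with the interior vertex (index 4) as newest vertex of all of them.
verts = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]]
tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
tris = [child for t in tris for child in bisect(t, verts)]
print(len(tris), "triangles after one uniform bisection sweep")
```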
With \((G(\vec{p},\underline{u}))(\vec{\mu},\lambda):=\int_{\Omega}\vec{p}\cdot\vec{\mu}+\underline{u}\,\mathrm{div}\,\vec{\mu}+\vec{p}\cdot\nabla\lambda\,dx\), for a suitable finite dimensional subspace \(Y^{\delta}=Y^{\delta}(X^{\delta})\subset Y:=H_{0,\Gamma_{N}}(\mathrm{div};\Omega)\times H^{1}_{0,\Gamma_{D}}(\Omega)\) the practical MINRES method computes \((\vec{p}^{\delta},u^{\delta},\vec{\mu}^{\delta},\lambda^{\delta})\in X^{\delta}\times Y^{\delta}\) such that
\[\begin{split}\langle(\vec{\mu}^{\delta},\lambda^{\delta}),(\underline{\vec{\mu}}^{\delta},\underline{\lambda}^{\delta})\rangle_{H(\mathrm{div};\Omega)\times H^{1}(\Omega)}+(G(\vec{p}^{\delta},u^{\delta}))(\underline{\vec{\mu}}^{\delta},\underline{\lambda}^{\delta})+(G(\underline{\vec{p}}^{\delta},\underline{u}^{\delta}))(\vec{\mu}^{\delta},\lambda^{\delta})\\ =\int_{\Gamma_{D}}h_{D}\,\underline{\vec{\mu}}^{\delta}\cdot\vec{n}\,ds+g(\underline{\lambda}^{\delta})+\int_{\Gamma_{N}}h_{N}\,\underline{\lambda}^{\delta}\,ds=:f(\underline{\vec{\mu}}^{\delta},\underline{\lambda}^{\delta})\end{split}\]
for all \((\underline{\vec{p}}^{\delta},\underline{u}^{\delta},\underline{\vec{\mu}}^{\delta},\underline{\lambda}^{\delta})\in X^{\delta}\times Y^{\delta}\).
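In matrix form the above variational problem is a symmetric saddle-point system; the sketch below (ours, with dense random placeholders for the assembled \(Y^{\delta}\) Gram matrix, the form \(G\), and the load functional \(f\)) shows the block structure and the equivalence with an SPD system after eliminating the \(Y^{\delta}\)-component:

```python
import numpy as np

rng = np.random.default_rng(2)
nX, nY = 12, 30                     # dim X^delta, dim Y^delta (nY >= nX, needed for inf-sup)

T = rng.standard_normal((nY, nY))
M_Y = T @ T.T + nY * np.eye(nY)     # Gram matrix of the H(div) x H^1 inner product on Y^delta
B = rng.standard_normal((nY, nX))   # B[l, j] = (G phi_j^X)(phi_l^Y)
f = rng.standard_normal(nY)         # f[l] = f(phi_l^Y)

# [ M_Y  B ] [lam]   [f]
# [ B^T  0 ] [ u ] = [0]
S = np.block([[M_Y, B], [B.T, np.zeros((nX, nX))]])
sol = np.linalg.solve(S, np.concatenate([f, np.zeros(nX)]))
lam, u = sol[:nY], sol[nY:]

# Eliminating lam = M_Y^{-1}(f - B u) gives the SPD system B^T M_Y^{-1} B u = B^T M_Y^{-1} f,
# i.e. the analogue of (3.9) with K^delta the exact inverse of the Y^delta Gram matrix.
print("Schur residual:", np.linalg.norm(B.T @ np.linalg.solve(M_Y, f - B @ u)))
```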
As we have seen, when \(Y^{\delta}\) is selected such that
\[\gamma^{\delta}=\inf_{0\neq(\vec{p}^{\delta},u^{\delta})\in X^{\delta}}\frac{\sup_{0\neq(\vec{\mu}^{\delta},\lambda^{\delta})\in Y^{\delta}}\frac{|(G(\vec{p}^{\delta},u^{\delta}))(\vec{\mu}^{\delta},\lambda^{\delta})|}{\|(\vec{\mu}^{\delta},\lambda^{\delta})\|_{Y}}}{\|G(\vec{p}^{\delta},u^{\delta})\|_{Y^{\prime}}}\ \gtrsim 1,\]
then \((\vec{p}^{\delta},u^{\delta})\) is a quasi-best approximation from \(X^{\delta}\) to \((\vec{p},u)\) w.r.t. the norm on \(X:=L_{2}(\Omega)^{d}\times L_{2}(\Omega)\).
For \(p\in\mathbb{N}_{0}\), we take
\[Y^{\delta}:=\big{(}RT_{p}(\mathcal{T}^{\delta})\times\mathcal{S}^{0}_{d+p}( \mathcal{T}^{\delta})\big{)}\cap Y,\]
where thus \(d=2\). Theorem 4.10 shows that for \(p=0\) the above uniform inf-sup condition is satisfied. Using that, thanks to \(G\in\mathcal{L}\mathrm{is}(X,Y^{\prime})\),
\[\gamma^{\delta}\eqsim\tilde{\gamma}^{\delta}:=\inf_{0\neq(\vec{p}^{\delta},u^{\delta})\in X^{\delta}}\sup_{0\neq(\vec{\mu}^{\delta},\lambda^{\delta})\in Y^{\delta}}\frac{|(G(\vec{p}^{\delta},u^{\delta}))(\vec{\mu}^{\delta},\lambda^{\delta})|}{\|(\vec{\mu}^{\delta},\lambda^{\delta})\|_{Y}\|(\vec{p}^{\delta},u^{\delta})\|_{X}},\]
for \(p\in\{1,2,3,4\}\) we verified numerically whether our choice of \(Y^{\delta}\) gives inf-sup stability. The results given in Figure 2 indicate that this is the case.
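The quantity \(\tilde{\gamma}^{\delta}\) can be evaluated as the square root of the smallest eigenvalue of the generalized eigenproblem \(\mathbf{B}^{\top}\mathbf{M}_{Y}^{-1}\mathbf{B}\,\mathbf{x}=\lambda\,\mathbf{M}_{X}\mathbf{x}\), where \(\mathbf{B}\) is the matrix of the bilinear form between bases of \(X^{\delta}\) and \(Y^{\delta}\) and \(\mathbf{M}_{X}\), \(\mathbf{M}_{Y}\) are the corresponding Gram matrices. A sketch of this computation (ours, with placeholder matrices):

```python
import numpy as np
from scipy.linalg import eigh, solve

def discrete_infsup(B, M_X, M_Y):
    """inf_x sup_y (y^T B x) / (||y||_{M_Y} ||x||_{M_X})."""
    # The sup over y equals ||B x||_{M_Y^{-1}}, so gamma^2 is the smallest
    # eigenvalue of B^T M_Y^{-1} B x = lambda M_X x.
    A = B.T @ solve(M_Y, B, assume_a="pos")
    lam = eigh(A, M_X, eigvals_only=True)
    return np.sqrt(max(lam[0], 0.0))

rng = np.random.default_rng(3)
nX, nY = 10, 25
B = rng.standard_normal((nY, nX))
M_X = np.eye(nX)
T = rng.standard_normal((nY, nY)); M_Y = T @ T.T + nY * np.eye(nY)
print("gamma_tilde =", discrete_infsup(B, M_X, M_Y))
```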
The practical MINRES method comes with a built-in a posteriori error estimator given by \(\mathcal{E}(\vec{p}^{\delta},u^{\delta},f)=\sqrt{\sum_{T\in\mathcal{T}^{\delta}}\|\vec{\mu}^{\delta}\|^{2}_{H(\mathrm{div};T)}+\|\lambda^{\delta}\|^{2}_{H^{1}(T)}}\) (see Remark 3.9). For \(p\in\{0,1,2,3\}\) we performed numerical experiments with uniform and adaptively refined triangulations. Concerning the latter, we have used the element-wise error indicators \(\sqrt{\|\vec{\mu}^{\delta}\|^{2}_{H(\mathrm{div};T)}+\|\lambda^{\delta}\|^{2}_{H^{1}(T)}}\) to drive an AFEM with Dörfler marking with marking parameter \(\theta=0.6\) (a sketch of this marking step is given at the end of this section). We have seen that the estimator \(\mathcal{E}(\vec{p}^{\delta},u^{\delta},f)\) is efficient, but because the data-oscillation term can be of the order of the best approximation error, it is not necessarily reliable. Therefore instead of using the a posteriori error estimator to assess the quality of our MINRES method, as a measure for the error we computed the \(X\)-norm of the difference with the MINRES solution for \(p=4\) on the same triangulation, denoted as \((\vec{p}^{\delta}_{4},u^{\delta}_{4})\). The results given in Figure 3 show that for uniform refinements increasing \(p\) does not improve the
order of convergence, due to the limited regularity of the solution in the Hilbertian Sobolev scale.
The results indicate that the solution is just in \(H^{2}(\Omega)\). Furthermore we see that adaptivity does not yield improved convergence rates. We expect that the reason for the latter is that, with our current choice of \(Y^{\delta}\), the data oscillation term dominates our error estimator, so that the local error indicators do not provide the correct information where to refine.
For this reason, we repeat the experiment from Figure 3 using the higher order test space
\[Y^{\delta}:=\big{(}RT_{p+1}(\mathcal{T}^{\delta})\times\mathcal{S}^{0}_{d+p+1} (\mathcal{T}^{\delta})\big{)}\cap Y.\]
Now we observe that the a posteriori error estimator is proportional (and actually quite close) to the error notion \(\|(\vec{p}_{4}^{\delta},u_{4}^{\delta})-(\vec{p}^{\delta},u^{\delta})\|_{X}\), and so we expect it indeed to be also reliable. In Figure 4 we give the number of DoFs vs. \(\mathcal{E}\big{(}\vec{p}^{\delta},u^{\delta},f\big{)}\). As expected, the rates for uniform refinements are as before, but now we observe for the adaptive routine the generally best possible rates allowed by the order of approximation of \(X^{\delta}\).
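For completeness, a sketch (ours) of the Dörfler marking step that drives the adaptive refinements reported above: the elements with the largest indicators are marked until they capture a \(\theta\)-fraction of the total squared estimator (conventions for where \(\theta\) enters vary in the literature; here \(\sum_{T\in\mathcal{M}}\eta_{T}^{2}\geq\theta\sum_{T}\eta_{T}^{2}\) with \(\theta=0.6\)).

```python
import numpy as np

def doerfler_mark(eta, theta=0.6):
    """Return indices of a minimal set M with sum_{T in M} eta_T^2 >= theta * sum_T eta_T^2.

    eta: per-element indicators, e.g. sqrt(||mu||_{H(div;T)}^2 + ||lambda||_{H^1(T)}^2).
    """
    eta2 = np.asarray(eta, dtype=float) ** 2
    order = np.argsort(eta2)[::-1]                       # largest indicators first
    cumulative = np.cumsum(eta2[order])
    n_marked = int(np.searchsorted(cumulative, theta * eta2.sum())) + 1
    return order[:n_marked]

print("marked elements:", doerfler_mark([0.5, 0.1, 0.05, 0.3, 0.02]))
```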
## 6. Conclusion
In MINRES discretisations of PDEs often parts of the residual are measured in fractional or negative Sobolev norms. In this paper a general approach has been
presented to turn such an 'impractical' MINRES method into a practical one, without compromising quasi-optimality of the obtained numerical approximation, assuming that the test space that is employed is chosen such that a (uniform) inf-sup condition is valid. The resulting linear system is of a symmetric saddle-point form, but can be replaced by a symmetric positive definite system by the application of a (uniform) preconditioner at the test space, while still preserving quasi-optimality. For four different formulations of scalar second order elliptic PDEs, the aforementioned uniform inf-sup condition has been verified for pairs of finite element trial and test spaces. Numerical results have been presented for an ultra-weak first order system formulation of Poisson's problem that allows for a very convenient treatment of inhomogeneous mixed Dirichlet and Neumann boundary conditions.
For the purpose of numerical approximation, reformulating a PDE as a residual minimisation problem has the advantages that the resulting linear system is symmetric positive definite, that the norm of the residual provides an a posteriori error estimator, and that general inhomogeneous boundary conditions can be handled. In many minimal residual formulations, one or more terms of the residual are measured in negative or fractional Sobolev norms. In this work, a general approach is provided to replace these norms by efficiently evaluable expressions, without compromising the quasi-optimality of the resulting numerical solution. The approach is applied to four formulations of model second order PDEs, for which the required inf-sup conditions are verified. As one example, numerical results are presented for Poisson's problem with mixed inhomogeneous Dirichlet-Neumann boundary conditions.
2305.02577 | Text Reading Order in Uncontrolled Conditions by Sparse Graph
Segmentation | Text reading order is a crucial aspect in the output of an OCR engine, with a
large impact on downstream tasks. Its difficulty lies in the large variation of
domain specific layout structures, and is further exacerbated by real-world
image degradations such as perspective distortions. We propose a lightweight,
scalable and generalizable approach to identify text reading order with a
multi-modal, multi-task graph convolutional network (GCN) running on a sparse
layout based graph. Predictions from the model provide hints of bidimensional
relations among text lines and layout region structures, upon which a
post-processing cluster-and-sort algorithm generates an ordered sequence of all
the text lines. The model is language-agnostic and runs effectively across
multi-language datasets that contain various types of images taken in
uncontrolled conditions, and it is small enough to be deployed on virtually any
platform including mobile devices. | Renshen Wang, Yasuhisa Fujii, Alessandro Bissacco | 2023-05-04T06:21:00 | http://arxiv.org/abs/2305.02577v1 | # Text Reading Order in Uncontrolled Conditions by Sparse Graph Segmentation
###### Abstract
Text reading order is a crucial aspect in the output of an OCR engine, with a large impact on downstream tasks. Its difficulty lies in the large variation of domain specific layout structures, and is further exacerbated by real-world image degradations such as perspective distortions. We propose a lightweight, scalable and generalizable approach to identify text reading order with a multi-modal, multi-task graph convolutional network (GCN) running on a sparse layout based graph. Predictions from the model provide hints of bidimensional relations among text lines and layout region structures, upon which a post-processing cluster-and-sort algorithm generates an ordered sequence of all the text lines. The model is language-agnostic and runs effectively across multi-language datasets that contain various types of images taken in uncontrolled conditions, and it is small enough to be deployed on virtually any platform including mobile devices.
Keywords: Multi-modality, bidimensional ordering relations, graph convolutional networks.
## 1 Introduction
Optical character recognition (OCR) technology has been developed to extract text reliably from various types of image sources [4]. Key components of an OCR system include text detection, recognition and layout analysis. As machine learning based digital image processing systems are nowadays ubiquitous and widely applied, OCR has become a crucial first step in the pipeline to provide text input for downstream tasks such as information extraction, text selection and screen reading.
Naturally, most image-to-text applications require very accurate OCR results to work well. This requirement is not only on text recognition -- reading order among the recognized text lines is almost always as important as the recognition quality. The reason is self-evident for text selection (copy-paste) and text-to-speech tasks. And for structured document understanding like LayoutLM [33], DocFormer [3], FormNet [18], etc., the order of the input text also has a profound effect as most of these models have positional encoding attached to input text features, and a sequential labeling task for output. Input text order can sometimes be the key factor for the successful extraction of certain entities.
Depending on the text layout, the difficulty of deciding its reading order varies greatly. It can be as simple as sorting all the text lines by y-coordinates, but can also be hard like the images in Figure 1. Even if we exclude corner cases like these, there are still complexities brought by the diversity of layout structures which are often domain specific. Previous studies have tackled the problem in different ways. Rule based approaches like [1, 27, 9] usually aim at one specific domain, while learning based approaches like [6, 21, 32] are more general but have scalability issues (more discussions in the following section).
In this paper, we propose a composite method that uses both machine learning model and rule based sorting to achieve best results. It is based on the observation from [1] that most reading order sequences are in one of the two patterns -- column-wise and row-wise -- as illustrated in Figure 2.
We use a graph convolutional network that takes spatial-image features from the input layout and image, and segments the layout into two types of regions where the paragraphs can be properly sorted by the type of their patterns. A \(\beta\)-skeleton graph built on boxes [31] enables efficient graph convolutions while also providing edge bounding boxes for RoI (regions of interest) pooling from the image feature map. A post-processing cluster-and-sort algorithm finalizes the overall reading order based on model predictions. This unique combination gives us an effective, lightweight, scalable and generalizable reading order solution.
## 2 Related Work
Two types of related work are discussed in this section. The first subsection includes previous reading order efforts, and the second subsection discusses other multi-modal image-text-spatial models that share some of the components with our approach.
### Reading Order Detection
Previous studies have tackled the reading order problem in various ways. We roughly categorize them into rule based sorting [5, 1, 27, 9] and machine-learning based sequence prediction [6, 21, 32, 29], etc.
Figure 1: Hard examples for text reading order. (a) A cropped image of a menu with dish names and prices, where a correct reading order necessarily needs correct association between each dish name and its price, which is a hard task for humans without the full image context due to the perspective distortion in the image. (b) A text layout intentionally made to have two different reading order interpretations, both valid, but with completely opposite meanings.
Topological sort was proposed in [5] for document layout analysis where partial orders are based on x/y interval overlaps among text lines. It can produce reading order patterns like Figure 2 (a) for multi-column text layouts. A bidimensional relation rule proposed in [1] provides similar topological rules, and in addition provides a row-wise rule by inverting the x/y axes from column-wise. An argumentation based approach in [9] works on similar rules derived from text block relations. For large text layout with hierarchies, XY-Cut [27; 13] can be an effective way for some layout types to order all the text blocks top-to-bottom and left-to-right. These rule based approaches can work accurately for documents in certain domains. But without extra signals, they will fail for out-of-domain cases like Figure 2 (b).
Machine learning based approaches are designed to learn from training examples across different domains to enable a general solution. The data mining approach in [6] learns partial order among text blocks from their spatial features and identifies reading order chains from the partial orders. A similar approach in [29] trains a model to predict pairwise order relations among text regions and curves for handwritten documents. The major limitation is that any partial order between two entities are derived from their own spatial features without the layout structure information in their neighborhood. So these models may not be able to identify the layout structure among a group of text lines and therefore fail to find the correct pattern.
Graph convolutional networks and transformer models provide mechanisms for layout-aware signals by interactions between layout entities. A text reorganization model introduced in [21] uses a graph convolutional encoder and a pointer network decoder to reorder text blocks. With a fully-connected graph at its input, the graph encoder functions similarly as a transformer encoder. Image features are added to graph nodes by RoI pooling on node boxes with bilinear interpolation. Another work LayoutReader [32] uses a transformer based architecture on spatial-text features instead of spatial-image features to predict reading order sequence on words. The text features enable it to use the power
Figure 2: Two major patterns of reading order. (a) Column-wise order, most common in printed media like newspapers and magazines. (b) Row-wise order, usually in receipts, forms and tabular text blocks.
ful LayoutLM [34] model, but also make it less generalizable. These models are capable of predicting reading order within complex layout structures. However, there are scalability issues in two aspects:
* Run time scales quadratically with input size. Whether in the graph convolutional encoder with full connections or the sequence pointer decoder, most of the components have \(O(n^{2})\) time complexity, and may become too slow for applications with dense text.
* Accuracy scales inversely with input size. The fully-connected self-attention mechanism in the encoder takes all the text entities to calculate a global attention map, which introduces noise to the reading order signals that should be decidable from local layout structures. The sequence decoder uses softmax probabilities to determine the output index for each step, where the output range increases with input size, and so does the chance of errors. Figure 10 illustrates this limitation from our experiments.
To summarize briefly, there are multiple effective ways to order OCR text by rule based or machine learning based methods, and in both categories there is room for improvement in generalizability and scalability.
### Spatial, Image Features and Multi-Modality
Multi-modal transformer models have become mainstream for document or image understanding tasks. Related work includes LayoutLM [34, 33, 15, 13], DocFormer [3], SelfDoc [22], UDoc [12], StrucText [23], TILT [28], LiLT [30], FormNet [18], PaLI [7], etc.
Document image understanding starts with an OCR engine that provides text content as the main input for the language model. Alongside, the text bounding boxes associated with the words and lines provide important spatial features (sometimes called layout features or geometric features). Additionally, since not all visual signals are captured by the OCR engine, an image component in the model can help cover the extra contextual information from the input. Thus, a model aiming for the best results should take all three available modalities.
For image features, most previous studies use RoI pooling [8] by the text bounding boxes from OCR, and the pooled features are attached to the corresponding text entity. It is effective for capturing text styles or colors, but less so for visual cues outside those bounding boxes, such as the curly separation lines in Figure 3. While it is possible to use an image backbone with large receptive fields, like ResNet50 used in the UDoc model or U-Net used in the TILT model, it is not an ideal solution for two reasons:
* In sparse documents, useful visual cues can be far from any text on the page.
* Large receptive fields bring in extra noise from regions irrelevant to the features we need.
Thus, it will be more effective to have image RoI boxes that cover pairs of text bounding boxes. A sparse graph like \(\beta\)-skeleton used in [31] can provide the node pairs for such RoI pooling without significantly increasing the model's memory footprint and computational cost.
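As a small illustration of this idea, the sketch below derives one RoI box per graph edge as the minimum containing box of the two connected line boxes; it assumes axis-aligned boxes in \((x_{min},y_{min},x_{max},y_{max})\) form, and the function names are illustrative rather than part of [31].

```
# Sketch: one RoI box per beta-skeleton edge, taken as the minimum
# containing box of the two text-line boxes the edge connects.
def edge_roi_box(box_a, box_b):
    """Minimum axis-aligned box containing both line boxes of an edge."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    return (min(ax0, bx0), min(ay0, by0), max(ax1, bx1), max(ay1, by1))

def edge_roi_boxes(line_boxes, edges):
    """One RoI box per graph edge (i, j) over the line boxes."""
    return [edge_roi_box(line_boxes[i], line_boxes[j]) for i, j in edges]
```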
## 3 Proposed Method
Based on previous studies, we design a lightweight machine learning based approach with a model that is small in size, fast to run, and easy to generalize in uncontrolled conditions.
### Strong Patterns of Reading Order
From a set of real-world images annotated with reading order, we have an observation that matches very well with the bidimensional document encoding rules in [1] -- column-wise text usually has a zigzag pattern of Figure 2 (a), and row-wise text has a similar but transposed zigzag like Figure 2 (b). Some images may contain both types of text, which makes the pattern more complex. But once the column-wise/row-wise type of a text region is decided, the reading order in this region mostly follows the pattern and can be determined with a topological sort according to the bidimensional rules. Figure 7 (a) shows an example of an annotated reading order sequence.
Based on this observation, learning text reading order becomes an image segmentation problem, as opposed to learning arbitrary global sequences of text entities. Instead of predicting the next entity in the entire image, we do a binary classification for each text entity on whether it's in a column-wise or row-wise pattern. Moreover, the pattern classification for a text line can be decided by local layout structures, and global attention maps are therefore unnecessary.
### Model Architecture
We use a graph convolutional network (GCN) with a sparse graph construction because of the three major advantages listed here:
* GCN models are equivariant to input order permutations. It is natural to assume that a model deciding reading order should not depend on the order of its input.
* With a sparse graph like \(\beta\)-skeleton, GCN computation scales linearly with input size.
* Graph edges constructed from text boxes can provide edge bounding boxes, which are better for image feature RoI pooling (Figure 3, Table 2).
As illustrated in Figure 4, we use an MPNN [11] variant of GCN as the main model backbone, and a \(\beta\)-skeleton graph [17] constructed with text line boxes as nodes. Similar configurations have been applied to other layout problems [19, 31, 25, 18], and graph construction details are available in [31]. The main GCN input is from the spatial features of text line bounding boxes as node features, including \(x\), \(y\) coordinate values of the box corners, and the coordinate values multiplied by rotation angle coefficients \(\cos\alpha\), \(\sin\alpha\). The spatial features go through \(T\) steps of graph convolution layers, each containing a node-to-edge "message passing" layer and edge-to-node aggregation layer with attention weighted pooling.
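As an illustration of this backbone, the following numpy sketch performs one such graph convolution step: a node-to-edge message passing layer followed by edge-to-node aggregation with attention-weighted pooling. The single attention head, the ReLU activations and the explicit weight shapes are simplifying assumptions of the sketch, not the exact TF-GNN configuration.

```
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def message_passing_step(h, edges, e_feat, W_msg, w_att, W_upd):
    """One graph convolution step.
    h: (N, d) node features; edges: list of directed (src, dst) pairs;
    e_feat: (E, d_e) edge features (e.g. RoI-pooled image features);
    W_msg: (d_m, 2*d + d_e); w_att: (d_m,); W_upd: (d, d + d_m)."""
    n_nodes, _ = h.shape
    msgs, logits, dst = [], [], []
    for k, (i, j) in enumerate(edges):
        inp = np.concatenate([h[i], h[j], e_feat[k]])   # node-to-edge message
        m = relu(W_msg @ inp)
        msgs.append(m)
        logits.append(float(w_att @ m))                 # attention logit per edge
        dst.append(j)
    h_new = h.copy()
    for j in range(n_nodes):                            # edge-to-node aggregation
        idx = [k for k, t in enumerate(dst) if t == j]
        if not idx:
            continue
        a = np.exp(np.array([logits[k] for k in idx]))
        a = a / a.sum()                                 # attention-weighted pooling
        agg = np.sum([w * msgs[k] for w, k in zip(a, idx)], axis=0)
        h_new[j] = relu(W_upd @ np.concatenate([h[j], agg]))
    return h_new
```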
Besides the main input from nodes, we add a side input of edge features from edge box RoI pooling on an image feature map to help capture potential visual cues surrounding text boxes. We use MobileNetV3-Small [14] as the image backbone for its efficiency. Note that the purpose of this image backbone is not for a major task like object detection, but to look for auxiliary features like separation lines and color changes, so a small backbone is capable enough for our task. For the same reason, we reduce the MobileNetV3 input image size to 512\(\times\)512 to speed up training and inference. The details of the image processing are illustrated in Figure 5. In most cases, the text content is no longer recognizable after such downsizing, but the auxiliary image features can be well preserved. We also make sure that the entire layout is contained in a circle of diameter 512 within the processed image, which enables random rotations during model training -- a key augmentation for our model to work in all conditions.

Figure 4: Overview of the reading order multi-classifier model. Node classification predicts the reading order patterns, and edge classification predicts paragraph clustering.

Figure 3: A cropped example of a \(\beta\)-skeleton graph [31] constructed from text line boxes. Graph node boxes are shown in green and edge lines in cyan. The three orange colored text lines demonstrate how image features can help -- the 2nd and 3rd boxes are closer in distance, so spatial features may indicate they are in the same section, but the curly separation line between them indicates otherwise. The yellow box at the bottom is the minimum containing box of the two line boxes inside, where the RoI pooling can cover image features between these lines.
Language features are not included in order to keep the model minimal in size and independent of domain knowledge. Also, our annotated reading order data is limited to English only, upon which we try to train a universal model.
The GCN is a multi-task model that outputs both node and edge predictions. At node level, it predicts the reading order pattern on each line box (column-wise or row-wise). These predictions are essentially a segmentation for text regions where the lines can be sorted accordingly.
At edge level, the model predicts whether the two lines connected by an edge belong to the same paragraph. Thus, it works like the edge clustering models in [31, 25], and we can improve the final reading order by grouping lines together within each paragraph. The reading order estimation by the grouping usually does not affect column-wise order among text lines, but can be critical in row-wise regions such as tables or forms with multi-line cells, e.g. Figure 9 (d).
A fully convolutional network could be considered for similar segmentation tasks like [26, 16] directly on the input image. However, we have observed that such models are less effective for certain types of text content -- e.g. in Figure 2 (b), similar lines in the left column are grouped into a large paragraph, disrupting the row-wise reading order.
Figure 5: Image processing for the MobileNetV3 input. The inner yellow box is the minimum containing box of all the text lines in the image. If its diagonal \(d\) is larger than 512, we scale down the image by \(\frac{512}{d}\) so that all the text bounding boxes are contained in the white circle of diameter 512, and then we crop (maybe also pad) around this circle to get the final processed image. This process ensures that both the image and the layout can be randomly rotated during training without any line box moved out of boundary.
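A minimal sketch of this preprocessing, assuming a PIL image and axis-aligned line boxes in pixel coordinates, is shown below; the padding color and resampling filter are not specified in the text and are left at library defaults.

```
from PIL import Image

TARGET = 512

def preprocess_for_image_backbone(image, line_boxes):
    """Scale and crop/pad so that the whole layout fits inside the circle of
    diameter TARGET inscribed in a TARGET x TARGET canvas (cf. Figure 5)."""
    x0 = min(b[0] for b in line_boxes); y0 = min(b[1] for b in line_boxes)
    x1 = max(b[2] for b in line_boxes); y1 = max(b[3] for b in line_boxes)
    d = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5       # diagonal of the text region
    scale = TARGET / d if d > TARGET else 1.0
    if scale != 1.0:
        image = image.resize((max(1, int(image.width * scale)),
                              max(1, int(image.height * scale))))
        x0, y0, x1, y1 = (v * scale for v in (x0, y0, x1, y1))
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0          # center of the text region
    canvas = Image.new(image.mode, (TARGET, TARGET))
    canvas.paste(image, (int(TARGET / 2 - cx), int(TARGET / 2 - cy)))
    return canvas, scale
```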
### Recovering Reading Order from Model Predictions
With the \(\beta\)-skeleton graph that provides local connections among dense text boxes, the GCN model predicts on _local_ properties of the text, which can be aggregated to give us a _global_ reading order. To handle mixed column-wise and row-wise predictions as well as potential text rotations and distortions in the input image, we extend the rule based sorting in [1, 5] and propose a hierarchical cluster-and-sort algorithm to recover the global reading order from line-level pattern predictions and clustered paragraphs. The following Algorithm 1 generates a set of clusters, where each cluster \(c_{i}\) contains a non-empty set of paragraphs and possibly a set of child clusters. Each cluster is also assigned a reading order pattern \(R(c_{i})\in\{\mathit{col},\mathit{row}\}\), with _col_ for column-wise and _row_ for row-wise.
Row-wise text often involves sparse tables with components not directly connected by \(\beta\)-skeleton edges, so hop edges like in [25] can be helpful in step 4 of Algorithm 1. More details can be added, e.g. setting an edge length threshold in step 3 to avoid merging distant clusters.
```
1. Cluster lines into paragraphs \(p_{1},...,p_{n}\) from edge predictions.
2. Each paragraph is initialized as a cluster, \(c_{i}=\{p_{i}\}\). Reading order pattern \(R(c_{i})\) is the majority vote from the paragraph's line predictions.
3. For each edge \((i,j)\in G\), find cluster \(c_{a}\) containing line \(i\) and \(c_{b}\) containing line \(j\); if \(R(c_{a})=R(c_{b})=col\), merge \(c_{a}\) and \(c_{b}\) into a bigger column-wise cluster.
4. For each edge \((i,j)\in G\) or hop edge \((i,j)\) (\(\exists k\) that \((i,k)\in G\) and \((k,j)\in G\)), find cluster \(c_{a}\) containing line \(i\) and \(c_{b}\) containing line \(j\); if \(R(c_{a})=R(c_{b})=\mathit{row}\), merge \(c_{a}\) and \(c_{b}\) into a bigger row-wise cluster.
5. Calculate the containing box for each cluster. The rotation angle of the box is the circular mean angle of all the paragraphs in the cluster.
6. Sort the clusters by ascending area of their containing boxes.
7. For each cluster \(c_{i}\), if its containing box \(B(c_{i})\) overlaps with \(B(c_{j})\) by area greater than \(T\times Area(B(c_{i}))\), set \(c_{i}\) as a child cluster of \(c_{j}\).
8. Create a top level cluster with all the remaining clusters as its children.
```
**Algorithm 1**Hierarchical Clustering
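For illustration, a condensed Python sketch of Algorithm 1 is given below. It assumes axis-aligned paragraph boxes and omits the rotation handling, the hop edges and the edge-length threshold discussed around the algorithm; the data structures and helper names are purely illustrative.

```
from collections import Counter

def majority_pattern(line_patterns):
    """Step 2: 'col' or 'row' by majority vote over a paragraph's line predictions."""
    return Counter(line_patterns).most_common(1)[0][0]

def find(parent, a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def containing_box(boxes):
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))

def area(b):
    return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

def overlap_area(a, b):
    return area((max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3])))

def hierarchical_clusters(paragraphs, patterns, edges, line_to_par, T=0.9):
    """paragraphs: list of boxes; patterns: 'col'/'row' per paragraph;
    edges: line-level graph edges; line_to_par: line index -> paragraph index."""
    parent = list(range(len(paragraphs)))
    for i, j in edges:                        # Steps 3-4: merge same-pattern clusters
        a, b = line_to_par[i], line_to_par[j]
        if patterns[find(parent, a)] == patterns[find(parent, b)]:
            union(parent, a, b)
    groups = {}
    for p in range(len(paragraphs)):
        groups.setdefault(find(parent, p), []).append(p)
    clusters = [{"paragraphs": g, "pattern": patterns[r], "children": [],
                 "box": containing_box([paragraphs[p] for p in g])}
                for r, g in groups.items()]
    clusters.sort(key=lambda c: area(c["box"]))          # Step 6
    top = []
    for i, c in enumerate(clusters):                     # Step 7: nest overlapping clusters
        holder = next((d for d in clusters[i + 1:]
                       if overlap_area(c["box"], d["box"]) > T * area(c["box"])), None)
        (holder["children"] if holder else top).append(c)
    return {"paragraphs": [], "pattern": "col", "children": top}   # Step 8
```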
Once the regions of reading order patterns are decided by the hierarchical clusters, we can use topological sort within each cluster as in Algorithm 2.
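As an illustration of such a within-cluster sort, the sketch below applies a Kahn-style topological sort with simplified bidimensional rules (x/y interval overlaps and box centers). The exact partial-order rules of Algorithm 2 and of [1, 5] are richer, so the comparisons used here are assumptions of the sketch.

```
from collections import deque

def precedes(a, b, pattern):
    """True if box a should be read before box b (axis-aligned boxes)."""
    if pattern == "row":                       # transpose the axes for row-wise regions
        a = (a[1], a[0], a[3], a[2])
        b = (b[1], b[0], b[3], b[2])
    x_overlap = min(a[2], b[2]) - max(a[0], b[0]) > 0
    if x_overlap:
        return (a[1] + a[3]) < (b[1] + b[3])   # a lies above b (compare center y)
    return a[2] <= b[0]                        # a lies entirely to the left of b

def topological_sort(boxes, pattern):
    n = len(boxes)
    succ = [[] for _ in range(n)]
    indeg = [0] * n
    for i in range(n):
        for j in range(n):
            if i != j and precedes(boxes[i], boxes[j], pattern):
                succ[i].append(j)
                indeg[j] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        i = queue.popleft()
        order.append(i)
        for j in succ[i]:
            indeg[j] -= 1
            if indeg[j] == 0:
                queue.append(j)
    return order if len(order) == n else list(range(n))   # fall back on cycles
```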
With all the clusters sorted, an ordered traversal of the cluster hierarchy can give us the final reading order among all the paragraphs. Figure 6 shows the reading order on a packaging box at different camera angles. Note that the algorithms are not sensitive to bounding box angles, and the model is trained with randomly augmented data, so the rotation has minimal effect on the final result. It can even handle vertical text lines in Chinese/Japanese with the vertical lines regarded as rotated horizontal lines.
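A sketch of this traversal, reusing the cluster structure and the within-cluster sort from the sketches above, is given below; ordering a cluster's own paragraphs before its child clusters is an assumption of the sketch.

```
def traverse(cluster, paragraph_boxes, sort_fn, out=None):
    """Append paragraph indices to `out` in final reading order."""
    if out is None:
        out = []
    # Order this cluster's own paragraphs by its reading order pattern.
    boxes = [paragraph_boxes[p] for p in cluster["paragraphs"]]
    for k in sort_fn(boxes, cluster["pattern"]):
        out.append(cluster["paragraphs"][k])
    # Then visit child clusters, themselves ordered column-wise by their boxes.
    child_boxes = [c["box"] for c in cluster["children"]]
    for k in sort_fn(child_boxes, "col"):
        traverse(cluster["children"][k], paragraph_boxes, sort_fn, out)
    return out
```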
### Data Labeling
We prepared a dataset with human annotated layout data, including paragraphs as polygons and reading order groups where each group is an ordered sequence of paragraphs. Figure 7 (a) shows a set of paragraphs, where the reading order starts with the green paragraph and follows the jagged line.
Figure 6: Reading order example at different angles. Paragraphs with column-wise pattern predictions are shown in yellow, row-wise in pink. The dark blue line shows the overall reading order among all paragraphs.
```
Input: A sequence of ground truth paragraphs \(p_{1},p_{2},p_{3},\cdots,p_{n}\) represented as rectangular boxes.
1. Between each consecutive pair of paragraphs \((p_{i},p_{i+1})\), we categorize their geometrical relation \(R_{i,i+1}\) as one of \(\{\textit{vertical},\textit{horizontal},\textit{unknown}\}\).
   (a) Calculate \(\alpha\), the circular mean angle of the two boxes' rotation angles.
   (b) Rotate the boxes of \(p_{i}\) and \(p_{i+1}\) around (0, 0) by \(-\alpha\), denoted as \(b_{i}\) and \(b_{i+1}\).
   (c) Axis aligned box \(c\) is the minimum containing box of both \(b_{i}\) and \(b_{i+1}\).
   (d) if \(y_{\textit{overlap}}(b_{i},b_{i+1})<0.1\cdot\textit{height}(c)\) and \(y_{\textit{center}}(b_{i})<y_{\textit{center}}(b_{i+1})\): \(R_{i,i+1}=\textit{vertical}\)
   (e) else if \(x_{\textit{overlap}}(b_{i},b_{i+1})<0.1\cdot\textit{width}(c)\) and \(x_{\textit{center}}(b_{i})<x_{\textit{center}}(b_{i+1})\):
       if \(c\) does not cover paragraphs other than \(p_{i}\), \(p_{i+1}\): \(R_{i,i+1}=\textit{horizontal}\)  # mostly tabular structures
       else: \(R_{i,i+1}=\textit{vertical}\)  # mostly multi-column text
   (f) In other conditions, \(R_{i,i+1}=\textit{unknown}\)
2. Decide the reading order pattern for paragraph \(p_{i}\) from \(R_{i-1,i}\) and \(R_{i,i+1}\).
   (a) (\(\textit{unknown}\), \(\textit{unknown}\)) \(\rightarrow\) \(\textit{unknown}\)
   (b) In case of one unknown, the other one decides the pattern: \(\textit{vertical}\rightarrow\) column-wise, \(\textit{horizontal}\rightarrow\) row-wise.
   (c) If neither is unknown, \((\textit{vertical},\textit{vertical})\rightarrow\) column-wise, otherwise it is row-wise.
```
**Algorithm 3**Pattern Labeling from Annotated Reading Order
Figure 7: Labeling reading order patterns from annotations. (a) Ground truth from human annotated paragraphs and reading order. (b) Reading order pattern inferred from the annotated sequence — column-wise indicated by a vertical/purple line in each paragraph and row-wise by a horizontal/green line.
While the edge clustering labels are straightforward from the paragraph polygons, the reading order pattern labeling is less trivial because we need to derive binary labels from ground truths of paragraph ordering. We decide the pattern of a paragraph by comparing its position with its predecessor and successor. Figure 7 (b) shows an example, and detailed logic is elaborated in Algorithm 3.
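For illustration, the relation labeling of Algorithm 3 can be sketched as follows for axis-aligned boxes; the rotation by the circular mean angle is omitted here, and interpreting 'cover' as box containment in the check on \(c\) is an assumption of the sketch.

```
def _overlap(lo1, hi1, lo2, hi2):
    return max(0.0, min(hi1, hi2) - max(lo1, lo2))

def _covers_other(c, boxes, skip):
    # Assumption: "covers" means another paragraph box is contained in c.
    return any(k not in skip and
               b[0] >= c[0] and b[1] >= c[1] and b[2] <= c[2] and b[3] <= c[3]
               for k, b in enumerate(boxes))

def relation(boxes, i):
    """Geometrical relation R_{i,i+1} between consecutive paragraphs."""
    a, b = boxes[i], boxes[i + 1]
    c = (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))
    if (_overlap(a[1], a[3], b[1], b[3]) < 0.1 * (c[3] - c[1])
            and (a[1] + a[3]) < (b[1] + b[3])):
        return "vertical"
    if (_overlap(a[0], a[2], b[0], b[2]) < 0.1 * (c[2] - c[0])
            and (a[0] + a[2]) < (b[0] + b[2])):
        return "horizontal" if not _covers_other(c, boxes, {i, i + 1}) else "vertical"
    return "unknown"

def paragraph_pattern(prev_rel, next_rel):
    """Step 2: reading order pattern from the two neighboring relations."""
    rels = {prev_rel, next_rel}
    if rels == {"unknown"}:
        return "unknown"
    if "unknown" in rels:
        other = (rels - {"unknown"}).pop()
        return "col" if other == "vertical" else "row"
    return "col" if prev_rel == next_rel == "vertical" else "row"
```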
### Limitations
The node-edge classification model can produce reasonable reading order in most cases, but may fail for complex layouts with multiple tabular sections placed closely, like the cross section errors in Figure 11 (a). The root cause is the lack of higher level layout structure parsing with the two classification tasks. Data annotation at section level is generally hard because there is no universal agreement on the exact definition of sections among text. Figure 11 (b) shows the result with extra section level clustering trained on a domain specific dataset. There is significant improvement, yet cross domain generalization is not guaranteed, and we can still see imperfections in the multi-section reading order due to section prediction errors.
Another limitation is that our model is not a reliable source for parsing table structures like [24]. Figure 8 shows the reading order result of the image in Figure 1 (a). Note that in the sorting algorithm, we rotate all the bounding boxes to zero out their mean angle. But when the boxes are at different angles due to distortions, there will still be slanted line boxes and misaligned table rows after all the rotations, so the topological sort on the axis-aligned containing boxes cannot guarantee the right order. In the presence of tables, a separate model with structure predictions will likely perform better.
## 4 Experiments
We experiment with the GCN model with predictions on reading order pattern and paragraph clustering, together with the cluster-and-sort algorithms.
Figure 8: Cluster-and-sort result on the cropped menu from Fig. 1 (a). Although the model correctly predicts the row-wise pattern, reading order is still incorrect due to the perspective distortion and the unusually large spacing between the two columns.
### Datasets and Evaluation Metrics
Various metrics have been used to evaluate reading order, such as Spearman's footrule distance, Kendall's Tau rank distance used in [29] and BLEU scores in [21]. These metrics can accurately measure order mismatches, but also require full length ground truth order for comparison.
We created an annotated layout dataset where reading order ground truths are partially annotated, i.e. some subsets of paragraphs form reading order groups with annotated order, and the order among groups is undefined. This makes it more flexible to match realistic user requirements and less suitable for full ranking metrics. So instead, we use a normalized Levenshtein distance [20] which measures the minimum number of word operations (insertions and deletions) needed to equalize two lists. For each reading order group, we take the ordered list of paragraphs and find all the OCR words \(W\) contained in these polygons. The word order within each paragraph is taken directly from OCR (mostly accurate for a single paragraph). Then we find the shortest subsequence of the serialized OCR output that contains all the words in \(W\), compute its Levenshtein distance to \(W\), and multiply it by the normalization factor \(1/|W|\).
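A sketch of this metric is given below. It interprets the shortest subsequence as the shortest contiguous span of the serialized OCR word sequence that contains all ground truth words (an assumption of the sketch) and uses an insertion/deletion-only edit distance.

```
from collections import Counter

def insert_delete_distance(a, b):
    """Edit distance with insertions and deletions only: len(a)+len(b)-2*LCS."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1]
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return len(a) + len(b) - 2 * dp[len(a)][len(b)]

def shortest_span_with_words(ocr_words, target):
    """Shortest contiguous span of ocr_words containing all words of target."""
    need, missing = Counter(target), len(target)
    best, left = None, 0
    for right, w in enumerate(ocr_words):
        need[w] -= 1
        if need[w] >= 0:
            missing -= 1
        while missing == 0:
            if best is None or right - left < best[1] - best[0]:
                best = (left, right)
            need[ocr_words[left]] += 1
            if need[ocr_words[left]] > 0:
                missing += 1
            left += 1
    return ocr_words[best[0]:best[1] + 1] if best else ocr_words

def normalized_levenshtein(ocr_words, gt_words):
    span = shortest_span_with_words(ocr_words, gt_words)
    return insert_delete_distance(span, gt_words) / max(1, len(gt_words))
```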
Besides our annotated set, we test the model with PubLayNet [35] because of its variety on layout components with different reading order patterns. Although there is no ground truth of reading order, we take "text" instances as paragraphs with column-wise pattern, and "table"/"figure" types as containers of text lines with row-wise pattern. Thus, we are able to train the same multi-task GCN model. The annotated set contains 25K text images in English for training and a few hundred test images for each of the available languages, and PubLayNet contains 340K training images and 12K validation images all in English.
### Model Setup
The model is built as shown in Figure 4, with the OCR engine from Google Cloud Vision API producing text lines and their spatial features. Edge image features are from a bi-linear interpolation on the MobileNetV3 output with \(16\times 3\) points each box and dropout rate 0.5. The TF-GNN [10] based GCN backbone uses 10 steps of weight-sharing graph convolutions, with node feature dimension 32 and message passing hidden dimension 128. Edge-to-node pooling uses a 4-head attention with 3 hidden layers of size 16 and dropout rate 0.5. The total number of parameters is 267K, including 144K from MobileNetV3-Small.

Figure 9: Reading order results from (a) PubLayNet [35], (b) PRIMA Contemporary dataset [2], (c) the ambiguous example from Figure 1 with a positive interpretation, and (d) our evaluation set.
We train the model for 10M steps with randomized augmentations including rotation and scaling, so the model can adapt to a full range of inputs. The OCR boxes are transformed together with the image in each training example, resulting in better robustness than previous approaches (Figure 6).
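For reference, the setup above can be collected into a plain configuration dictionary; the field names are illustrative, and items not stated in the text (optimizer, learning rate schedule) are deliberately omitted.

```
READING_ORDER_MODEL_CONFIG = {
    "image_backbone": "MobileNetV3-Small",
    "image_input_size": (512, 512),
    "edge_roi_points": (16, 3),          # bilinear interpolation points per edge box
    "roi_dropout": 0.5,
    "graph": "beta-skeleton over text line boxes",
    "gcn_steps": 10,                     # weight-sharing graph convolutions
    "node_feature_dim": 32,
    "message_hidden_dim": 128,
    "edge_to_node_pooling": {"heads": 4, "hidden_layers": [16, 16, 16], "dropout": 0.5},
    "tasks": ["reading_order_pattern", "paragraph_clustering"],
    "total_parameters": "267K (144K in MobileNetV3-Small)",
    "training_steps": 10_000_000,
    "augmentations": ["rotation", "scaling"],
}
```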
### Baselines
Most commercial OCR systems use a topological sort like in [1] with one of the two patterns. We use the column-wise pattern in the basic baseline as it produces better scores than row-wise in our evaluations, and is close to the default output order from the OCR engine we use.
In addition, we implement a GCN model that directly predicts edge directions on a fully connected graph, similar to the model in [21]. Figure 10 shows two examples with a comparison between this baseline and our approach, which supports the scalability discussion in subsection 2.1.
### Results
We train the multi-task model with PubLayNet and our paragraph reading order set, together with the menu photos labelled from human annotations. From Table 1, we can see the difference in difficulty between the two sets. Real-world images from our dataset have much larger variations in layout styles and image degradations, which make the same tasks much harder to learn.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Dataset & \multicolumn{3}{c}{Reading order pattern} & \multicolumn{3}{c}{Paragraph clustering} \\ & Precision & Recall & F1 & Precision & Recall & F1 \\ \hline PubLayNet & 0.998 & 0.995 & 0.997 & 0.994 & 0.996 & 0.995 \\ Annotated ordered paragraphs & 0.828 & 0.805 & 0.819 & 0.895 & 0.909 & 0.902 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Scores of the two classification tasks on PubLayNet and our labelled paragraph reading order dataset.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Boxes for image & \multicolumn{3}{c}{Reading order pattern} & \multicolumn{3}{c}{Paragraph clustering} \\ feature RoI pooling & Precision & Recall & F1 & Precision & Recall & F1 \\ \hline n/a & 0.800 & 0.803 & 0.802 & 0.887 & 0.895 & 0.891 \\ Node boxes & 0.819 & 0.781 & 0.800 & 0.870 & 0.903 & 0.886 \\ Edge boxes & 0.828 & 0.805 & 0.819 & 0.895 & 0.909 & 0.902 \\ \hline \hline \end{tabular}
\end{table}
Table 2: F1 scores from the image feature ablation test.
We also test the effectiveness of the edge box RoI pooling by an image feature ablation test, where the baseline is the model with all image features removed, compared against ones with node box RoI pooling and edge box RoI pooling. Table 2 shows that node box RoI does not help at all, even with a slight accuracy drop compared with the baseline. These results confirm our previous hypothesis that the image backbone mainly helps the model by discovering visual cues out of text bounding boxes, and edge boxes are much more effective for this purpose.
Finally, we measure the normalized Levenshtein distance for reading order produced by the GCN and the cluster-and-sort algorithm, and compare it against the two baseline methods in subsection 4.3. As in Table 3, our algorithm can greatly improve reading order quality across all Latin languages, even though the training data is only available in English. The model also works well for examples out of our datasets. Figure 9 includes images from various sources, demonstrating the effectiveness of our model with inputs ranging from digital/scanned documents to scene images.
Figure 10: Comparison between the fully-connected graph model and our approach on two receipt examples. The full graph predictions perform well on the sparse example, but fail on the dense one.
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline Language & Training & Test set & All-column-wise & Fully-connected & 2-task GCN \\ & set size & size & baseline & graph baseline & cluster-and-sort \\ \hline English & 25K & 261 & 0.146 & 0.126 & 0.098 \\ French & & 218 & 0.184 & 0.144 & 0.119 \\ Italian & & 189 & 0.172 & 0.145 & 0.122 \\ German & & 196 & 0.186 & 0.162 & 0.112 \\ Spanish & n/a & 200 & 0.183 & 0.103 & 0.097 \\ Russian & & 1003 & 0.202 & 0.159 & 0.148 \\ Hindi & & 990 & 0.221 & 0.181 & 0.152 \\ Thai & & 951 & 0.131 & 0.111 & 0.104 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Normalized Levenshtein distance (lower is better) on a multi-language reading order evaluation set. Training data is only available in English.
## 5 Conclusions and Future Work
We show that GCN is highly efficient at predicting reading order patterns and various layout segmentation tasks, which is further enhanced with a small image backbone providing edge RoI pooled signals. Our model is small in size and generalizes well enough to be deployable on any platform to improve OCR quality or downstream applications.
In addition, the GCN model has the potential to handle more than two tasks. We tried an extra edge prediction task trained with a dataset of menu photos with section level polygon annotations. Unlike general document or scene text images, menus like Figure 3 usually have clearly defined sections like main dishes, side dishes, drinks, etc. Therefore, the menu dataset has accurate and consistent section level ground truth for model training. The 3-task GCN model provides higher-level layout information to the clustering algorithm and helps produce Figure 11 (b), a major improvement on reading order. Still, there is domain specific knowledge on menu sections that does not always generalize well. And because most evaluation examples have relatively simple layouts, the 3-task model has not produced better results than the 2-task model in our experiments. Nevertheless, we think section level ground truth or higher-level layout structural information will be valuable for further reading order improvements. Future work will explore the possibilities of both data and modeling approaches for parsing layout structures.
Figure 11: A multi-section example. (a) Paragraphs with row-wise pattern are clustered into overly large regions, causing incorrect cross-section reading order. (b) With section level clusters (shown in orange) added into Algorithm 1, multi-table results can be improved.
#### Acknowledgements
The authors would like to thank Ashok C. Popat and Chen-Yu Lee for their valuable reviews and feedback.
| OCRエンジン出力におけるテキストの読み順は非常に重要であり、下流タスクに大きな影響を与えます。その難しさは、特定領域のレイアウト構造の多様性にあり、視点歪みなどの実世界の画像劣化によってさらに悪化します。私たちは、多様な入力を処理できる軽量でスケーラブル、かつ一般化可能な手法を提案します。それは、スパースなレイアウトグラフ上で動作するマルチモーダル・マルチタスクのグラフ畳み込みニューラルネットワーク(GCN)です。このモデルは、テキスト行とレイアウト領域構造の間の二次元的な関係を予測します。これに基づいて、後処理のクラスタリングとソートのアルゴリズムが、すべてのテキスト行の読み順を生成します。このモデルは言語に依存せず、不規則な条件下で撮影された画像を含む、多言語の多様な画像データセットを処理できます。 |
2303.05196 | Covering games using semi-open sets | In this paper, we prove the following Theorems 1. An extremally disconnected
space $X$ has the semi-Menger property if and only if One does not have a
winning strategy in the game $G_{fin}(sO,sO)$. 2. An extremally disconnected
space $X$ has the semi-Rothberger property if and only if One does not have a
winning strategy in the game $G_1(sO,sO)$. | Manoj Bhardwaj, Alexander V. Osipov | 2023-03-09T11:51:21 | http://arxiv.org/abs/2303.05196v1 | # Covering games using semi-open sets
###### Abstract
In this paper, we prove the following Theorems
1. An extremally disconnected space \(X\) has the semi-Menger property if and only if One does not have a winning strategy in the game \(G_{fin}(s\mathcal{O},s\mathcal{O})\).
2. An extremally disconnected space \(X\) has the semi-Rothberger property if and only if One does not have a winning strategy in the game \(G_{1}(s\mathcal{O},s\mathcal{O})\).
These results answer Problem 3.7 of [1] and Problem 3.9 of [2].
keywords: The Semi-Menger game, The Semi-Rothberger game, selection principles. MSC [2010]: 54D20, 54B20
## 1 Introduction
The study of topological properties via various changes is not a new idea in topological spaces. The study of selection principles in topology and their relations to game theory and Ramsey theory was started by Scheepers [4] (see also [8]). In the last two decades it has gained the enough importance to become one of the most active areas of set theoretic topology.
In 1924, Menger [3] (see also [7; 4]) introduced the Menger property in topological spaces and studied it. This property is stronger than Lindelofness and weaker than \(\sigma\)-compactness.
Topological games form a major tool in the study of topological properties and their relations to Ramsey theory, forcing, function spaces, and other related topics. At the heart of the theory of selection principles, covering properties are defined by the ability to diagonalize, in canonical ways, sequences of open covers. Each of these covering properties has an associated two-player game. Often the nonexistence of a winning strategy for the first player in the associated game is equivalent to the original property, and this forms a strong tool for establishing results concerning the original property. We present here conceptual proofs of these type of theorems.
This paper is organized as follows. In Section 2, the definitions of the terms used in this paper are provided. Section 3 deals with the study of the semi-Menger game and the semi-Rothberger game.
## 2 Preliminaries
Let \((X,\tau)\) or \(X\) be a topological space. We will denote by \(Cl(A)\) and \(Int(A)\) the closure of \(A\) and the interior of \(A\), for a subset \(A\) of \(X\), respectively. Throughout this paper, \(X\) stands for topological space and the cardinality of a set \(A\) is denoted by \(|A|\). Let \(\omega\) be the first infinite cardinal and \(\omega_{1}\) the first uncountable cardinal. The basic definitions are given.
Let \(\mathcal{A}\) and \(\mathcal{B}\) be collections of open covers of a topological space \(X\).
The symbol \(S_{1}(\mathcal{A},\mathcal{B})\) denotes the selection hypothesis that for each sequence \(<\mathcal{U}_{n}:n\in\omega>\) of elements of \(\mathcal{A}\) there exists a sequence \(<U_{n}:n\in\omega>\) such that for each \(n\), \(U_{n}\in\mathcal{U}_{n}\) and \(\{U_{n}:n\in\omega\}\in\mathcal{B}\)[4].
The symbol \(S_{fin}(\mathcal{A},\mathcal{B})\) denotes the selection hypothesis that for each sequence \(<\mathcal{U}_{n}:n\in\omega>\) of elements of \(\mathcal{A}\) there exists a sequence \(<\mathcal{V}_{n}:n\in\omega>\) such that for each \(n\), \(\mathcal{V}_{n}\) is a finite subset of \(\mathcal{U}_{n}\) and \(\bigcup_{n\in\omega}\mathcal{V}_{n}\) is an element of \(\mathcal{B}\)[4].
A subset \(A\) of a topological space \(X\) is said to be semi-open [5] if \(A\subseteq Cl(Int(A))\).
In this paper \(\mathcal{A}\) and \(\mathcal{B}\) will be the collections of the following open covers of a space \(X\):
\(\mathcal{O}\) : the collection of all open covers of \(X\),
\(s\mathcal{O}\) : the collection of all semi-open covers of \(X\).
**Definition 2.1**.: [3] A space \(X\) is said to have the _Menger property_ if \(X\) satisfies \(S_{fin}(\mathcal{O},\mathcal{O})\).
A space \(X\) is said to have the _semi-Rothberger property_ [1] if \(X\) satisfies \(S_{1}(s\mathcal{O},s\mathcal{O})\).
In [1], the authors asked the following problem as Problem 3.7.
**Problem 2.2**.: Can semi-Rothbergerness be characterized game-theoretically or Ramsey-theoretically?
**Definition 2.3**.: [2] A space \(X\) is said to have _semi-Menger property_ if for each sequence \(<\mathcal{U}_{n}:n\in\omega>\) of semi-open covers of \(X\) there is a sequence \(<\mathcal{V}_{n}:n\in\omega>\) such that for each \(n\), \(\mathcal{V}_{n}\) is a finite subset of \(\mathcal{U}_{n}\) and each \(x\in X\) belongs to \(\bigcup\mathcal{V}_{n}\) for some \(n\), i.e., \(X\) satisfies \(S_{fin}(s\mathcal{O},s\mathcal{O})\).
In [2], the authors asked the following problem as Problem 3.9.
**Problem 2.4**.: Can semi-Mengerness be characterized game-theoretically or Ramsey-theoretically?
## 3 The semi-Menger Game
The semi-Menger game \(G_{fin}(s\mathcal{O},s\mathcal{O})\) is a game for two players, Alice and Bob, with an inning for each natural number \(n\). In each inning, Alice picks a semi-open cover of the space and Bob selects finitely many members from this cover. Bob wins if the sets he selected throughout the game cover the space. If this is not the case, Alice wins.
If Alice does not have a winning strategy in the game \(G_{fin}(s\mathcal{O},s\mathcal{O})\), then \(S_{fin}(s\mathcal{O},s\mathcal{O})\) holds. The converse implication requires a deeper investigation. We prove it using a simplification that makes calculations easier, and an appropriate notion that goes through induction, thus eliminating the necessity to track the history of the game.
**Definition 3.1**.: A countable cover \(\mathcal{U}\) of a space \(X\) is a tail cover if the set of intersections of cofinite subsets of \(\mathcal{U}\) is an open cover of \(X\). Equivalently, a cover \(\{U_{1},U_{2},...\}\) is a tail cover if the family
\[\{\bigcap_{1\leq n\leq\infty}U_{n},\bigcap_{2\leq n\leq\infty}U_{n},...\}\]
of intersections of cofinal segments of the cover is an open cover.
A tail cover is said to be s-tail cover or tail semi cover if the elements of the cover are semi-open sets.
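For example, any countable increasing cover is a tail cover: if \(U_{1}\subseteq U_{2}\subseteq...\), then for every \(k\),
\[\bigcap_{k\leq n<\infty}U_{n}=U_{k},\]
so the family of intersections of cofinal segments is the cover itself. This simple case is exactly what is used for the cover \(\mathcal{V}_{1}\) in the proof of Theorem 3.3 below.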
Recall that a space \(X\) is semi-Lindelof if every semi-open cover has a countable subcover.
It is clear that each semi-Menger space is semi-Lindelof.
**Definition 3.2**.: ([9]) A Hausdorff space \(X\) is called a _Luzin space_ (_in the sense of Kunen_) if
(a) Every nowhere dense set in \(X\) is countable;
(b) \(X\) has at most countably many isolated points;
(c) \(X\) is uncountable.
By Corollary 2.5 in [10], if \(X\) is an uncountable Hausdorff space then \(X\) is semi-Lindelof if and only if \(X\) is a Luzin space in the sense of Kunen.
Let us note however that Kunen (Theorem 0.0. in [9]) has shown that under _Suslin's Hypothesis_ (**SH**) there are no Lusin spaces at all. K.Kunen proved that under \(\textbf{MA}(\aleph_{1},\aleph_{0}\)-centred) there is a Lusin space if and only if there is a Suslin line.
Since a Lusin space \(X\) is hereditarily Lindelof and Hausdorff, it has cardinality at most \(\mathfrak{c}=2^{\omega}\) (de Groot, [11]).
Thus, further, we consider only hereditarily Lindelof Hausdorff spaces and assume that **SH** does not hold.
A space \(X\) is an extremally disconnected space [6] if the closure of every open set is open. It is also known that in an extremally disconnected space \(X\), the collection of semi-open sets is a topology on \(X\).
**Theorem 3.3**.: _Let \(X\) be an extremally disconnected space satisfying \(S_{fin}(s\mathcal{O},s\mathcal{O})\). Then Alice does not have a winning strategy in the game \(G_{fin}(s\mathcal{O},s\mathcal{O})\)._
Proof.: Let \(\sigma\) be an arbitrary strategy for Alice in the game \(G_{fin}(s\mathcal{O},s\mathcal{O})\). If Bob covers the space after finitely many steps in a play, then we are done. Otherwise, Bob cannot cover the space after finitely many steps in any play; thus, we assume that, in no position, a finite selection suffices, together with the earlier selections, to cover the space. Since the space \(X\) satisfies \(S_{fin}(s\mathcal{O},s\mathcal{O})\), it is semi-Lindelof, so we may assume that every semi-open cover of \(X\) is countable. By restricting Bob's moves to countable subcovers of Alice's semi-open covers, we may assume that Alice's covers are countable. So we can enumerate these semi-open covers as \(\{U_{1},U_{2},...\}\) and assume \(U_{1}\subseteq U_{2}\subseteq...\), that is, Alice's semi-open covers are increasing and Bob selects a single set in each move from the increasing semi-open cover \(\{U_{1},U_{2},...\}\). Indeed, given a countable semi-open cover \(\{U_{1},U_{2},...\}\), we can restrict Bob's selections to sets of the form \(U_{1}\cup U_{2}\cup...\cup U_{n}\), for \(n\in\omega\). Since Bob's goal is just to cover the space, we may pretend that Bob is provided covers of the form \(\{U_{1},U_{1}\cup U_{2},U_{1}\cup U_{2}\cup U_{3},...\}\); that is, when Bob selects an
element \(U_{1}\cup...\cup U_{n}\), he replies to Alice with the legal move \(\{U_{1},U_{2},...,U_{n}\}\). Finally, we assume that for each reply \(\{U_{1},U_{2},...\}\)(with \(U_{1}\subseteq U_{2}\subseteq...\)) of Alice's strategy to a move \(U\), we have \(U=U_{1}\). Indeed, we can transform the given semi-open cover into the semi-open cover \(\{U,U\cup U_{1},U\cup U_{2},...\}\). If Bob chooses \(U\), we provide Alice with the answer \(U_{1}\), and if he chooses \(U\cup U_{n}\), we provide Alice with the answer \(U_{n}\). Since Bob has already chosen the set \(U\), its addition in the new strategy does not help covering more points. With these simplifications, Alice's strategy can be identified with a tree of semi-open sets, as follows:
Alice's initial move is a semi-open cover
\(\sigma(<>)=\mathcal{U}_{1}=\{U_{1},U_{2},...\}\);
Bob replies with \(U_{k_{1}}=U_{\sigma(1)}\);
then Alice's move is an increasing cover
\(\sigma(U_{\sigma(1)})=\mathcal{U}_{\sigma(1)}=\{U_{\sigma(1),1},U_{\sigma(1),2},...\}\);
Bob replies with \(U_{\sigma(1),k_{2}}=U_{\sigma(1),\sigma(2)}\);
then Alice's move is an increasing cover
\(\sigma(U_{\sigma(1),\sigma(2)})=\mathcal{U}_{\sigma(1),\sigma(2)}=\{U_{ \sigma(1),\sigma(2),1},U_{\sigma(1),\sigma(2),2},...\}\);
Bob replies with \(U_{\sigma(1),\sigma(2),k_{3}}=U_{\sigma(1),\sigma(2),\sigma(3)}\);
\(\cdot\)
\(\cdot\)
if Bob replies with \(U_{\sigma}\), for \(\sigma\in\mathbb{N}^{m}\),
then Alice's move is an increasing semi-open cover
\(\sigma(U_{\sigma(1),\sigma(2),...,\sigma(m)})=\mathcal{U}_{\sigma(1),\sigma(2 ),...,\sigma(m)}=\)
\(\{U_{\sigma(1),\sigma(2),...,\sigma(m),1},U_{\sigma(1),\sigma(2),...,\sigma(m ),2},...\}\);
We now show that \(\mathcal{V}_{n}=\bigcup_{\sigma\in\mathbb{N}^{m}}\mathcal{U}_{\sigma}\) is a tail semi-open cover of \(X\), by induction on \(n\). The semi-open cover \(\mathcal{V}_{1}=\mathcal{U}_{<>}\) is increasing, and thus the set of cofinite intersections is again \(\mathcal{V}_{1}\), a semi-open cover of \(X\), that is, an s-tail cover of \(X\). Let \(n\) be a natural number and assume the claim holds for \(n\), that is, \(\mathcal{V}_{n}\) is a tail semi-open cover of \(X\). As
\(\mathcal{V}_{n}=\bigcup_{\sigma\in\mathbb{N}^{m}}\mathcal{U}_{\sigma}\)
\(=\bigcup_{\sigma(i)\in\omega}\mathcal{U}_{\sigma(1),\sigma(2),...,\sigma(m)}\),
that is, countable union of countable sets since each \(\sigma(i)\) has countable infinite choices.
Since \(\mathcal{V}_{n}\) is countable, we enumerate
\[\mathcal{V}_{n}=\{V_{1},V_{2},...,V_{n},...\}\]
provides \(V_{n}\in\mathcal{G}_{n}\cap\mathcal{W}_{n}\) and \(\bigcap\mathcal{W}_{n}\subseteq V_{n}\). Then \(X=\bigcup_{n\in\omega}V_{n}\) and Bob wins. This completes the proof.
For the semi-Rothberger game, we need a result slightly stronger than the above theorem. For that we first prove that the semi-Menger property \(S_{fin}(s\mathcal{O},s\mathcal{O})\) is preserved by countable unions: given a countable union of semi-Menger spaces, and a sequence of semi-open covers, we can split the sequence of covers into infinitely many disjoint subsequences, and use each subsequence to cover one of the given semi-Menger spaces.
**Theorem 3.4**.: _The semi-Menger property \(S_{fin}(s\mathcal{O},s\mathcal{O})\) is preserved by countable unions._
Proof.: Let \(\{X_{k}:k\in\omega\}\) be a family of subspaces having semi-Menger property in a space \(X\) and \(\langle\mathcal{U}_{n}:n\in\omega\rangle\) be a sequence of semi-open covers of \(X\). For each \(k\in\omega\), consider the sequence \(\langle\mathcal{U}_{n}:n\geq k\rangle\). For each \(k\in\omega\), since \(X_{k}\) is semi-Menger, there is a sequence \(\langle\mathcal{V}_{n,k}:n\geq k\rangle\) such that for each \(n\geq k\), \(\mathcal{V}_{n,k}\) is a finite subset of \(\langle\mathcal{U}_{n}:n\geq k\rangle\) and \(\bigcup_{n\geq k}\mathcal{V}_{n,k}\supseteq X_{k}\). For each \(n\), let
\[\mathcal{V}_{n}=\bigcup\{\mathcal{V}_{n,j}:j\leq n\}.\]
Then each \(\mathcal{V}_{n}\) is a finite subset of \(\mathcal{U}_{n}\). Now, for each \(x\in X=\bigcup_{k}X_{k}\), there exists \(k\in\omega\) such that \(x\in X_{k}\). Since \(\bigcup_{n\geq k}\mathcal{V}_{n,k}\supseteq X_{k}\) and \(\mathcal{V}_{n,k}\subseteq\mathcal{V}_{n}\) for every \(n\geq k\), we get \(x\in\bigcup\mathcal{V}_{n}\) for some \(n\). This completes the proof.
**Theorem 3.5**.: _Let \(X\) be a space satisfying \(S_{fin}(s\mathcal{O},s\mathcal{O})\). For each strategy for Alice in the game \(G_{fin}(s\mathcal{O},s\mathcal{O})\), there is a play according to this strategy,_
\[(\mathcal{U}_{1},\mathcal{V}_{1},\mathcal{U}_{2},\mathcal{V}_{2},...)\text{,}\]
_such that for each point \(x\in X\) we have \(x\in\bigcup\mathcal{V}_{n}\) for infinitely many \(n\)._
Proof.: The product space \(X\times\mathbb{N}\), a countable union of semi-Menger spaces, satisfies \(S_{fin}(s\mathcal{O},s\mathcal{O})\). We define a strategy for Alice in the game \(G_{fin}(s\mathcal{O},s\mathcal{O})\), played on the space \(X\times\mathbb{N}\). Let \(\mathcal{U}\) be Alice's first move in the original game. Then, in the new game, her first move is
\[\mathcal{U}^{{}^{\prime}}=\{U\times\{n\}:U\in\mathcal{U},n\in\omega\}.\]
If Bob selects a finite set \(\mathcal{V}^{{}^{\prime}}\subseteq\mathcal{U}^{{}^{\prime}}\), we take the set
\[\mathcal{V}=\{U\in\mathcal{U}:\text{there is }n\text{ with }U\times\{n\}\in\mathcal{V}^{{}^{\prime}}\}\]
as a move in the original game. Then Alice replies with a cover \(\mathcal{V}\), and we continue in the same manner. By above Theorem, there is a play
\[(\mathcal{U}_{1}^{{}^{\prime}},\mathcal{V}_{1}^{{}^{\prime}},\mathcal{U}_{2}^{{} ^{\prime}},\mathcal{V}_{2}^{{}^{\prime}},...)\]
in the new game, with \(\bigcup_{n\in\omega}\mathcal{V}_{n}^{{}^{\prime}}\) a cover of \(X\times\mathbb{N}\). Consider the corresponding play in the original strategy,
\[(\mathcal{U}_{1},\mathcal{V}_{1},\mathcal{U}_{2},\mathcal{V}_{2},...).\]
Let \(x\in X\). There is a natural number \(n_{1}\) with \((x,1)\in\bigcup\mathcal{V}_{n_{1}}^{{}^{\prime}}\). Then \(x\in\bigcup\mathcal{V}_{n_{1}}\). The set
\[V=\{k\in\omega:\text{there is }U\text{ with }U\times\{k\}\in\bigcup_{i\leq n_{1}} \mathcal{V}_{i}^{{}^{\prime}}\}\]
is finite. Let \(m\) be a natural number greater than all elements of the set \(V\). There is a natural number \(n_{2}\) with \((x,m)\in\bigcup\mathcal{V}_{n_{2}}^{{}^{\prime}}\). Then \(x\in\bigcup\mathcal{V}_{n_{2}}\), and \(n_{1}<n_{2}\). Continuing in a similar manner, we see that \(x\in\bigcup\mathcal{V}_{n}\) for infinitely many \(n\).
## 4 The Semi-Rothberger Game
The definitions of semi-Rothberger property \(S_{1}(s\mathcal{O},s\mathcal{O})\) and the corresponding game \(G_{1}(s\mathcal{O},s\mathcal{O})\) are similar to those of \(S_{fin}(s\mathcal{O},s\mathcal{O})\) and \(G_{fin}(s\mathcal{O},s\mathcal{O})\), respectively, but here we select one element from each cover. Here too, if Alice does not have a winning strategy then the space satisfies \(S_{1}(s\mathcal{O},s\mathcal{O})\). The converse implication will be proved in the next results.
For that we need the following lemma.
**Lemma 4.1**.: _Let \(X\) be an extremally disconnected space satisfying \(S_{1}(s\mathcal{O},s\mathcal{O})\). Let \(\mathcal{V}_{1},\mathcal{V}_{2},...\) be nonempty finite families of semi-open sets such that, for each point \(x\in X\), we have \(x\in\bigcup\mathcal{V}_{n}\) for infinitely many \(n\). Then there are elements \(U_{1}\in\mathcal{V}_{1},U_{2}\in\mathcal{V}_{2},...\) such that the family \(\{U_{1},U_{2},...\}\) covers the space \(X\)._
Proof.: For each \(n\), let \(\mathcal{U}_{n}\) be the family of all intersections of \(n\) semi-open sets taken from distinct members of the sequence \(\mathcal{V}_{1},\mathcal{V}_{2},...\). Then \(\mathcal{U}_{n}\) is a semi-open cover of \(X\) for each \(n\). Since \(X\) satisfies \(S_{1}(s\mathcal{O},s\mathcal{O})\), there are semi-open sets \(V_{1}\in\mathcal{U}_{1},V_{2}\in\mathcal{U}_{2},...\) such that \(\{V_{1},V_{2},...\}\) covers the space \(X\). As \(V_{1}\in\mathcal{U}_{1}\), \(V_{1}\) is a member of \(\mathcal{V}_{n}\) for some \(n\). For \(V_{2}\in\mathcal{U}_{2}\), \(V_{2}\) is the intersection of two semi-open sets from distinct \(\mathcal{V}_{i}\). We can extend \(V_{2}\) to an element of some other family \(\mathcal{V}_{m}\) and so on. From this process we obtain a
selection of at most one element from each family \(\mathcal{V}_{i}\) that covers \(X\). So we can extend our selection to have an element from each family \(\mathcal{V}_{i}\).
For a natural number \(k\) and families of sets \(\mathcal{U}_{1},...,\mathcal{U}_{k}\), let
\[\mathcal{U}_{1}\wedge\mathcal{U}_{2}\wedge...\wedge\mathcal{U}_{k}=\{U_{1} \cap U_{2}\cap...\cap U_{k}:U_{1}\in\mathcal{U}_{1},...,U_{k}\in\mathcal{U}_{k}\}.\]
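For example, \(\{A_{1},A_{2}\}\wedge\{B_{1},B_{2}\}=\{A_{1}\cap B_{1},A_{1}\cap B_{2},A_{2}\cap B_{1},A_{2}\cap B_{2}\}\). Note that if each \(\mathcal{U}_{i}\) is a semi-open cover of an extremally disconnected space \(X\), then \(\mathcal{U}_{1}\wedge...\wedge\mathcal{U}_{k}\) is again a semi-open cover: every point lies in some member of each \(\mathcal{U}_{i}\), and finite intersections of semi-open sets are semi-open because the semi-open sets form a topology in such spaces.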
**Theorem 4.2**.: _Let \(X\) be an extremally disconnected space satisfying \(S_{1}(s\mathcal{O},s\mathcal{O})\). Then Alice does not have a winning strategy in the game \(G_{1}(s\mathcal{O},s\mathcal{O})\)._
Proof.: Fix an arbitrary strategy for Alice in the semi-Rothberger game \(G_{1}(s\mathcal{O},s\mathcal{O})\). Since \(S_{1}(s\mathcal{O},s\mathcal{O})\) spaces are semi-Lindelof, we may assume that each semi-open cover in the strategy is countable. Let \(\mathbb{N}^{<\infty}\) be the set of finite sequences of natural numbers. We index the semi-open covers in the strategy as
\[\mathcal{U}_{\sigma}=\{U_{\sigma,1},U_{\sigma,2},...\},\]
for \(\sigma\in\mathbb{N}^{<\infty}\), so that \(\mathcal{U}=\{U_{1},U_{2},...\}\) is Alice's first move, and for each finite sequence \(k_{1},k_{2},...,k_{n}\) of natural numbers, \(\mathcal{U}_{k_{1},k_{2},k_{3},...,k_{n}}\) is Alice's reply to the position
\[(\mathcal{U},U_{k_{1}},\mathcal{U}_{k_{1}},U_{k_{1},k_{2}},\mathcal{U}_{k_{1},k_{2}},...,U_{k_{1},k_{2},...,k_{n}}).\]
For finite sequences \(\tau,\sigma\in\mathbb{N}^{n}\), we write \(\tau\leq\sigma\) if \(\tau(i)\leq\sigma(i)\) for all \(i=1,2,...,n\). We define a strategy for Alice in the semi-Menger game \(G_{fin}(s\mathcal{O},s\mathcal{O})\). Alice's first move is \(\mathcal{U}\), her first move in the original strategy. Assume that Bob selects a finite subset \(\mathcal{F}\) of \(\mathcal{U}\). Let \(m_{1}\) be the minimal natural number with \(\mathcal{F}\subseteq\{U_{1},U_{2},...,U_{m_{1}}\}\). Then, in the semi-Menger game, Alice's response is the joint refinement \(\mathcal{U}_{1}\wedge\mathcal{U}_{2}\wedge...\wedge\mathcal{U}_{m_{1}}\). Assume that Bob chooses a finite subset \(\mathcal{F}\) of this refinement. Let \(m_{2}\) be the minimal natural number such that \(\mathcal{F}\) refines all sets \(\{U_{i,1},U_{i,2},...,U_{i,m_{2}}\}\), for \(i=1,2,...,m_{1}\). Then Alice's reply is the joint refinement \(\bigwedge_{\tau\leq(m_{1},m_{2})}\mathcal{U}_{\tau}\). In general, Alice provides a semi-open cover of the form \(\bigwedge_{\tau\leq\sigma}\mathcal{U}_{\tau}\) for \(\sigma\in\mathbb{N}^{<\infty}\), Bob selects a finite family refining all families \(\{U_{\tau,1},U_{\tau,2},...,U_{\tau,m}\}\) for \(\tau\leq\sigma\), with the minimal natural number \(m\), and Alice replies \(\bigwedge_{\tau\leq(\sigma,m)}\mathcal{U}_{\tau}\).
Now by Theorem 3.5, there is a play
\[(\mathcal{U},\mathcal{F}_{1},\bigwedge_{k_{1}\leq m_{1}}\mathcal{U}_{k_{1}}, \mathcal{F}_{2},\bigwedge_{(k_{1},k_{2})\leq(m_{1},m_{2})}\mathcal{U}_{k_{1}, k_{2}},...),\]
according to the new strategy, such that every point of the space is covered infinitely often in the sequence \(\bigcup\mathcal{F}_{1},\bigcup\mathcal{F}_{2},...\). By Lemma 4.1, we can pick one element from each set \(\mathcal{F}_{n}\) and cover the space. There is \(k_{1}\leq m_{1}\) such that
the first picked element is a subset of \(U_{k_{1}}\). There is \(k_{2}\leq m_{2}\) such that the second picked element is a subset of \(U_{k_{1},k_{2}}\), and so on. Then the play
\[(\mathcal{U},U_{k_{1}},\mathcal{U}_{k_{1}},U_{k_{1},k_{2}},...)\]
is in accordance with Alice's strategy in the semi-Rothberger game, and is won by Bob.
| 本論文では、以下の定理を証明する。1. extremally disconnected な空間 $X$ が semi-Menger 性質を持つことと、プレイヤー One がゲーム $G_{fin}(sO,sO)$ において必勝戦略を持たないこととは同値である。2. extremally disconnected な空間 $X$ が semi-Rothberger 性質を持つことと、プレイヤー One がゲーム $G_1(sO,sO)$ において必勝戦略を持たないこととは同値である。 |
2306.00630 | Class Anchor Margin Loss for Content-Based Image Retrieval | The performance of neural networks in content-based image retrieval (CBIR) is
highly influenced by the chosen loss (objective) function. The majority of
objective functions for neural models can be divided into metric learning and
statistical learning. Metric learning approaches require a pair mining strategy
that often lacks efficiency, while statistical learning approaches are not
generating highly compact features due to their indirect feature optimization.
To this end, we propose a novel repeller-attractor loss that falls in the
metric learning paradigm, yet directly optimizes for the L2 metric without the
need of generating pairs. Our loss is formed of three components. One leading
objective ensures that the learned features are attracted to each designated
learnable class anchor. The second loss component regulates the anchors and
forces them to be separable by a margin, while the third objective ensures that
the anchors do not collapse to zero. Furthermore, we develop a more efficient
two-stage retrieval system by harnessing the learned class anchors during the
first stage of the retrieval process, eliminating the need of comparing the
query with every image in the database. We establish a set of four datasets
(CIFAR-100, Food-101, SVHN, and Tiny ImageNet) and evaluate the proposed
objective in the context of few-shot and full-set training on the CBIR task, by
using both convolutional and transformer architectures. Compared to existing
objective functions, our empirical evidence shows that the proposed objective
is generating superior and more consistent results. | Alexandru Ghita, Radu Tudor Ionescu | 2023-06-01T12:53:10 | http://arxiv.org/abs/2306.00630v2 | # Class Anchor Margin Loss for Content-Based Image Retrieval
###### Abstract.
The performance of neural networks in content-based image retrieval (CBIR) is highly influenced by the chosen loss (objective) function. The majority of objective functions for neural models can be divided into metric learning and statistical learning. Metric learning approaches require a pair mining strategy that often lacks efficiency, while statistical learning approaches are not generating highly compact features due to their indirect feature optimization. To this end, we propose a novel repeller-attractor loss that falls in the metric learning paradigm, yet directly optimizes for the \(L_{2}\) metric without the need of generating pairs. Our loss is formed of three components. One leading objective ensures that the learned features are attracted to each designated learnable class anchor. The second loss component regulates the anchors and forces them to be separable by a margin, while the third objective ensures that the anchors do not collapse to zero. Furthermore, we develop a more efficient two-stage retrieval system by harnessing the learned class anchors during the first stage of the retrieval process, eliminating the need of comparing the query with every image in the database. We establish a set of four datasets (CIFAR-100, Food-101, SVHN, and Tiny ImageNet) and evaluate the proposed objective in the context of few-shot and full-set training on the CBIR task, by using both convolutional and transformer architectures. Compared to existing objective functions, our empirical evidence shows that the proposed objective is generating superior and more consistent results.
Contrastive learning, contrastive loss, class anchors, content-based image retrieval, object retrieval
with image embeddings assigned to the nearest class anchors. By harnessing the learned class anchors, our retrieval process becomes more efficient and effective than the brute-force search.
We carry out experiments on four datasets (CIFAR-100 (CIFAR-100, 2009), Food-101 (Friedman et al., 2010), SVHN (Krizhevsky et al., 2012), and Tiny ImageNet (Vaswani et al., 2017)) to compare the proposed loss function against representative statistical and deep metric learning objectives. We evaluate the objectives in the context of few-shot and full-set training on the CBIR task, by using both convolutional and transformer architectures, such as residual networks (ResNets) (Krizhevsky et al., 2012) and shifted windows (Swin) transformers (Krizhevsky et al., 2012). Moreover, we test the proposed losses on various embedding space dimensions, ranging from 32 to 2048. Compared to existing loss functions, our empirical results show that the proposed objective is generating higher and more consistent performance levels across the considered evaluation scenarios. Furthermore, we conduct an ablation study to demonstrate the influence of each loss component on the overall performance.
In summary, our contribution is threefold:
* We introduce a novel repeller-attractor objective function that directly optimizes for the \(L_{2}\) metric, alleviating the need to generate pairs via hard example mining or alternative mining strategies.
* We propose a two-stage retrieval system that leverages the use of the learned class anchors in the first stage of the retrieval process, leading to significant speed and performance gains.
* We conduct comprehensive experiments to compare the proposed loss function with popular loss choices in multiple evaluation scenarios.
## 2. Related Work
As related work, we refer to studies introducing new loss functions, which are related to our first contribution, and new content-based image retrieval methods, which are connected to our second contribution.
**Loss functions.** The problem of generating features that are tightly clustered together for images representing the same objects and far apart for images representing distinct objects is a challenging task usually pursued in the context of CBIR. For retrieval systems based on neural networks, the choice of the objective function is the most important factor determining the geometry of the resulting feature space (Zhu et al., 2017). We hereby discuss related work proposing various loss functions aimed at generating effective embedding spaces.
Metric learning objective functions directly optimize a desired metric and are usually based on pairs or tuples of known data samples (Beng et al., 2016; Liu et al., 2017; Wang et al., 2017; Wang et al., 2017). One of the earliest works on metric learning proposed the contrastive loss (Krizhevsky et al., 2012), which was introduced as a method to reduce the dimensionality of the input space, while preserving the separation of feature clusters. The idea behind contrastive loss is to generate an attractor-repeller system that is trained on positive and negative pairs generated from the available data. The repelling can happen only if the distance between a negative pair is smaller than a margin \(m\). In the context of face identification, another successful metric learning approach is triplet loss (Wang et al., 2017). It obtains the desired properties of the embedding space by generating triplets of anchor, positive and negative examples. Each image in a batch is considered an anchor, while positive and negative examples are selected online from the batch. For each triplet, the proposed objective enforces the distance between the anchor and the negative example to be larger than the distance between the anchor and the positive example, by a margin \(m\). Other approaches introduced objectives that directly optimize the AUC (Chen et al., 2018), recall (Liu et al., 2018) or AP (Krizhevsky et al., 2012). The main issues when optimizing with loss functions based on metric learning are the usually slow convergence (Wang et al., 2017) and the difficulty of generating useful example pairs or tuples (Wang et al., 2017). In contrast, our method does not require mining strategies and, as shown in the experiments, it converges much faster than competing losses.
The usual example mining strategies are hard, semi-hard, and online negative mining (Wang et al., 2017; Wang et al., 2017). In hard negative mining, for each anchor image, we need to construct pairs with the farthest positive example and the closest negative example. This adds an extra computational step at the beginning of every training epoch, extending the training time. Similar problems arise in the context of semi-hard negative mining, while the difference consists in the mode in which the negatives are sampled. Instead of generating pairs with the farthest example, one can choose a negative example that is slightly closer than a positive sample, thus balancing hardness and stability. A more efficient approach is online negative mining, where negative examples are sampled during training in each batch. In this approach, the pair difficulty adjusts while the model is training, thus leading to a more efficient strategy, but the main disadvantage is that the resulting pairs may not be the most challenging for the model, due to the randomness of samples in a batch.
Statistical learning objective functions indirectly optimize the learned features of the neural network. Popular objectives are based on some variation of the cross-entropy loss (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Wang et al., 2017; Wang et al., 2017) or the cosine loss (Chen et al., 2018). By optimizing such functions, the model is forced to generate features that are close to the direction of the class center. For example, ArcFace (Chen et al., 2018) reduces the optimization space to an \(n\)-dimensional hypersphere by normalizing both the embedding generated by the encoder, and the corresponding class weight from the classification head, using their Euclidean norm. An additive penalty term is introduced, and with the proposed normalization, the optimization is performed on the angle between each feature vector and the corresponding class center.
Hybrid objective functions promise to obtain better embeddings by minimizing a statistical objective function in conjunction with a metric learning objective (Krizhevsky et al., 2012; Krizhevsky et al., 2012). For example, Center Loss (Wang et al., 2017) minimizes the intra-class distances of the learned features by using cross-entropy in conjunction with an attractor for each sample to its corresponding class center. During training, the class centers are updated to become the mean of the feature vectors for every class seen in a batch. Another approach (Wang et al., 2017) similar to Center Loss (Wang et al., 2017) is based on predefined evenly distributed class centers. A more complex approach (Chen et al., 2018) is to combine the standard cross-entropy with a cosine classifier and a mean squared error regression term to jointly enhance global and local features.
Both contrastive and triplet loss objectives suffer from the need of employing pair mining strategies, but in our case, mining strategies are not required. The positive pairs are built online for each batch, between each input feature vector and the dedicated class anchor, the number of positive pairs thus being equal to the number of examples. The negative pairs are constructed only between the
class centers, thus alleviating the need of searching for a good negative mining strategy, while also significantly reducing the number of negative pairs. To the best of our knowledge, there are no alternative objective functions for CBIR that use dedicated self-repelling learnable class anchors acting as attraction poles for feature vectors belonging to the respective classes.
**Content-based image retrieval methods.** CBIR systems are aimed at finding similar images with a given query image, matching the images based on the similarity of their scenes or the contained objects. Images are encoded using a descriptor (or encoder), and a system is used to sort a database of encoded images based on a similarity measure between queries and images. In the context of content-based image retrieval, there are two types of image descriptors. On the one hand, there are general descriptors (Kumar et al., 2017), where a whole image is represented as a feature vector. On the other hand, there are local descriptors (Kumar et al., 2017) where portions of an image are represented as feature vectors. Hybrid descriptors (Beng et al., 2017) are also used to combine both global and local features. To improve the quality of the results retrieved by learned global descriptors, an additional verification step is often employed. This step is meant to re-rank the retrieved images by a precise evaluation of each candidate image (Kumar et al., 2017). The re-ranking step is usually performed with the help of an additional system, and in some cases, it can be directly integrated with the general descriptor (Kumar et al., 2017). In the CBIR task, one can search for visually similar images as a whole, or search for images that contain similar regions (Kumar et al., 2017) of a query image. In this work, we focus on building global descriptors that match whole images. Further approaches based on metric learning, statistical learning, or hand-engineered features are discussed in the recent survey of Dubey et al. (Dubey et al., 2018). Different from other CBIR methods, we propose a novel two-stage retrieval system that leverages the use of the class anchors learned through the proposed loss function to make the retrieval process more efficient and effective.
## 3. Method
**Overview.** Our objective function consists of three components, each playing a different role in obtaining a discriminative embedding space. All three loss components are formulated with respect to a set of learnable class anchors (centroids). The first loss component acts as an attraction force between each input embedding and its corresponding class anchor. Its main role is to draw the embeddings representing the same object to the corresponding centroid, thus creating embedding clusters of similar images. Each center can be seen as a magnet with a positive charge and its associated embeddings as magnets with negative charges, thus creating attraction forces between anchors and data samples of the same class. The second loss component acts as a repelling force between class anchors. In this case, the class centers can be seen as magnets with similar charges. If brought together, they will repel each other, and if they lie at a certain distance, the repelling force stops. The last component acts similarly to the second one, with the difference that an additional magnet is introduced and fixed at the origin of the embedding space. Its main effect is to push the class centers away from the origin.
**Notations.** Let \(\mathbf{x}_{i}\in\mathbb{R}^{h\times w\times c}\) be an input image and \(y_{i}\in\mathbb{N}\) its associated class label, \(\forall i\in\{1,2,...,l\}\). We aim to optimize a neural encoder \(f_{\theta}\) which is parameterized by the learnable weights \(\theta\) to produce a discriminative embedding space. Let \(\mathbf{e}_{i}\in\mathbb{R}^{n}\) be the \(n\)-dimensional embedding vector of the input image \(\mathbf{x}_{i}\) generated by \(f_{\theta}\), i.e. \(\mathbf{e}_{i}=f_{\theta}(\mathbf{x}_{i})\). In order to employ our novel loss function, we need to introduce a set of learnable class anchors \(C=\{\mathbf{c}_{1},\mathbf{c}_{2},...,\mathbf{c}_{t}\}\), where \(\mathbf{c}_{j}\in\mathbb{R}^{n}\) resides in the embedding space of \(f_{\theta}\), and \(t\) is the total number of classes.
**Loss components.** With the above notations, we can now formally define our first component, the attractor loss \(\mathcal{L}_{A}\), as follows:
\[\mathcal{L}_{A}(\mathbf{x}_{i},C)=\frac{1}{2}\|\mathbf{e}_{i}-\mathbf{c}_{y_{i }}\|_{2}^{2}. \tag{1}\]
The main goal of this component of the objective is to cluster feature vectors as close as possible to their designated class anchor by minimizing the distance between \(\mathbf{e}_{i}\) and the corresponding class anchor \(\mathbf{c}_{y_{i}}\). Its effect is to enforce low intra-class variance. However, obtaining low intra-class variance is only a part of what we aim to achieve. The first objective has little influence over the inter-class similarity, reducing it only indirectly. Therefore, another objective is required to repel samples from different classes. As such, we introduce the repeller loss \(\mathcal{L}_{R}\), which is defined as follows:
\[\mathcal{L}_{R}(C)=\frac{1}{2}\sum_{y,y^{\prime}\in Y,\,y\neq y^{\prime}}\left\{\max\left(0,2\cdot m-\|\mathbf{c}_{y}-\mathbf{c}_{y^{\prime}}\|\right)\right\}^{2}, \tag{2}\]
where \(y\) and \(y^{\prime}\) are two distinct labels from the set of ground-truth labels \(Y\), and \(m>0\) is a margin representing the radius of an \(n\)-dimensional sphere around each anchor, in which no other anchor should lie. The goal of this component is to push anchors away from
Figure 1. An example showing the behavior of the attractor-repeller loss components for three classes. The stars represent the class anchors \(C\). Faded circles around class anchors represent the sphere of radius \(m\) around each anchor. Solid circles represent feature vectors generated by the encoder \(f_{\theta}\). Dashed arrows between feature vectors and class anchors represent the attraction of the force generated by the attractor \(\mathcal{L}_{A}\). The solid red arrow between class anchors represents the repelling force generated by the repeller \(\mathcal{L}_{R}\). Best viewed in color.
each other during training to ensure high inter-class distances. The margin \(m\) is used to limit the repelling force to an acceptable margin value. If we do not set a maximum margin, then the repelling force can push the anchors too far apart, and the encoder could struggle to learn features that satisfy the attractor loss defined in Eq. (1).
A toy example of the attractor-repeller mechanism is depicted in Figure 1. Notice how the optimization based on the attractor-repeller objective tends to pull data samples from the same class together (due to the attractor), and push samples from different classes away (due to the repeller). However, when the training begins, all data samples start from a location close to the origin of the embedding space, essentially having a strong tendency to pull the class anchors to the origin. To ensure that the anchors do not collapse to the origin (as observed in some of our preliminary experiments), we introduce an additional objective that imposes a minimum norm on the class anchors. The minimum norm loss \(\mathcal{L}_{N}\) is defined as:
\[\mathcal{L}_{N}(C)=\frac{1}{2}\sum_{y\in Y}\left\{\max\left(0,p-\left\|\mathbf{ c}_{y}\right\|\right)\right\}^{2}, \tag{3}\]
where \(p\) is the minimum norm that each anchor must have. This objective contributes to our full loss function as long as at least one class anchor is within a distance of \(p\) from the origin. Figure 2 provides a visual interpretation of the effect induced by the minimum norm loss. Notice how the depicted class anchor is pushed away from the origin (due to the minimum norm loss), while the data samples belonging to the respective class move along with their anchor (due to the attractor loss).
Assembling the three loss components presented above into a single objective leads to the proposed class anchor margin (CAM) loss \(\mathcal{L}_{\text{CAM}}\), which is formally defined as follows:
\[\mathcal{L}_{\text{CAM}}(\mathbf{x},C)=\mathcal{L}_{A}(\mathbf{x},C)+\mathcal{ L}_{R}(C)+\mathcal{L}_{N}(C). \tag{4}\]
Notice that only \(\mathcal{L}_{A}\) is directly influenced by the training examples, while \(\mathcal{L}_{R}\) and \(\mathcal{L}_{N}\) operate only on the class anchors. Hence, negative mining strategies are not required at all.
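To make the formulation concrete, a minimal PyTorch sketch of the three components is given below, assuming the anchors are stored as a learnable parameter inside a loss module; the module name `CAMLoss`, its arguments, and the batch-averaging of the attractor term are illustrative choices rather than prescriptions from the text above.

```python
import torch
import torch.nn as nn

class CAMLoss(nn.Module):
    """Class anchor margin loss, Eqs. (1)-(4): attractor + repeller + minimum norm."""

    def __init__(self, num_classes: int, embed_dim: int, m: float = 2.0, p: float = 1.0):
        super().__init__()
        self.m = m  # repeller margin (radius of the sphere around each anchor)
        self.p = p  # minimum norm each anchor must have
        # Learnable class anchors C, one n-dimensional vector per class.
        self.anchors = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Attractor, Eq. (1): pull each embedding towards its class anchor (averaged over the batch).
        loss_a = 0.5 * ((embeddings - self.anchors[labels]) ** 2).sum(dim=1).mean()

        # Repeller, Eq. (2): push distinct anchors at least 2*m apart.
        dists = torch.cdist(self.anchors, self.anchors)
        off_diag = ~torch.eye(self.anchors.size(0), dtype=torch.bool, device=dists.device)
        loss_r = 0.5 * torch.clamp(2.0 * self.m - dists[off_diag], min=0.0).pow(2).sum()

        # Minimum norm, Eq. (3): keep every anchor at least p away from the origin.
        loss_n = 0.5 * torch.clamp(self.p - self.anchors.norm(dim=1), min=0.0).pow(2).sum()

        return loss_a + loss_r + loss_n  # Eq. (4)
```

Since the anchors are registered as module parameters, passing the module to the optimizer together with the encoder weights \(\theta\) lets backpropagation produce the updates discussed next.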
**Gradients of the proposed loss.** We emphasize that the \(L_{2}\) norm is differentiable. Moreover, the updates for the weights \(\theta\) of the encoder \(f_{\theta}\) are provided only by the attractor loss \(\mathcal{L}_{A}\). Hence, the weight updates (gradients of \(\mathcal{L}_{\text{CAM}}\) with respect to \(\theta\)) for some data sample \(\mathbf{x}_{i}\) are computed as follows:
\[\frac{\partial\mathcal{L}_{\text{CAM}}(\mathbf{x}_{i},C)}{\partial\theta}=\frac{\partial\mathcal{L}_{A}(\mathbf{x}_{i},C)}{\partial\theta}=\left(f_{\theta}(\mathbf{x}_{i})-\mathbf{c}_{y_{i}}\right)\cdot\frac{\partial f_{\theta}(\mathbf{x}_{i})^{T}}{\partial\theta}. \tag{5}\]
The class anchors receive updates from all three loss components. For a class anchor \(\mathbf{c}_{y}\), the update received from the component \(\mathcal{L}_{A}\) of the joint objective is given by:
\[\frac{\partial\mathcal{L}_{A}(\mathbf{x},C)}{\partial\mathbf{c}_{y}}=\mathbf{ c}_{y}-f_{\theta}(\mathbf{x}). \tag{6}\]
The contribution of the repeller to a class anchor \(\mathbf{c}_{y}\) is null when the \(m\)-margin sphere of the respective anchor is not intersecting another class anchor sphere, i.e.:
\[\frac{\partial\mathcal{L}_{R}(C)}{\partial C}=0, \tag{7}\]
when \(\left\|\mathbf{c}_{y}-\mathbf{c}_{y^{\prime}}\right\|\geq 2\cdot m\), i.e. \(2\cdot m-\left\|\mathbf{c}_{y}-\mathbf{c}_{y^{\prime}}\right\|\leq 0\), \(\forall y,y^{\prime}\in Y,y\neq y^{\prime}\).
To simplify the notation, let \(d=\left\|\mathbf{c}_{y}-\mathbf{c}_{y^{\prime}}\right\|\) be the distance between \(\mathbf{c}_{y}\) and another class center \(\mathbf{c}_{y^{\prime}}\). The update for a class center \(\mathbf{c}_{y}\) induced by the repeller is given by:
\[\frac{\partial\mathcal{L}_{R}(C)}{\partial\mathbf{c}_{y}}=\sum_{y^{\prime}\in Y,\,y^{\prime}\neq y}\delta(d<2\cdot m)\cdot\frac{-(2\cdot m-d)\cdot(\mathbf{c}_{y}-\mathbf{c}_{y^{\prime}})}{d}, \tag{8}\]
where \(\delta(*)=1\) when \(*\) is satisfied, and \(0\) otherwise. Similarly, the contribution of the minimum norm loss is given by:
\[\frac{\partial\mathcal{L}_{N}(C)}{\partial\mathbf{c}_{y}}=\delta\left(\left\|\mathbf{c}_{y}\right\|<p\right)\cdot\left(1-\frac{p}{\left\|\mathbf{c}_{y}\right\|}\right)\cdot\mathbf{c}_{y}. \tag{9}\]
The final update of the learnable class centers \(C\) is given by summing the gradients given in Eq. (6), Eq. (8) and Eq. (9):
\[\frac{\partial\mathcal{L}_{\text{CAM}}(\mathbf{x},C)}{\partial\mathbf{c}_{y}}=\frac{\partial\mathcal{L}_{A}(\mathbf{x},C)}{\partial\mathbf{c}_{y}}+\frac{\partial\mathcal{L}_{R}(C)}{\partial\mathbf{c}_{y}}+\frac{\partial\mathcal{L}_{N}(C)}{\partial\mathbf{c}_{y}}. \tag{10}\]
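For readers who prefer code to calculus, the sketch below evaluates the anchor update of Eq. (10) for a single anchor \(\mathbf{c}_{y}\) with NumPy; summing the attractor term over all batch samples of class \(y\) is our generalization of the single-sample Eq. (6), and all names are illustrative.

```python
import numpy as np

def anchor_gradient(anchors, y, embeddings, labels, m=2.0, p=1.0):
    """Gradient of the CAM loss w.r.t. anchor c_y, combining Eqs. (6), (8) and (9)."""
    # Attractor term, Eq. (6), summed over the batch samples belonging to class y.
    grad = np.sum(anchors[y] - embeddings[labels == y], axis=0)

    # Repeller term, Eq. (8): only anchors closer than 2*m to c_y contribute.
    for y_prime in range(anchors.shape[0]):
        if y_prime == y:
            continue
        d = np.linalg.norm(anchors[y] - anchors[y_prime])
        if 0.0 < d < 2.0 * m:
            grad += -(2.0 * m - d) * (anchors[y] - anchors[y_prime]) / d

    # Minimum norm term, Eq. (9): active only while the anchor lies within distance p of the origin.
    norm = np.linalg.norm(anchors[y])
    if 0.0 < norm < p:
        grad += (1.0 - p / norm) * anchors[y]
    return grad
```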
**Fast retrieval via two-stage system.** During inference, we can employ the \(L_{2}\) measure between query and database embeddings to retrieve images similar to the query. Aside from the brute-force search that compares the query with every embedding in the database, we can harness the geometric properties of the resulting embedding space and use the class anchors \(C\) to improve the speed of the retrieval system. Instead of computing the \(L_{2}\) distances between a query feature vector and all the embedding vectors stored in the database, we propose to employ a two-stage system. In the first stage, distances are computed between the query feature vector and each class anchor. Upon finding the closest class anchor, in the second stage, distances are computed between the query feature vector and all the embeddings associated with the established class. Distances are then sorted and the closest corresponding items are retrieved. The main advantage of this approach is an improved retrieval time due to the reduced number of required comparisons. A possible disadvantage consists in retrieving wrong items when the retrieved class anchor is not representative for the query image.
Figure 2. Contribution of the minimum norm loss \(\mathcal{L}_{N}\) imposed on the class anchors. The blue star represents a class anchor. Solid circles represent embedding vectors generated by the encoder \(f_{\theta}\). Dashed arrows represent the attraction force generated by the attractor \(\mathcal{L}_{A}\). The solid gray line represents the direction in which the anchor is pushed away from the origin due to the component \(\mathcal{L}_{N}\). Best viewed in color.
We present results with our faster alternative in the comparative experiments.
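A minimal sketch of this two-stage procedure is shown below, assuming the gallery embeddings have been pre-grouped by their nearest class anchor; the function and variable names are illustrative.

```python
import numpy as np

def two_stage_retrieve(query, anchors, gallery_by_class, top_k=20):
    """query: (n,) embedding; anchors: (t, n); gallery_by_class: class -> (embeddings, item_ids)."""
    # Stage 1: find the class anchor closest to the query in L2 distance.
    nearest_class = int(np.argmin(np.linalg.norm(anchors - query, axis=1)))

    # Stage 2: rank only the gallery embeddings assigned to that class.
    embeddings, item_ids = gallery_by_class[nearest_class]
    dists = np.linalg.norm(embeddings - query, axis=1)
    order = np.argsort(dists)[:top_k]
    return [item_ids[i] for i in order]
```

Compared to brute-force search, the number of distance computations drops from the full gallery size to the number of classes plus the size of a single class partition.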
**Visualizing the embedding space.** To emphasize the geometric effect of the proposed objective, we have modified the final encoder layer of a ResNet-18 model to output 2D feature vectors without the final non-linearity. The model is trained from scratch on the SVHN dataset by alternatively using the cross-entropy loss, the contrastive loss, and the proposed class anchor margin loss. The resulting embeddings, together with the distribution of the embeddings for each of the models, are presented in Figure 3. For the proposed objective, we have set \(m=4\) and \(p=1\). From the distribution of the embeddings, it can be noted that the proposed objective generates tighter and clearly separable clusters.
**Application to classification tasks.** An important remark is that we can use the class centers \(C\) to apply the proposed system to classification tasks. In this case, the predicted label \(\hat{y}\) for a test sample \(\mathbf{x}\) can be obtained as follows:
\[\hat{y}=\arg\min_{j}\|f_{\theta}(\mathbf{x})-\mathbf{c}_{j}\|. \tag{11}\]
We perform additional experiments to demonstrate the application of our class anchor margin loss to classification tasks, with competitive results.
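As a rough illustration, the prediction rule of Eq. (11) amounts to a nearest-anchor lookup; the snippet below is a sketch with illustrative names.

```python
import torch

def predict_label(embedding: torch.Tensor, anchors: torch.Tensor) -> int:
    # Eq. (11): assign the label of the class anchor closest to the embedding in L2 distance.
    return int(torch.argmin(torch.norm(anchors - embedding, dim=1)))
```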
## 4. Experiments
In the experiments, we compare our class anchor margin loss with the cross-entropy and the contrastive learning losses on four datasets, considering both convolutional and transformer models.
### Datasets
We perform experiments on four datasets: CIFAR-100 (Krizhevsky et al., 2015), Food-101 (Krizhevsky et al., 2015), SVHN (Krizhevsky et al., 2015), and Tiny ImageNet (Vaswani et al., 2017). CIFAR-100 contains 50,000 training images and 10,000 test images belonging to 100 classes. Food-101 is composed of 101,000 images from 101 food categories. The official split has 750 training images and 250 test images per category. SVHN contains 73,257 digits for training, 26,032 digits for testing. Tiny ImageNet is a subset of ImageNet-1K, which contains 100,000 training images, 25,000 validation images and 25,000 test images from 200 classes.
### Experimental setup
As underlying neural architectures, we employ three ResNet (He et al., 2016) model variations (ResNet-18, ResNet-50, ResNet-101) and a Swin transformer (Krizhevsky et al., 2015) model (Swin-T). We rely on the PyTorch (Krizhevsky et al., 2015) library together with Hydra (Krizhevsky et al., 2015) to implement and test the models.
We apply random weight initialization for all models, except for the Swin transformer, which starts from the weights pre-trained on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset (Vaswani et al., 2017). We employ the Adam (Kingmare et al., 2014) optimizer to train all models, regardless of their architecture. For the residual neural models, we set the learning rate to \(10^{-3}\), while for the Swin transformer, we use a learning rate of \(10^{-4}\). The residual nets are trained from scratch for 100 epochs, while the Swin transformer is fine-tuned for 30 epochs. For the lighter models (ResNet-18 and ResNet-50), we use a mini-batch size of 512. Due to memory constraints, the mini-batch size is set to 64 for the Swin transformer, and 128 for ResNet-101. Residual models are trained with a linear learning rate decay with a factor of 0.5. We use a patience of 6 epochs for the full-set experiments, and a patience of 9 epochs for the few-shot experiments. Fine-tuning the Swin transformer does not employ a learning rate decay. Input images are normalized to have all pixel values in the range of \([0,1]\) by dividing the values by 256. The inputs of the Swin transformer are standardized with the image statistics from ILSVRC (Vaswani et al., 2017).
We use several data augmentation techniques such as random crop with a padding of 4 pixels, random horizontal flip (except for SVHN, where flipping the digits horizontally does not make sense), color jitter, and random affine transformations (rotations, translations). Moreover, for the Swin transformer, we added the augmentations described in (Vaswani et al., 2017).
For the models optimized either with the cross-entropy loss or our class anchor margin loss, the target metric is the validation
Figure 3. Embedding vectors (left) and their distribution (right) for a ResNet-18 model modified to output 2D features and trained with \((a)\) cross-entropy loss, \((b)\) contrastive loss and \((c)\) class anchor margin loss (ours). The top row represents training embeddings from the SVHN dataset, while the bottom row shows test embeddings. Best viewed in color.
accuracy. When using our loss, we set the parameter \(m\) for the component \(\mathcal{L}_{R}\) to 2, and the parameter \(p\) for the component \(\mathcal{L}_{N}\) to 1, across all datasets and models. Since the models based on the contrastive loss optimize in the feature space, we have used the 1-nearest neighbors (1-NN) accuracy, which is computed for the closest feature vector retrieved from the gallery.
As evaluation measures for the retrieval experiments, we report the mean Average Precision (mAP) and the precision@\(k\) on the test data, where \(k\in\{20,100\}\) is the retrieval rank. For the classification experiments, we use the classification accuracy on the test set. We run each experiment in 5 trials and report the average score and the standard deviation.
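For clarity, a minimal sketch of precision@\(k\) for a single query is given below, under the common assumption that a retrieved item is relevant when it shares the query's class label; the names are illustrative, and mAP follows by averaging precision over the ranks of all relevant items and then over queries.

```python
import numpy as np

def precision_at_k(retrieved_labels, query_label, k):
    """Fraction of the top-k retrieved items that share the query's class label."""
    top_k = np.asarray(retrieved_labels)[:k]
    return float(np.mean(top_k == query_label))
```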
### Retrieval results with full training data
**Results with various architectures.** We first evaluate the performance of the ResNet-18, ResNet-50, ResNet-101 and Swin-T models on the CBIR task, while using the entire training set to optimize the respective models with three alternative losses, including our own. The results obtained on the CIFAR-100 (Krizhevsky et al., 2015), Food-101 (Krizhevsky et al., 2015), SVHN (Krizhevsky et al., 2015), and Tiny ImageNet (Vaswani et al., 2017) datasets are reported in Table 1. First, we observe that our loss function produces better results on the majority of datasets and models. Furthermore, as the rank \(k\) increases from 20 to 100, we notice that our loss produces more consistent results, essentially maintaining the performance level as \(k\) increases.
Table 1. Full-training-set retrieval results (mAP, P@20 and P@100, averaged over 5 runs with standard deviations) of ResNet-18, ResNet-50, ResNet-101 and Swin-T models trained with cross-entropy (CE), contrastive (CL) and class anchor margin (CAM, ours) losses on CIFAR-100, Food-101, SVHN and Tiny ImageNet.
Results with various embedding dimensions.We train a ResNet-50 encoder from scratch, where the last encoder layer is modified to output embeddings of various dimensions from the set \(\{32,64,128,256,512,1024,2048\}\) on the CIFAR-100 (Krizhevsky et al., 2015) dataset. We illustrate the results with the cross-entropy, contrastive learning and class anchor margin losses in Figure 4. We observe that the performance is degrading as the embedding size gets larger for models trained with cross-entropy or contrastive losses. In contrast, the presented results show that the performance is consistent for the ResNet-50 based on our objective. This indicates that our model is more robust to variations of the embedding space dimension \(n\).
Convergence speed.We have also monitored the convergence speed of ResNet-50 models, while alternatively optimizing with cross-entropy, contrastive learning and class anchor margin losses. In Figure 5, we show how the ResNet-50 models converge over 100 training epochs. The model trained with contrastive loss exhibits a very slow convergence speed. The model trained with cross-entropy achieves its maximum performance faster than the model trained with contrastive-loss, but the former model reaches a plateau after about 60 epochs. The model trained with our CAM loss converges at a faster pace compared with the other models.
### Few-shot retrieval results
We next evaluate the neural models on the few-shot object retrieval task. For each dataset, we sample a certain number of training images from each class, starting from one example per class. We gradually increase the number of training samples, doubling the number of training examples per class in each experiment, until we reach the maximum amount of available images. In Figure 6, we present the corresponding results on CIFAR-100 (Krizhevsky et al., 2015), Food-101 (Friedman et al., 2015), SVHN (Krizhevsky et al., 2015), and Tiny ImageNet (Zhu et al., 2016).
For all three ResNet models, our CAM loss leads to better results once the number of training samples becomes greater than or equal to 4 per class. In all cases, the contrastive loss obtains the lowest performance levels, being surpassed by both cross-entropy and class anchor margin losses. For the Swin-T model, cross-entropy usually leads to better results when the number of samples per class is in the lower range (below 64). As the number of training samples increases, our loss recovers the gap and even surpasses cross-entropy after a certain point (usually when the number of samples per class is higher than 128). In general, all neural models obtain increasingly better performance levels when more data samples are available for training. With some exceptions, the models trained with our CAM loss achieve the best performance. In summary, we conclude that the class anchor margin loss is suitable for few-shot retrieval, but the number of samples per class should generally be higher than 4 to obtain optimal performance.
### Qualitative results
We choose the ResNet-50 model and inspect the retrieved images for each of the three loss functions. In Figure 7, we show a set of eight randomly sampled queries from the four datasets (CIFAR-100, Food-101, SVHN, and Tiny ImageNet). The model based on our loss seems to return more consistent results, with the majority of images representing the same object as the query. In contrast, models trained with the other losses can sometimes retrieve items that do
Figure 6. Few-shot retrieval performance (mAP) of four models (ResNet-18, ResNet-50, ResNet-101 and Swin-T) on four datasets (CIFAR-100, Food-101, SVHN and Tiny ImageNet). On each dataset, the results are shown from one sample per class (one-shot learning) to the maximum number of samples per class, by doubling the number of training samples in each trial.
not always belong to the query category. Overall, the qualitative results confirm the superiority of our class anchor margin loss.
### Ablation study
**Ablating the loss.** In Table 3, we demonstrate the influence of each additional loss component on the overall performance of ResNet-18 on the SVHN dataset, by ablating the respective components from the proposed objective. We emphasize that the component \(\mathcal{L}_{A}\) is mandatory for our objective to work properly. Hence, we only ablate the other loss components, namely \(\mathcal{L}_{R}\) and \(\mathcal{L}_{N}\).
In addition, we investigate different class center initialization heuristics. As such, we conduct experiments to compare the random initialization of class centers and the base vector initialization. The latter strategy is based on initializing class anchors as scaled base vectors, such that each class center has no intersection with any other class center in the \(n\)-dimensional sphere of radius \(m\), where \(n\) is the size of the embedding space.
As observed in Table 3, the class center initialization has a major impact on the overall performance. For each conducted experiment, we notice a significant performance gain for the base vector initialization strategy. Regarding the loss components, we observe that removing both \(\mathcal{L}_{R}\) and \(\mathcal{L}_{N}\) from the objective leads to very low performance. Adding only the component \(\mathcal{L}_{N}\) influences only the overall accuracy, but the mAP is still low, since \(\mathcal{L}_{N}\) can only impact each anchor's position with respect to the origin. Adding only the component \(\mathcal{L}_{R}\) greatly improves the performance, proving that \(\mathcal{L}_{R}\) is crucial for learning the desired task. Using both \(\mathcal{L}_{R}\) and \(\mathcal{L}_{N}\) further improves the results, justifying the proposed design.
**Ablating the two-stage retrieval system.** As discussed in Section 3, we employ a two-stage retrieval system to speed up the retrieval process. We hereby ablate the proposed two-stage approach that leverages the class anchors, essentially falling back to the brute-force retrieval process, in which the query is compared with every
Figure 7. Top 6 retrieved items by a ResNet-50 model trained with one of three losses: cross-entropy (CE), contrastive (CL), and class anchor margin (CAM). We randomly selected two queries per dataset. Best viewed in color.
embedding vector from the database. We compare the ablated (brute-force) retrieval with the two-stage retrieval in Table 2. Remarkably, we observe that our two-stage retrieval system not only improves the retrieval speed, but also the mAP. In terms of time, the two-stage retrieval improves the speed by a factor ranging between 2\(\times\) and 3\(\times\). In terms of performance, the gains are higher than 10% in 9 out of 16 cases. These results illustrate the benefits of our two-stage retrieval process based on leveraging the class anchors.
### Classification results
As earlier mentioned, we can leverage the use of the learned class centers and the predictions generated with Eq. (11) to classify objects into the corresponding categories. We present the classification accuracy rates of the ResNet-18 and ResNet-50 models in Table 4, while alternating between the cross-entropy and the class anchor margin losses. Our CAM loss provides competitive results, surpassing cross-entropy in 6 out of 8 cases. This confirms that our loss is also suitable for the classification task, even though its performance gains are not as high as for the retrieval task.
## 5. Conclusion
In this paper, we proposed (\(i\)) a novel loss function based on class anchors to optimize convolutional networks and transformers for object retrieval in images, as well as (\(ii\)) a two-stage retrieval system that leverages the learned class anchors to speed up and increase the performance of the system. We conducted comprehensive experiments using four neural models on four image datasets, demonstrating the benefits of our loss function against conventional losses based on statistical learning and contrastive learning. We also performed ablation studies to showcase the influence of the proposed components, empirically justifying our design choices.
In future work, we aim to extend the applicability of our approach to other data types, beyond images. We also aim to explore new tasks and find out when our loss is likely to outperform the commonly used cross-entropy.
| The performance of neural networks in content-based image retrieval (CBIR) is strongly influenced by the chosen loss (objective) function. Most objective functions for neural models fall into two categories: metric learning and statistical learning. Metric learning approaches require inefficient pair mining strategies, while statistical learning approaches do not generate highly compact features because they optimize the features only indirectly. To this end, this paper proposes a novel repeller-attractor loss that belongs to the metric learning paradigm. This loss directly optimizes the L2 metric without requiring pair generation. The loss consists of three components. The first objective ensures that the learned features are attracted to their designated learnable class anchors. The second loss component
2302.03816 | Intend-Wait-Perceive-Cross: Exploring the Effects of Perceptual
Limitations on Pedestrian Decision-Making | Current research on pedestrian behavior understanding focuses on the dynamics
of pedestrians and makes strong assumptions about their perceptual abilities.
For instance, it is often presumed that pedestrians have omnidirectional view
of the scene around them. In practice, human visual system has a number of
limitations, such as restricted field of view (FoV) and range of sensing, which
consequently affect decision-making and overall behavior of the pedestrians. By
including explicit modeling of pedestrian perception, we can better understand
its effect on their decision-making. To this end, we propose an agent-based
pedestrian behavior model Intend-Wait-Perceive-Cross with three novel elements:
field of vision, working memory, and scanning strategy, all motivated by
findings from behavioral literature. Through extensive experimentation we
investigate the effects of perceptual limitations on safe crossing decisions
and demonstrate how they contribute to detectable changes in pedestrian
behaviors. | Iuliia Kotseruba, Amir Rasouli | 2023-02-08T00:47:51 | http://arxiv.org/abs/2302.03816v1 | Intend-Wait-_Perceive_-Cross: Exploring the Effects of Perceptual Limitations on Pedestrian Decision-Making
###### Abstract
Current research on pedestrian behavior understanding focuses on the dynamics of pedestrians and makes strong assumptions about their perceptual abilities. For instance, it is often presumed that pedestrians have omnidirectional view of the scene around them. In practice, human visual system has a number of limitations, such as restricted field of view (FoV) and range of sensing, which consequently affect decision-making and overall behavior of the pedestrians. By including explicit modeling of pedestrian perception, we can better understand its effect on their decision-making. To this end, we propose an agent-based pedestrian behavior model Intend-Wait-Perceive-Cross with three novel elements: field of vision, working memory, and scanning strategy, all motivated by findings from behavioral literature. Through extensive experimentation we investigate the effects of perceptual limitations on safe crossing decisions and demonstrate how they contribute to detectable changes in pedestrian behaviors.
## I Introduction
Accurately predicting pedestrian behaviors is important for intelligent driving due to inherent risks of interactions between traffic participants [1]. Current research relies on large naturalistic datasets (_e.g._ Waymo [2] and Argoverse [3]) and simulation environments (_e.g._ CARLA [4]) for modeling behaviors of road users. Strong assumptions about their perceptual abilities are often made; although often not explicitly stated, the agents are presumed to have an omnidirectional view of the scene around them (_e.g._ cropped map centered around the agent [5]). Although this may be true for autonomous vehicles with extensive sensor suites and human drivers who have access to mirrors, pedestrians rely only on their vision with limited field of view (FoV) and sequentially scan the scene during decision-making.
Top-down (or bird's eye) views common in the autonomous driving datasets provide information on the dynamics of the agents but not their awareness or decision-making process. The latter may be captured implicitly, however, such unobserved variables cannot be effectively learned as recent findings indicate [6]. Likewise, training on synthetic data with similar characteristics would increase sim-real gap, therefore simulations would greatly benefit from integration of more psychologically-plausible elements [7].
In this paper, we address the issue of explicitly modeling perceptual limitations by extending our previous work Intend-Wait-Cross [8] - a microscopic agent-based model of pedestrian crossing behavior motivated by psychological studies. In our experiments, we demonstrate how perception operates and how it contributes to detectable changes in pedestrian behavior.
## II Related Work
**Pedestrian behavior simulation.** Microscopic agent-based simulations have successfully modeled a wide variety of phenomena emerging from interactions between multiple heterogeneous agents, particularly in the transportation domain [9]. Due to risks involved, much of the literature focuses on jaywalking [10, 8] and unsignalized crossing scenarios [11, 12, 13, 14, 11], although some studies consider illegal crossings at signalized sites as well [15].
**Pedestrian perception models.** Unlike behavioral and demographic characteristics of pedestrians (_e.g._ preferences, age, gender, walking speed), effects of perceptual limitations are less investigated and are incorporated only in some models. The most frequently modeled aspects are limited field of view and occlusions. For example, Lu _et al._[13] introduce driver's vision distance defined as a minimum distance needed for the vehicle to come to stop and also the moment at which the driver begins to pay attention to the pedestrian near the crossing area. The authors of [12, 14] implement visual field of drivers and pedestrians as circular sectors with visibility radius of 100 m and 60 m, respectively. Without occlusions by other objects, pedestrians have a \(180^{\circ}\) view. Drivers' visual field is adjusted depending on their speed to reflect tunneling, _e.g._\(132^{\circ}\) for \(<20\) km/h and \(108^{\circ}\) for \(>40\) km/h.
Behavioral studies indicate that limited field of view is supported by representations in short-term memory that
Fig. 1: Pedestrian’s perception of the road. Top row: approach in [8] where pedestrian sees the entire road. Bottom row: the proposed perception model with limited field of view and working memory (shown as a thought bubble).
helps select and preserve spatial information across multiple viewpoints and enable decision-making and actions [16, 17]. However, memory is notably absent from many agent simulations. As an exception, simulation in [18] includes multi-purpose synthetic perception (including attention and FoV) and short-term memory, however, without specifics of implementation and limited proof-of-concept demonstrations in video game scenarios.
**Learning crossing behavior.** In intelligent driving, planning relies on action and trajectory predictions of the other road users. While dynamic information is still the most dominant feature for prediction [19], taking into account different perceptual abilities of the traffic participants has demonstrable benefits. For example, [20] extends the energy-based model for pedestrian trajectory prediction with a frustum of attention (30\({}^{\circ}\) circular sector) towards where the pedestrian is heading. [21] combines top-down view and simulated first person view with limited vision and attention to predict pedestrian trajectories. Similarly, [22] uses pedestrian and vehicle perspectives to model their trajectories.
Motivated by the benefits of explicit modeling of pedestrian perception in traffic simulation and prediction applications, we propose a pedestrian agent model with the following novel components (illustrated in Figure 1): 1) a model of FoV with sensing range and viewing angle; 2) scanning strategy for observing the scene; and 3) working memory for temporary storage of observations.
## III Pedestrian Behavior Model
The proposed behavior model is based on the approach in [8]. This model considers various choices that pedestrians make, including their choice of transit, route, activity, speed, and next step. These choices are predominantly impacted by pedestrian characteristics: _type_, which is based on age and determines walking speed; _trait_ - aggressive, conservative, or average - that affects walking speed and gap acceptance of the pedestrian; _law obedience_ - violating, obedient, and average - that determines the likelihood of the pedestrian to jaywalk or go to the crosswalk; _accepted gap_ that the pedestrian considers safe to cross; _crossing pattern_, which includes one-stage (cross when all lanes are safe) and rolling (cross when the immediate lane is safe); _perceptual noise_ - error in the pedestrian's estimation of the states of dynamic agents. In addition, the model of [8] proposes strategies for identifying relevant agents in the scene, estimating their times-to-collision (TTCs), and changing the behavior of the pedestrians due to the traffic conditions.
To better highlight the importance of modeling FoV and short-term memory, we focus on particular types of pedestrians. More specifically, we use law violating pedestrians as they rely on scanning their surroundings when they intend to cross as opposed to law abiding pedestrians at signalized intersections who mainly observe the signal when waiting to cross. In addition, we focus on one-stage crossing, where the pedestrian starts to cross only when sufficient gaps are available in all lanes.
Unlike the method in [8], instead of allowing pedestrians to see the entire road in front of them at all times, we propose a psychologically-motivated perception model that limits the ability of the pedestrians to assess their surroundings. The details of the proposed approach are discussed in the following sections.
## IV Pedestrian Perception
In our model, pedestrian field of view is limited by sensing range and angle of view. The sensing range corresponds to the maximum distance within which pedestrian can observe objects and reason about their dynamics, _e.g._ changes in their speed. Angle of view, as the name implies, defines the angular extent of the scene observable by the pedestrian.
To simulate foveation of human vision, _i.e._ resolution that is higher in the center of the visual field than in the periphery [23], we divide visual field into sectors with varying sensing range and angular extent. As illustrated in Figure 2, FoV is composed of four circular sectors, namely, _focus_, _central_, _peripheral_, and _mono_. Central _focus_ cone has the longest sensing range but the smallest angular extent, and _mono_, which corresponds to the monocular vision, has the shortest sensing range and the largest angular extent.
In the proposed decision model, the vehicles are considered as seen when any part of them fall within FoV. To determine the visibility of a vehicle, we consider two reference points - the center coordinates of the front and back bumpers of the vehicle - as follows:
\[\begin{split}& Veh_{seen}=\forall veh^{i}\wedge\forall view^{k}\,|\,\theta_{veh^{i}}\leq\frac{1}{2}view_{ang}^{k}\\ &\wedge veh^{i}_{dist}\leq view_{sr}^{k},\quad i\in\{1,2,...,n\},\\ & k\in\{\text{focus},\text{central},\text{peripheral},\text{mono}\},\end{split} \tag{1}\]
where \(ang\) and \(sr\) are angular extent and sensing range of the FoV sector, respectively. The angle \(\theta_{veh^{i}}\) is given by
\[\begin{split}&\measuredangle v^{i}p=\tan^{-1}\frac{veh^{i}_{y}-ped_{y}}{veh^{i}_{x}-ped_{x}}\cdot 180/\pi\\ &\theta_{veh^{i}}=(\measuredangle v^{i}p-\theta_{p}+360)\mod 360\\ &\theta_{p}=\phi_{head}+\phi_{body}.\end{split}\]
Fig. 2: Field of view divided into 4 regions that vary depending on sensing range and angle of view.
Here \(\measuredangle v^{i}p\) is the angle between \(veh_{i}\) and the pedestrian \(ped\), and \(\phi_{head}\) and \(\phi_{body}\) are the pedestrian's head and body angles respectively. Once a vehicle is seen, its information is registered in the working memory and can be used for making crossing decisions.
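The sketch below outlines how the visibility test of Eq. (1) can be evaluated in code; it treats the angular offset symmetrically around the gaze direction \(\theta_{p}\) and leaves the per-sector sensing ranges as inputs (only the 80 m maximum range is fixed in our experiments), with all names being illustrative.

```python
import math

def is_point_seen(ped_xy, theta_p_deg, point_xy, sectors):
    """sectors: dict name -> (angular extent in degrees, sensing range in metres)."""
    dx, dy = point_xy[0] - ped_xy[0], point_xy[1] - ped_xy[1]
    dist = math.hypot(dx, dy)
    # Angle of the point relative to the pedestrian's gaze direction theta_p.
    offset = (math.degrees(math.atan2(dy, dx)) - theta_p_deg) % 360.0
    offset = min(offset, 360.0 - offset)  # symmetric angular offset from the gaze direction
    return any(offset <= 0.5 * ang and dist <= sr for ang, sr in sectors.values())

def is_vehicle_seen(ped_xy, theta_p_deg, front_xy, back_xy, sectors):
    # Eq. (1): a vehicle is seen if either bumper reference point falls inside any FoV sector.
    return (is_point_seen(ped_xy, theta_p_deg, front_xy, sectors)
            or is_point_seen(ped_xy, theta_p_deg, back_xy, sectors))
```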
## V Working Memory
### _Memory contents_
Given that pedestrians can observe only a part of the scene at a time, temporary storage (here referred to as working memory) is needed to aggregate observations across multiple views. In doing so, working memory maintains the states of the dynamic objects that are not currently seen and updates their status based on the beliefs of the pedestrian.
In the proposed memory model, every time a vehicle is seen, its id, dynamic characteristics, and the observation timestamps are registered in the memory. After that, depending on whether the vehicle is still being observed or not, its status is updated following an update rule.
### _Memory update_
To maintain a reasonable representation of the scene in the memory, its state should be updated at every time step. The status of the vehicles currently seen can be updated based on their true state in the world. However, the status of the vehicles that are registered in the memory but are not currently in the field of view (FoV) should be updated based on a belief system. An overview of the memory registration and update is reflected in Figure 3.
In our model, positions of registered vehicles that are not currently in FoV are updated by applying a dynamic update rule to their last observed states (speed, position, acceleration) and the time that has passed since the vehicles were last seen. The updated position of a vehicle \(\widehat{veh}\) in memory is computed as follows:
\[\widehat{veh}_{pos}=[veh_{x}+\sin(\phi_{v})\cdot\widehat{d},\;veh_{y}+\cos(\phi_{v})\cdot\widehat{d}] \tag{2}\]
where \(\phi_{v}\) is the heading angle of the vehicle at the time of observation and \(\widehat{d}\) is the distance that the pedestrian believes the vehicle has travelled since last observation. This distance is given by
\[\hat{d}=veh_{s}\cdot(t-t_{v\_seen})+\frac{1}{2}\cdot veh_{accl}\cdot(t-t_{v\_seen })^{2}\]
where \(veh_{s}\) and \(veh_{accl}\) are the vehicle's last seen speed and acceleration, and \(t_{v\_seen}\) is the time the vehicle was last seen.
From the above formulation one can see that the pedestrian's estimation of the vehicle's state may not be accurate due to changes in the environment. For example, while the pedestrian is not looking, vehicles may slow down or accelerate in response to traffic signals or other vehicles' actions. Naturally, this error accumulates as long as the pedestrian looks away from the given vehicle, thus increasing the risk of a crossing decision based on outdated information.
Vehicles registered in the memory but not in FoV can be observed again, _e.g_. due to the pedestrian's head movement or if the vehicle moves into pedestrians' FoV. In such cases, status of vehicles is reset to their true state in the world.
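A minimal sketch of this belief-based update, following Eq. (2) and the expression for \(\hat{d}\), is given below; the record structure and names are illustrative.

```python
import math
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    x: float          # position at the last observation (m)
    y: float
    speed: float      # last observed speed (m/s)
    accel: float      # last observed acceleration (m/s^2)
    heading: float    # heading angle phi_v at observation time (rad)
    t_seen: float     # timestamp of the last observation (s)

def believed_position(entry: MemoryEntry, t_now: float):
    dt = t_now - entry.t_seen
    # Distance the pedestrian believes the vehicle has travelled since it was last seen.
    d_hat = entry.speed * dt + 0.5 * entry.accel * dt ** 2
    # Propagate the last observed position along the last observed heading, Eq. (2).
    return (entry.x + math.sin(entry.heading) * d_hat,
            entry.y + math.cos(entry.heading) * d_hat)
```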
### _Removal from memory_
In our memory model, the vehicles' information is stored and maintained as long as their current position is within the maximum sensing range of the pedestrian, _i.e_. within the sensing range of the _focus_ sector. Once the vehicles are believed or observed to have left the maximum sensing range, their information is removed from the memory. Additionally, if vehicles become irrelevant (_e.g_. pass the pedestrian) and no longer pose any risk, they are removed from the memory.
Finally, when the pedestrian finishes crossing, storing the state of the vehicles on the road becomes unnecessary and the contents of the working memory are flushed.
Fig. 3: Illustration of pedestrian observing the environment while registering and updating positions of the vehicles in memory. At \(t_{0}\), the agent sees only the three vehicles in the FoV and registers them in memory (shown as a thought bubble). Since these vehicles are currently within the FoV, their true state (indicated by green color) is stored. At \(t_{1}\) the pedestrian looks in the opposite direction, sees another three vehicles, and registers them. Status of the previously seen vehicles currently not in FoV does not truly reflect reality anymore due to the estimation error. As the time passes, the status of the vehicles not in FoV becomes more uncertain (shown in gradually fading red color). At \(t_{3}\), one of the vehicles re-enters pedestrian’s FoV and its state is reset to true value.
## VI Scanning Strategy
Having a limited field of view, the pedestrian scans the environment sequentially. Thus, a strategy is needed to observe relevant parts of the scene in a timely manner. For the roads with right-hand traffic, a reasonable strategy is to look first to the left, and if the road is clear, then look to the right. We follow a similar approach to model the scanning behavior of pedestrians (see diagrams in Figure 4).
The pedestrian starts by looking to the left until they find a safe gap for a potential crossing. Then they turn to the right and look for a safe gap on the other side. If the right side is safe, they start crossing while pointing their head towards the direction of the traffic on the current lane.
In the absence of working memory, if the pedestrian does not find a safe gap on the right side, they have to immediately look to the left again as the status of the next immediate lane which is not currently in FoV is unknown to them. This results in very fast head movement and unrealistic behavior.
Working memory allows the pedestrian to assess the right side while updating the status of the vehicles seen previously on the left. Hence, if a safe gap is identified on the right within a reasonable time, they can commence crossing without the need to look back.
As mentioned earlier, the pedestrian's estimation of the vehicles' states not in the FoV can be erroneous, and the error increases as time passes. Moreover, there is a chance that unobserved vehicles can get too close to the pedestrian. As a result, if a safe gap was not identified after the first scan, the pedestrian should look back and forth to maintain a more accurate representation of the environment.
To achieve this behavior, we set up a threshold, \(th_{scan}\), to induce the pedestrian to move their head in the opposite direction if no safe gap was found. Specifically, the pedestrian changes head direction if \(t_{cur}-t_{upd}>th_{scan}\), where \(t_{cur}\) is the current time step and \(t_{upd}\) is the last time the pedestrian looked in the opposite direction. \(th_{scan}\) is determined as
\[th_{scan}=max(view^{k}_{sr})/road_{max_{s}}-ped_{c\_gap} \tag{3}\]
where \(road_{max_{s}}\) is the speed limit and \(ped_{c\_gap}\) is the gap accepted by the pedestrian.
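A minimal sketch of this check is given below; the units (speed limit in m/s, gap in seconds) and argument names are assumptions for illustration.

```python
def should_turn_head(t_cur, t_upd, view_sensing_ranges, road_speed_limit, ped_accepted_gap):
    """Return True when the pedestrian should look in the opposite direction (Eq. 3).

    view_sensing_ranges: sensing ranges of the view sectors (the focus sector
    provides the maximum); road_speed_limit: road speed limit;
    ped_accepted_gap: gap accepted by this pedestrian.
    """
    th_scan = max(view_sensing_ranges) / road_speed_limit - ped_accepted_gap
    return (t_cur - t_upd) > th_scan
```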
## VII Experiments
We use a similar experimental setup as in [8] and the following default values for the perceptual components: sensing range - 80m, angular extents of view fields (focus - \(30^{\circ}\), central - \(60^{\circ}\), peripheral - \(120^{\circ}\), mono - \(190^{\circ}\)) and dynamic memory update.
We also set the law obedience of all pedestrians to _law violating_ since perceptual limitations are more safety-critical during jaywalking as opposed to crossing at designated crossings. Pedestrian type of all agents is set to _adult_ since we do not model differences in perception across ages.
### _Effect of sensing range_
To examine the effect of sensing range, which determines how far the pedestrian can perceive the objects, we use the
Fig. 4: An overview of different scanning strategies in a) the absence of FoV model and memory as in [8], b) with only FoV model and c) the proposed model with FoV and working memory.
following setup: 270m long straight road segment with no traffic lights; sensing range varied from 10m to 100m with a step of 10m; light, medium, and heavy traffic is simulated by generating 1 vehicle every 6, 4, and 2s, respectively. Pedestrian memory is not used in these experiments and other parameters are set to default values. We run the simulation for all combinations of sensing range and traffic density 5 times for 500 steps with fixed random seeds.
Visualization of the results can be seen in Figure 5. As expected, wait time grows and the number of accidents decreases as both sensing range and traffic density increase. Visibility below 50m generally leads to very short wait times and a very high number of collisions since pedestrians do not see approaching vehicles and start crossing. Above 50m, the mean waiting time continues to grow in all traffic conditions. Note that when pedestrians can see farther ahead, their wait time becomes more variable as it now depends more on the available gaps and the risk-taking characteristics of pedestrians (the size of the gap they will accept).
### _Effect of field of view and memory_
Next, we test how the addition of FoV and memory affects behaviors of pedestrians. We set up a 4-way intersection with 2 lanes in each direction and vary traffic density (600, 900, and 1200 veh/h). We fix the sensing range parameters at default values and test the following combinations of observation mode and memory updates: _all_ - the original Intent-Wait-Cross model with omnidirectional perception [8], _FoV_ - model with the limited field of view and no memory, and _FoV+mem_ - model with both FoV and memory enabled. Additionally, we test the effects of perceptual noise on each condition. As before, each combination is run 5 times for 500 steps with fixed random seeds.
Table I summarizes the results, showing the effects of noise, observation mode and memory updates on decision-making in terms of waiting time and accepted gap, as well as number of collisions between vehicles and pedestrians.
Without perceptual noise, as expected, introduction of FoV increases wait time because pedestrians sequentially scan the road before crossing. Addition of the memory decreases the wait time by reducing the need for repeated scanning. The same trend can be observed in the case of noisy observations. However, the wait time in the _all_ condition is much higher due to the noise accumulation across the whole scene. The effect of noise is reduced by FoV and even more so by memory since updates rely on internal information rather than noisy observations.
Limiting observation increases the chance of riskier decisions, as is apparent in the reduced gap acceptance in the case of _FoV_. Introduction of memory (_FoV+mem_) increases the accepted gap; however, it is still below the condition when the pedestrians see the entire scene (_all_).
As one would expect, with limited view, the number of collisions increases, which can be remedied to some extent by memory. But memory update uncertainty combined with perceptual noise and variable traffic leads to less predictable outcomes. In light traffic conditions, vehicles are more spaced out and have more freedom to move around. This results in more variable behavior, which, combined with a noisy internal representation, increases the chance of collisions.
| Noise | Mode | Wait time (s), light | Wait time (s), med | Wait time (s), heavy | Min TTC (s), light | Min TTC (s), med | Min TTC (s), heavy | Veh-person collisions, light | Veh-person collisions, med | Veh-person collisions, heavy |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| W/o noise | all | 4.7 (8.0) | 6.1 (9.5) | 8.6 (11.8) | 6.8 (4.5) | 6.3 (4.5) | 5.0 (4.3) | 0.0 (0.0) | 0.3 (0.5) | 0.0 (0.0) |
| | FoV | 5.7 (9.0) | 6.5 (9.8) | 8.5 (12.2) | 4.9 (4.4) | 7.4 (2.4) | 4.3 (4.2) | 0.3 (0.5) | 0.3 (0.7) | 0.2 (0.4) |
| | FoV+mem | 5.3 (8.3) | 6.1 (9.5) | 7.5 (10.9) | 5.5 (4.5) | 5.2 (4.4) | 4.6 (4.3) | 0.1 (0.3) | 0.3 (0.5) | 0.2 (0.4) |
| W/ noise | all | 5.7 (8.9) | 8.7 (12.9) | 11.0 (14.5) | 7.1 (4.5) | 6.1 (4.4) | 5.4 (4.3) | 0.5 (0.7) | 1.0 (0.8) | 0.4 (0.8) |
| | FoV | 5.7 (8.7) | 6.7 (10.4) | 7.9 (11.8) | 5.4 (4.4) | 4.4 (4.2) | 4.6 (4.2) | 0.2 (0.4) | 0.3 (0.5) | 0.7 (0.7) |
| | FoV+mem | 5.0 (7.7) | 6.4 (9.8) | 7.6 (11.2) | 5.8 (4.4) | 5.0 (4.4) | 5.1 (4.4) | 0.7 (0.7) | 0.4 (0.5) | 0.4 (0.5) |

TABLE I: Effects of observation mode, memory, and perceptual noise on crossing and collisions. _All_ - omnidirectional perception with no memory as in [8]; _FoV_ - field of view; _FoV+mem_ uses _dynamic_ memory update (vehicle speed and position are calculated taking into account acceleration at the time of last update). Results for each mode are reported with and without perceptual noise.
Fig. 5: Effect of sensing range on (a) wait time and (b) number of collisions between vehicles and pedestrians. Solid lines and shaded regions show mean values and standard deviations.
In congested traffic, vehicles behave more uniformly and more predictably, therefore memory reduces the average number of collisions.
### _Perceptual limitations and crossing behavior_
Here, we look into the effects of perceptual limitations prior to and during the crossing. We use the same setup as in the previous experiment, with medium traffic density and perceptual parameters set to default values. The results are summarized in Table II.
**Head movement** is often seen as a cue for intention to cross [24]. As expected, without internal memory representation, the number of head turns increases because pedestrians repeatedly observe the environment to find a safe gap to cross. This confirms the validity of the proposed perceptual and memory components.
**Distraction during crossing.** In the experiments so far, pedestrians continuously monitored the traffic post crossing decision and reacted to imminent risk by stopping mid-road to wait for another gap. Here, pedestrians complete the crossing without checking traffic, which doubles the number of accidents. Without memory, the initial crossing decision is made based on the final observation, whereas with memory the status of previously observed vehicles is also considered, helping to make a better decision.
## VIII Discussion and future work
In this paper, we argued that for realistic modeling of pedestrian behaviors their perceptual limitations should be taken into account. We proposed the Wait-Intend-Perceive-Cross model with three psychologically-motivated components: limited FoV, working memory, and scanning strategy.
We conducted experiments to validate the proposed components and demonstrated their significant effects on pedestrian behavior, notably waiting times, head movements, gap acceptance, and collision risk. This suggests that perceptual limitations should be considered when modeling pedestrian behaviors for planning in the context of intelligent driving systems. In turn, a new approach to data collection is needed to record information such as pedestrian demographics, head movement, and other relevant visual information. Although we made an effort to maximize the realism of the proposed model, the lack of data limited our ability to validate it and calibrate its parameters.
Extensions to be considered for future work include modeling perceptual limitations of drivers, effects of occlusions, and testing the impact of realistic perceptual models on the performance of prediction algorithms.
Current research on pedestrian behavior understanding focuses on the dynamic aspects of pedestrians and makes strong assumptions about their visual capabilities. For example, it is often assumed that pedestrians can see their surrounding environment omnidirectionally. In reality, however, the human visual system has several limitations, such as a restricted field of view and a limited sensing range, and these affect crossing decisions and pedestrian behavior. Explicitly modeling pedestrian perception makes it possible to understand these effects in more depth. To this end, we propose the Intend-Wait-Perceive-Cross pedestrian behavior model, which includes three new components grounded in findings from behavioral science: a field of view, a working memory, and a scanning strategy. Through various experiments, we investigate the effect of perceptual limitations on safe crossing decisions and the resulting changes in pedestrian behavior |
2305.16502 | Learning When to Ask for Help: Efficient Interactive Navigation via
Implicit Uncertainty Estimation | Robots operating alongside humans often encounter unfamiliar environments
that make autonomous task completion challenging. Though improving models and
increasing dataset size can enhance a robot's performance in unseen
environments, data collection and model refinement may be impractical in every
environment. Approaches that utilize human demonstrations through manual
operation can aid in refinement and generalization, but often require
significant data collection efforts to generate enough demonstration data to
achieve satisfactory task performance. Interactive approaches allow for humans
to provide correction to robot action in real time, but intervention policies
are often based on explicit factors related to state and task understanding
that may be difficult to generalize. Addressing these challenges, we train a
lightweight interaction policy that allows robots to decide when to proceed
autonomously or request expert assistance at estimated times of uncertainty. An
implicit estimate of uncertainty is learned via evaluating the feature
extraction capabilities of the robot's visual navigation policy. By
incorporating part-time human interaction, robots recover quickly from their
mistakes, significantly improving the odds of task completion. Incorporating
part-time interaction yields an increase in success of 0.38 with only a 0.3
expert interaction rate within the Habitat simulation environment using a
simulated human expert. We further show success transferring this approach to a
new domain with a real human expert, improving success from less than 0.1 with
an autonomous agent to 0.92 with a 0.23 human interaction rate. This approach
provides a practical means for robots to interact and learn from humans in
real-world settings. | Ifueko Igbinedion, Sertac Karaman | 2023-05-25T22:08:54 | http://arxiv.org/abs/2305.16502v3 | # Learning When to Ask for Help:
###### Abstract
Robots operating alongside humans often encounter unfamiliar environments that make autonomous task completion challenging. Though improving models and increasing dataset size can enhance a robot's performance in unseen environments, dataset generation and model refinement may be impractical in every unfamiliar environment. Approaches that utilize human demonstration through manual operation can aid in generalizing to these unfamiliar environments, but often require significant human effort and expertise to achieve satisfactory task performance. To address these challenges, we propose leveraging part-time human interaction for redirection of robots during failed task execution. We train a lightweight help policy that allows robots to learn when to proceed autonomously or request human assistance at times of uncertainty. By incorporating part-time human intervention, robots recover quickly from their mistakes. Our best performing policy yields a 20 percent increase in path-length weighted success with only a 21 percent human interaction ratio. This approach provides a practical means for robots to interact and learn from humans in real-world settings, facilitating effective task completion without the need for significant human intervention.
vision-based navigation, imitation learning, reinforcement learning
## I Introduction
Reinforcement learning is a valuable tool for creating autonomous agents for robotic control. These algorithms are increasingly proficient, and have demonstrated the ability to learn more complex tasks, including navigation and interaction within an environment and responding to arbitrary linguistic commands from humans [1]. Furthermore, the development of simulation environments in conjunction with distributed computing allow for efficient online reinforcement learning using a variety of high-resolution sensors and the widespread availability of datasets for these tasks [2].
While these models demonstrate exceptional performance in simulation, challenges arise when deploying them in the real world. In particular, unfamiliar environments may be difficult to generalize to, and collecting more data is often infeasible in real-world scenarios without existing models representative of the target environments [3]. Furthermore, model size can often be a restriction when deploying on embedded robotic systems [4]. State-of-the-art vision-based models often use large neural encoders, making real-time throughput difficult to achieve on small embedded systems. Though real-time mobile models have shown proficiency at learning supervised tasks, these models are typically not utilized within deep reinforcement learning systems, and performance degradation is a likely outcome when attempting to train and deploy models with encoders designed for mobile and embedded systems.
These challenges can be addressed by refining on-the-fly in target environments. Although small models may not perform
Fig. 1: The impact of part-time intervention on autonomous agent execution. During semi-autonomous execution, human actions (shown in blue) are requested by the agent to reduce uncertainty. When agents are empowered with the ability to decide when to ask for help, navigating new environments can be done with ease.
well in general, they can easily be improved by continued training with data representative of the target environment. Approaches have shown promise for refining policies in real-world environments, but the lack of a simulation platform presents a new set of challenges [5]. In simulation, collisions and execution failures can occur an infinite number of times with little concern, while in the real world, safety concerns and hardware longevity often make reinforcement learning impractical. Additionally, simulation environments provide an accurate and straightforward means of implementing reward design. However, in the real world, these rewards must be calculated using on-device sensors and environment feedback. Human assistance can mitigate these issues.
Methods that refine control policies from human demonstration have been successful in transferring new behaviors or refining known behaviors in new environments [6]. However, these methods require an oracle of human demonstrations, which are often derived from fully manual control. Generating enough demonstrations can be tiresome for human operators, and poor human control can be a bottleneck for training well-optimized agents. This pushes us towards hybrid control approaches that involve minimal human intervention.
In this work, we demonstrate how part-time human assistance in interactive robot learning processes can improve task execution without significant manual control or excessive failed episodes. We implement a policy for semi-autonomous control of robots using human input. This learned policy allows autonomous agents to decide when to proceed independently and when to ask for human assistance. We train this policy both with behavioral cloning using a small number of human demonstrations and with reinforcement learning using a simulated human expert. We apply this methodology to the point navigation task in the Habitat simulation environment. The approach yields a 20% increase in path-weighted success, with no additional refinement of the agent's original policy.
## II Related Work
### _Mobile Machine Learning Models_
Over the past two decades, there has been increasing interest in developing machine learning models for small systems, given the ubiquity of mobile devices. Lightweight computer vision models such as MobileNet-V3 [7] and SqueezeNet [8] have demonstrated the ability to learn proficient models from large datasets without sacrificing throughput. These models can serve as mobile replacements for large visual encoders, such as larger ResNet models [9], that may run slowly on mobile systems. Furthermore, mobile models can be optimized for embedded systems to achieve speeds of over 400 frames per second, enabling inference of multiple models in real time [10].
Figure 2 compares the performance of various pre-trained PyTorch classification models on the ImageNet classification dataset [11]. Although smaller models experience a slightly noticeable performance degradation as compared to larger models, their model size is compressed by several orders of magnitude.
### _Robot Learning from Demonstration_
Reinforcement learning is commonly used to optimize policies for navigation tasks, but requires the use of simulation environments for training. In the absence of such environments, demonstration learning has become a powerful tool for transferring new behaviors to robots from human actions. Direct demonstration techniques for navigation tasks allow human agents to directly control learned agents, completing task objectives while enabling the agent to learn through supervised learning approaches, such as behavioral cloning [12]. Once a demonstration dataset is collected from human operation, the desired policy can be optimized to minimize the error between predicted and human actions within the demonstration. However, low-quality human demonstrations can negatively impact the performance of low-level demonstration techniques [13].
Instead of improving the quality of low-level demonstrations, alternative approaches address this weakness by incorporating high-level human input through symbolic representations [14, 15]. Spatial relationships between individuals and the environment can be mapped to predefined symbolic representations corresponding to high-level actions. This approach has shown success in reproducing learned behaviors in novel situations, providing more generalized performance in comparison to pure imitation learning approaches. This approach still poses its own challenges: symbolic representations must be grounded in real-world meanings, and robots need to interpret these actions at a conceptual level. Furthermore, in order to effectively map movement to action, it is necessary to limit the range of motion considered acceptable, and these motions must be recognized accurately in real time.
Fig. 2: Performance of visual classification models vs. model size in megabytes. Mobile models experience a marginally noticeable degradation in performance while compressing model size by multiple orders of magnitude.
### _Human-in-the-loop Robotic Planning_
When navigating unfamiliar or unstructured environments, adding humans into the planning loop can prevent catastrophic failure in unexpected scenarios. Simple approaches allow for humans to remotely supervise execution and assume full manual control of robot agents during perceived failure [16]. More sophisticated approaches incorporate modeling human behavior [17] or robot needs [18]. Learning directly from demonstration can be effective given the use of pre-trained models and the availability of high-performing expert demonstrators, but some tasks are ill-suited for a single human operator to demonstrate. [18], in particular, tackles the multi-agent "reaching through clutter" problem. In that work, multiple robots are supervised by a single human agent, and modeling determines which robot agent needs the most help at any point in time, dispatching the human agent to that robot. In this scenario, expert demonstration is impossible to achieve without multiple, coordinated operators. Instead, this work showed that modeling approaches for robots to interact with the single operator allowed for significant improvement in task performance, supporting the idea that approaches modeling part-time intervention (rather than full-time) are sufficient for performance improvement.
## III Approach
We implement a semi-autonomous system that includes two learned agent policies within a virtual simulation environment. Figure 3 summarizes this system design.
The simulation environment first provides the agent with observations that are processed through a shared encoder to provide features for both the agent and help policies. The human operator provides guidance upon request, and the agent interacts directly with the simulation environment until episode completion.
### _Simulation Platform_
Our chosen simulation environment is the AI Habitat platform [19]. Habitat is a simulation platform for embodied artificial intelligence, including datasets for a variety of navigation tasks grounded in vision and natural language. The navigation task of interest is point navigation [20], which involves providing an exact 2D location in the environment for a robot to navigate towards. Learned agents in this environment utilize low-level control commands that allow for forward motion and orientation changes.
### _Model Design and Training_
We utilize a two-policy approach with shared features to perform the navigation task while providing the opportunity for the agent to request help. For the agent model, we utilize a pre-trained point navigation model provided by the Habitat Simulator that is trained on the Matterport3D dataset point navigation split [21]. This agent utilizes features from RGB imagery as well as point goal features from GPS and compass information (relative to the target destination) as input features. Imagery is passed through a ResNet50 encoder, then combined with point goal features before being passed through a recurrent network. This model was pre-trained using proximal policy optimization [22] and remains frozen throughout the training of the helper policy.
The help policy reuses encoder features from the agent navigation policy as well as point navigation features, in addition to an indicator of time since the last help request. The policy is a lightweight 2-layer perceptron, with each layer having a size of 64.
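For illustration, a minimal PyTorch sketch of such a help head is given below; the 512-dimensional encoder feature size matches the encoder output noted in the feature-selection experiment, while the two-hidden-layer layout and the binary output head are assumptions, one plausible reading of the description above.

```python
import torch
import torch.nn as nn

class HelpPolicy(nn.Module):
    """Lightweight help head reusing features from the frozen navigation agent."""
    def __init__(self, encoder_dim=512, goal_dim=2, path_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(encoder_dim + goal_dim + path_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits: proceed autonomously vs. request help
        )

    def forward(self, encoder_feat, goal_feat, time_since_help):
        x = torch.cat([encoder_feat, goal_feat, time_since_help], dim=-1)
        return self.net(x)
```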
#### Iii-B1 Human Demonstration Training
We first train the help policy using expert human demonstrations. During these demonstrations, the human expert can at any point in time interrupt robot execution during perceived failure. Human interference is used as an indicator for requesting help, and the policy is trained using behavioral cloning. For each trained model, we take a few-shot approach, training for only 3 epochs to avoid overfitting our small dataset of 8 episodes.
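A schematic sketch of this few-shot behavioral cloning loop is shown below; the data-loader layout and the use of a two-class cross-entropy objective over interruption labels are assumptions for illustration.

```python
import torch.nn.functional as F

def train_help_policy_bc(help_policy, optimizer, demo_loader, epochs=3):
    """Few-shot behavioral cloning of the help policy.

    Each batch is assumed to hold the reused features plus a binary label
    (1 where the human expert interrupted execution, 0 otherwise).
    """
    help_policy.train()
    for _ in range(epochs):  # only a few epochs to limit overfitting on 8 episodes
        for enc, goal, t_help, interrupted in demo_loader:
            logits = help_policy(enc, goal, t_help)
            loss = F.cross_entropy(logits, interrupted.long())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```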
#### Iii-B2 Simulating Human Experts
Gathering human demonstrations for behavioral cloning is costly. We also explore simulating a human operator using an expert path planner. We use the shortest path planner built into the Habitat Simulator as an expert planner. Instead of optimizing the policy using behavioral cloning, we design a reward function that maximizes future rewards while minimizing human contribution.
Fig. 3: Semi-autonomous system design. The simulation environment generates observations that are processed and given to the autonomous agent. This agent communicates with a help policy that returns a set of ideal human actions when requested. The agent then directly interacts with the environment using its own predictions in conjunction with part-time human direction.
Our reward function considers the instantaneous change in distance to the goal, scaled by a factor inversely related to the product of the number of help requests and the length of the current path provided by the human. We apply discounting to maximize future rewards. The help reward function is provided in Equation 1. The total reward combines the help reward with the current success weighted by path length and subtracts a scaled ratio of human actions during the current episode. The complete reward function is provided in Equation 2. We train the help policy using proximal policy optimization for 250,000 timesteps within the training simulation environment and evaluate alongside a human operator.
\[r_{help}(\theta,r_{nav})=\frac{1}{1+C_{r}c_{p}}\sum_{i=0}^{t}\frac{r_{nav}( \theta_{i})-r_{nav}(\theta_{i-1})}{1+\lambda_{d}} \tag{1}\]
Equation 1: Cumulative help reward function at time \(t\). \(r_{nav}\) is the distance-to-goal reward during point navigation. \(C_{r}\) is the total number of help requests during the current episode, \(c_{p}\) is the length of the current path if the human operator is in control. \(\lambda_{d}\) is the discounting factor, which is set to 0.99 during training.
\[r_{total}=r_{help}(\theta,r_{nav})+r_{spl}-\lambda_{h}\frac{C_{h}}{C_{h}+C_{a}} \tag{2}\]
Equation 2: Total reward function combining the cumulative help reward with success weighted by path length (\(r_{spl}\)) and subtracting the number of human actions (\(C_{h}\)) divided by the sum of total human and agent actions (\(C_{h}+C_{a}\)). \(\lambda_{h}\) is the weight applied to the ratio of human actions, set to 0.5 during training.
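For clarity, a small Python sketch of these two reward terms is given below; it follows Equations 1 and 2 as written, with argument names chosen for illustration.

```python
def help_reward(nav_rewards, num_requests, human_path_len, lam_d=0.99):
    """Cumulative help reward (Eq. 1): discounted successive changes in the
    distance-to-goal reward, scaled down as help requests and the length of
    the human-provided path grow."""
    scale = 1.0 / (1.0 + num_requests * human_path_len)
    diffs = [(nav_rewards[i] - nav_rewards[i - 1]) / (1.0 + lam_d)
             for i in range(1, len(nav_rewards))]
    return scale * sum(diffs)

def total_reward(nav_rewards, num_requests, human_path_len,
                 r_spl, human_actions, agent_actions, lam_h=0.5):
    """Total reward (Eq. 2): help reward plus SPL minus a scaled human-action ratio."""
    r_help = help_reward(nav_rewards, num_requests, human_path_len)
    human_ratio = human_actions / (human_actions + agent_actions)
    return r_help + r_spl - lam_h * human_ratio
```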
### _Experimental Setup_
We evaluate a pre-trained autonomous point navigation agent together with multiple help policies trained using combinations of encoder features, point navigation goal features, and path length features. The point navigation goal feature is the instantaneous difference in distance from the navigation goal. The path length feature represents the length of the path since the last help request. Each help policy trained using behavioral cloning utilizes human demonstrations from 1% of the training split of the Habitat point navigation dataset (8 episodes) and is evaluated on 10 episodes from the validation split. The help policy trained using a simulated expert executes at least 250,000 time steps, yielding around 3000 episodes. A human operator is present for each individual training episode for behavioral cloning and every evaluation episode.
## IV Results
We perform quantitative experiments comparing the impact of feature selection, human contribution and optimization technique on the performance of the complete semi-autonomous system. Our evaluation metrics are:
1. Success defined by achieving a distance of less than \(2\times agent\_width\) (0.2m) away from the goal point when executing the "stop" action
2. Success Weighted by Path Length (SPL) [23], which scales the success indicator by the actual path's deviation from the optimal path (a short sketch of this metric follows the list).
3. Human contribution, which is the ratio of the path affected by human interaction.
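The sketch below computes the SPL metric from item 2; it follows the standard definition given in [23], with the episode tuple layout chosen for illustration.

```python
def mean_spl(episodes):
    """Success weighted by Path Length averaged over evaluation episodes.

    `episodes` holds (success, shortest_path_length, actual_path_length) tuples,
    with success being 0 or 1.
    """
    total = 0.0
    for success, l_opt, l_taken in episodes:
        total += success * l_opt / max(l_taken, l_opt)
    return total / len(episodes)
```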
As a baseline, the fully autonomous agent (zero human contribution) is successful in navigating to the goal destination roughly 88% of the time on the training set and 70% of the time on the validation set, with SPLs of 0.66 and 0.56, respectively.
### _Experiment 1: Human Contribution_
We first evaluate the effect of human contribution on the performance of the semi-autonomous system. We keep the help policy fixed, using a policy trained with all features. At each request for help, the human can provide an intermediate path that is at most \(M\) time steps long. We examine values of \(M\in\{5,25,50\}\). As expected, human contribution always improves performance against the baseline, with the lowest-performing semi-autonomous system achieving an overall success improvement of 20% and an SPL improvement of 8%. This system incorporates human intervention 30% of the time during validation episodes, limited to 5 time steps at each human interaction. Surprisingly, there are diminishing returns as we increase the number of time steps allowed during human intervention.
When we double the number of time steps from 25 to 50, we actually observe a 12% reduction in training success and a
| Help Policy | Max Steps | Train SPL \(\uparrow\) | Train Success \(\uparrow\) | Train Human Contribution \(\downarrow\) | Val SPL \(\uparrow\) | Val Success \(\uparrow\) | Val Human Contribution \(\downarrow\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Fully Autonomous (Baseline) | - | 0.66 | 0.88 | 0.0 | 0.56 | 0.7 | - |
| Point + Path + Encoder Features | 5 | 0.70 | 0.88 | **0.29** | 0.64 | 0.9 | **0.30** |
| Point + Path + Encoder Features | 25 | **0.9** | **1.0** | 0.37 | **0.81** | **1.0** | 0.367 |
| Point + Path + Encoder Features | 50 | 0.8 | 0.88 | 0.46 | 0.8 | 0.9 | 0.52 |

TABLE I: Examining the impact of human contribution on the overall performance of the semi-autonomous system. At each request for help, the human can contribute for as many as \(M\) steps (as shown by the Max Steps column). A natural consequence of limiting the human contribution is a reduction in overall success, however, diminishing returns are shown at a maximum of 50 time steps.
10% reduction in validation success, with 10% and 1% drops in SPL, respectively. During validation, the system with \(M=25\) incorporated human intervention 36% of the time, while the system with \(M=50\) incorporated human intervention 52% of the time. These results are summarized in Table I. Though it could be presumed that increasing the amount of focused, expert human intervention would improve performance, the decrease in performance indicates that the human collaborator may experience fatigue over time or simply be less proficient at simple navigation tasks as compared to the autonomous agent. This could suggest that high-level, planning-focused interaction may be optimal for semi-autonomous navigators.
### _Experiment 2: Feature Selection_
Next, we evaluate the effect of feature selection on model performance. As stated, possible features include visual features from the navigation agent's encoder, differential point goal features describing the change in distance to the destination, and path features describing the time since the last help request. Table II summarizes the results of training each of these policies. While each feature set outperforms the baseline by 18 to 32 percent, it is important to note the dimensionality of the feature set. The encoder returns features of length 512, while differential point goal features have a dimensionality of 2, and path features have a dimensionality of 1. However, encoder features perform the least optimally on the dataset.
Agents using the help policy trained on encoder features alone request help from human agents at a slower rate than the other models (about 6 percent less), suggesting higher visual confidence in their predicted directions. This result could also be explained by the information provided by path and goal features instead of visual features.
### _Experiment 3: Human Simulation_
Finally we compare the results of training using expert demonstrations to the simulated shortest path expert. Table III shows our results. After training a help policy with a simulated expert, during human evaluation, the human contribution is reduced by 0.15 while maintaining a perfect success rate. Path length weighted success also experiences minimal degradation.
When inspecting paths visually, the simulated-expert policy tends to request help towards the beginning of execution and is conservative in its requests as the episode length increases. This can be explained by the reward function design which heavily penalizes multiple help requests during an episode. Figure 4 shows an example comparing the policy trained with demonstrations against the policy trained with a simulated expert.
### _Discussion_
#### Iv-D1 Diminishing returns for human action
Figure 5(a) shows examples of optimal policy execution. Ideally, we want minimal points of intervention that prevent deviation from the desired path. One interesting result from this work is that
| Help Policy | Validation SPL \(\uparrow\) | Validation Success \(\uparrow\) | Validation Human Contribution \(\downarrow\) |
| --- | --- | --- | --- |
| Fully Autonomous | 0.56 | 0.7 | - |
| Point + Path + Encoder | **0.88** | **1.0** | 0.367 |
| Point + Path | 0.812 | **1.0** | 0.358 |
| Encoder | 0.746 | **1.0** | **0.293** |

TABLE II: Comparison of policy performance given different feature selections. In each of these experiments, only 25 steps of instruction can be provided at a time.
Fig. 4: Policy trained with behavioral cloning (top) vs policy trained with a simulated human expert and hand-crafted reward function. The simulated expert tends to request help at the start of execution and is more conservative in its request strategy.
| Help Policy | Validation SPL \(\uparrow\) | Validation Success \(\uparrow\) | Validation Human Contribution \(\downarrow\) |
| --- | --- | --- | --- |
| Fully Autonomous | 0.56 | 0.7 | - |
| Expert Demonstrations | **0.88** | **1.0** | 0.367 |
| Expert Simulation | 0.84 | **1.0** | **0.21** |

TABLE III: Simulated expert policy performance results. The policy trained with a simulated expert experiences minimal degradation in performance while maintaining perfect success during evaluation with a human operator.
incorporating more human action does not always improve performance. A fivefold increase in intervention path length (from 5 to 25 steps) results in a 20% improvement in SPL, while doubling that length to 50 actually results in a decrease in performance. Examining this visually, we can see that this is related to human deviation from the optimal path. Figure 5(b) shows examples of wasteful execution by the human operator. Although the agent is requesting more help and moving towards the goal location, the wastefulness of the human path directly results in an increase in average path length.
Diminishing returns can also be observed when policies abuse the help request. Figure 5(c) shows examples of the path-goal feature policy, whose behavior led to a significant number of help requests and consequently a large proportion of human intervention during many evaluation episodes. The low dimensionality of the path-goal features may directly result in the homogeneity of the feature representation, as movement in this environment is fixed to units of distance. This suggests that incorporating more variable features, like visual encoder features, may be necessary for optimal performance.
#### Iv-D2 Drawbacks of the help policy design
Sometimes relying solely on the robot's help policy to decide when to ask for help is not the best approach. This approach does not consider that the human operator can still provide valuable input during inference. The third episode in Figure 5(c) shows an example of this behavior. This suggests that making the human a collaborator rather than a source of data may be a more effective implementation. This can be implemented by simply allowing for human intervention during inference as we did during training, but more nuanced approaches, including high-level communication via language or gesture, could also be effective.
Additionally, separating the help policy from the navigation policy may not have been the best choice. Within our optimization paradigm, learning the help policy does not modify encoder weights. Optimizing visual features in conjunction with our help policy may also improve the quality of our visual encoder features within our target environment.
When training our simulated expert policy, we use proximal policy optimization with the reward function defined in equation 2. Though this reward design results in a more optimal ratio of human interaction, SPL is not improved in comparison to the policies trained with human demonstrations.
Lastly, we include a weak representation of past rewards through a simple differential reward function. However, we did not experiment with more sophisticated reward designs that could find more optimal policies. Incorporating more complex recurrent models and inputting raw observations may yield better policy performance.
## V Conclusion
In this work, we presented a semi-autonomous system for interactive point-goal navigation. Motivated by smaller robotics, our goal was to improve real-time model performance through lightweight, part-time human interaction. We developed an external help policy that re-purposes visual navigation features as well as point-goal and path length features to learn when to ask for help in unfamiliar environments.
Our best performing policy using this approach yielded 100% success on our validation set, with over a 20% increase in success-weighted path length compared to the baseline (0.84 vs. 0.64) with only 21% intervention time. This approach shows promise for implementing lightweight machine learning systems that leverage part-time human operators for improved autonomous performance.
## VI Acknowledgment
Research was sponsored by the United States Air Force Research Laboratory and the Department of the Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The
Fig. 5: Examples of semi-autonomous execution. Human actions shown in blue.
views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of the Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
When robots operate alongside humans, they often encounter unfamiliar environments that make autonomous task completion difficult. Improving models and increasing dataset size can enhance a robot's performance in unseen environments, but data collection and model refinement are not always feasible. Approaches that make use of human demonstrations can improve refinement and generalization, but collecting sufficient demonstration data requires considerable time and effort. Interactive approaches that allow humans to correct robot actions in real time give humans the opportunity to intervene in robot behavior, but the intervention policies are based on explicit factors related to state and task understanding and may therefore lack generality. To address these challenges, we train a lightweight interaction |
2307.00519 | Image Background Serves as Good Proxy for Out-of-distribution Data | Out-of-distribution (OOD) detection empowers the model trained on the closed
image set to identify unknown data in the open world. Though many prior
techniques have yielded considerable improvements in this research direction,
two crucial obstacles still remain. Firstly, a unified perspective has yet to
be presented to view the developed arts with individual designs, which is vital
for providing insights into future work. Secondly, we expect sufficient natural
OOD supervision to promote the generation of compact boundaries between the
in-distribution (ID) and OOD data without collecting explicit OOD samples. To
tackle these issues, we propose a general probabilistic framework to interpret
many existing methods and an OOD-data-free model, namely
\textbf{S}elf-supervised \textbf{S}ampling for \textbf{O}OD \textbf{D}etection
(SSOD). SSOD efficiently exploits natural OOD signals from the ID data based on
the local property of convolution. With these supervisions, it jointly
optimizes the OOD detection and conventional ID classification in an end-to-end
manner. Extensive experiments reveal that SSOD establishes competitive
state-of-the-art performance on many large-scale benchmarks, outperforming the
best previous method by a large margin, \eg, reporting \textbf{-6.28\%} FPR95
and \textbf{+0.77\%} AUROC on ImageNet, \textbf{-19.01\%} FPR95 and
\textbf{+3.04\%} AUROC on CIFAR-10, and top-ranked performance on hard OOD
datasets, \ie, ImageNet-O and OpenImage-O. | Sen Pei | 2023-07-02T09:02:53 | http://arxiv.org/abs/2307.00519v2 | # End-to-End Out-of-distribution Detection
###### Abstract
Out-of-distribution (OOD) detection empowers the model trained on the closed set to identify unknown data in the open world. Though many prior techniques have yielded considerable improvements, two crucial obstacles still remain. Firstly, a unified perspective has yet to be presented to view the developed arts with individual designs, which is vital for providing insights into the related directions. Secondly, most research focuses on the post-processing schemes of the pre-trained features while disregarding the superiority of end-to-end training, dramatically limiting the upper bound of OOD detection. To tackle these issues, we propose a general probabilistic framework to interpret many existing methods and an OOD-data-free model, namely **S**elf-supervised **S**ampling for **O**OD **D**etection (**SSOD**), to unfold the potential of end-to-end learning. SSOD efficiently exploits natural OOD signals from the in-distribution (ID) data based on the local property of convolution. With these supervisions, it jointly optimizes the OOD detection and conventional ID classification. Extensive experiments reveal that SSOD establishes competitive state-of-the-art performance on many large-scale benchmarks, where it outperforms the most recent approaches, such as KNN (Sun et al., 2022), by a large margin, e.g., **48.99% \(\rightarrow\) 35.52%** on _SUN_ at FPR95.
## 1 Introduction
Out-of-distribution (OOD) detection has been recognized as crucial for the deployment of machine learning systems in reality, _e.g._, computer vision applications. Traditional neural networks excel at handling in-distribution (ID) data, which is similar to the training samples. However, real-world scenarios cannot always adhere to the independent and identically distributed rules, _i.e._, the _i.i.d._ conditions. That means the input data can vary significantly from the training images in terms of domains and categories. Thus, extensive efforts have been devoted to detecting whether input samples are OOD, ultimately bolstering classifier stability and reliability.
Current OOD detection methods primarily rely on the perspective of statistical difference, _i.e._, observing distinctions between the pre-trained features of ID/OOD samples. These methods tend to use heuristic tricks to rule out OOD data in a two-stage manner, _i.e._, pre-training and post-processing, which suffer from the following drawbacks compared to the end-to-end fashion. Firstly, the frozen model weights are obtained on the ID classification task with limited OOD supervision, and therefore, the extracted features inherently carry bias which is not distinguishable enough for identifying OOD data (cf. Figure 2). Secondly, the two-stage design yields poor scalability and efficiency since it is not suitable for scenarios without pre-trained models, _e.g._, given a practical application and its
corresponding dataset, a two-stage model incurs a training cost comparable to that of the end-to-end method while only obtaining biased features and fair-to-good detection results.
To tackle the issues above, this paper interprets the OOD detection task with a unified probabilistic framework (cf. Section 3), which can broadly subsume many previous individual designs. To be concrete, our framework divides the multi-category classification problem into two tasks: conventional ID classification and OOD detection. According to the theoretical analysis, the deficiency encountered by traditional neural networks in identifying OOD data arises from the absence of a critical component, _i.e._, an OOD factor that estimates the likelihood of images belonging to the in-distribution. Furthermore, building on this general foundation, we present **S**elf-supervised **S**ampling for **O**OD **D**etection (**SSOD**), an end-to-end trainable framework **w/o** resorting to explicit OOD annotations (cf. Section 4). In contrast to existing approaches that synthesize OOD features, SSOD directly samples **real** OOD supervision from the background of training images by itself, _i.e._, self-supervised, getting rid of the constraints resulting from the lack of labeled OOD data and the deviation introduced within the OOD feature synthesis stage. Extensive experiments demonstrate that the joint end-to-end training manner significantly improves the OOD detection performance and guides the model to focus more on object-discriminative characteristics instead of meaningless background information (cf. Figure 3). The major contributions of this paper are summarized as follows.
* We establish a general probabilistic framework to interpret the OOD detection, where various OOD methods can be analyzed comprehensively, with main differences and key limitations clearly identified.
* To mitigate the negative impacts from pre-trained features, we design an end-to-end trainable model, namely **S**elf-supervised **S**ampling for **O**OD **D**etection (**SSOD**), to sample real OOD signals from the ID images. SSOD can avoid the labor-intensive work of labeling/cleaning sufficient OOD images.
* SSOD is evaluated across various benchmarks and model architectures for OOD detection, where it outperforms current state-of-the-art approaches by a large margin, _e.g._, improving KNN (Sun et al., 2022) w/ and w/o contrastive learning with -17.71% and -21.97% FPR95 on Places (Zhou et al., 2018), and Energy (Liu et al., 2020) with +2.01% AUROC and -15.10% FPR95 on SUN (Xiao et al., 2010). The scalability and superiority of SSOD promise its potential to be a starting point for solving the OOD detection problem.
## 2 Related work
We give a brief overview of the observed paths in promoting the detection of OOD data.
**Score-based posterior calibration.** This line of research aims to find differences between the ID and OOD data, thus designing model-specific discriminative functions to identify the OOD samples. The related work includes ODIN (Liang et al., 2018), LogitNorm (Wei et al., 2022), GradNorm (Huang et al., 2021), ReAct (Sun et al., 2021), Energy (Liu et al., 2020), and CIDER (Ming et al., 2023), to name a few. Generally, these methods are usually pre- or post-processing schemes that demonstrate
Figure 1: Self-supervised Sampling for Out-of-distribution Detection (SSOD). We adopt a self-supervised sampling scheme to train the OOD discrimination branch, _i.e._, the OOD head, with the supervised signals injected by the CLS head. The image blocks in green/red/gray are ID/OOD/invalid signals identified by the CLS head. SSOD jointly trains these two branches.
no need for retraining the neural networks. Although these methods report considerable performance improvements and are sometimes training-efficient, they do not necessarily lead to significant generalization ability. For example, ReAct (Sun et al., 2021) investigates the distinct behaviors of ID and OOD data after the \(\mathrm{ReLU}\) function, and therefore, it fails to perform on architectures adopting other activations, such as GELU, \(\mathrm{Sigmoid}\), and \(\mathrm{Tanh}\). Similarly, ODIN (Liang et al., 2018) investigates post-processing schemes specially designed for Softmax, _i.e._, the temperature scaling. These specific designs promote OOD detection but limit the model's scalability. In contrast, our SSOD does not suffer from this limitation as it addresses OOD detection directed by Bayes' theorem, which holds in general scenarios.
**Auxiliary supervision from synthetic OOD data.** The lack of OOD supervision is a critical factor leading to unsatisfactory performance in OOD detection. Thus, significant interest has been raised in generating synthetic OOD data. Existing approaches tackling this issue can be roughly divided into two manners: feature generation and image generation. The feature generation manner samples OOD features from the ID boundary, such as VOS (Du et al., 2022), or generates them using a GAN, such as BAL (Pei et al., 2022). In contrast, image generation incurs a more expensive training cost since it directly generates the OOD images, as in Conf (Lee et al., 2018), SBO (Moller et al., 2021), MG-GAN (Dendorfer et al., 2021), NAS-OOD (Bai et al., 2021), CODEs (Tang et al., 2021), and VITA (Chen et al., 2022). In summary, existing methods either employ unrealistic OOD supervision, as they only consider the approximated feature space, or are costly due to the generation in the original image space. Unlike prior arts, our proposed SSOD avoids both limitations by utilizing the universal local property of neural networks, extracting realistic OOD supervision from the ID images without generation cost.
## 3 Probabilistic framework for OOD detection
In this section, we first introduce the unified probabilistic OOD framework and then revisit existing OOD detection methods from the view.
### Probabilistic OOD detection
The problem of OOD detection can be defined in various ways. In this paper, we formalize the task as a binary classification problem. Concretely, we consider two **disjoint** distributions on the data and label space, denoted as \(\mathcal{S}_{\mathrm{ID}}\times\mathcal{Y}_{\mathrm{ID}}\) and \(\mathcal{S}_{\mathrm{OOD}}\times\mathcal{Y}_{\mathrm{OOD}}\), representing the ID distribution and OOD distribution respectively. We note that \(\mathcal{Y}_{\mathrm{ID}}\) and \(\mathcal{Y}_{\mathrm{OOD}}\) have no overlap, _i.e._, \(\mathcal{Y}_{\mathrm{ID}}\cap\mathcal{Y}_{\mathrm{OOD}}=\varnothing\). OOD detection aims to train a model, which can effectively distinguish the source distribution of a given image \(x\). Moreover, for \(x\in\mathcal{S}_{\mathrm{ID}}\times\mathcal{Y}_{\mathrm{ID}}\), it is also expected to correctly predict its corresponding category with a classifier denoted as \(f(\cdot)\).
Supposing \(x\) is a given image sampled from the open image distribution \(\mathcal{S}=\mathcal{S}_{\mathrm{ID}}\cup\mathcal{S}_{\mathrm{OOD}}\), and \(\mathcal{W}_{1},\mathcal{W}_{2},...,\mathcal{W}_{M}\) are the sets of the \(M\) ID categories, we can obtain the following formula to compute \(P(x\in\mathcal{W}_{i}|x\in\mathcal{S})\) based on the law of total probability,
\[P(x\in\mathcal{W}_{i}|x\in\mathcal{S}_{\mathrm{ID}})P(x\in\mathcal{S}_{ \mathrm{ID}}|x\in\mathcal{S})+P(x\in\mathcal{W}_{i}|x\in\mathcal{S}_{\mathrm{ OOD}})P(x\in\mathcal{S}_{\mathrm{OOD}}|x\in\mathcal{S}). \tag{1}\]
As \(P(x\in\mathcal{W}_{i}|x\in\mathcal{S}_{\mathrm{OOD}})=0\), Eqn.(1) leads to a practical conclusion:
\[P(x\in\mathcal{W}_{i}|x\in\mathcal{S})\triangleq P(x\in\mathcal{W}_{i}|x\in \mathcal{S}_{\mathrm{ID}})P(x\in\mathcal{S}_{\mathrm{ID}}|x\in\mathcal{S}), \tag{2}\]
where \(\triangleq\) indicates the conditional equation. The conditional probability \(P(x\in\mathcal{W}_{i}|x\in\mathcal{S}_{\mathrm{ID}})\) in Eqn.(2) is exactly the classification problem of ID data, termed the **ID factor**. \(P(x\in\mathcal{S}_{\mathrm{ID}}|x\in\mathcal{S})\), namely the **OOD factor**, corresponds to the OOD detection task. Generally, OOD detection techniques aim to optimize the OOD factor without affecting the ID classification performance.
### Revisit OOD methods with the probabilistic view
We interpret several classic OOD detection techniques from the perspective of our proposed probabilistic framework and find that most OOD detection methods hold \(P(x\in\mathcal{W}_{i}|x\in\mathcal{S}_{\mathrm{ID}})=f_{i}(x)\), _i.e._, the _i-th_ dimension of the classifier's output activated by Softmax function. Thus, the crucial point is how to compute the OOD factor \(P(x\in\mathcal{S}_{\mathrm{ID}}|x\in\mathcal{S})\).
**Methods based on logits**, _e.g._, Max-Softmax Probability (MSP) (Hendrycks, 2017), which directly employs the Softmax output of classifiers as the ID/OOD score, aiming to distinguish them with classification confidence. Concretely, given image \(x\), MSP uses the following expressions to depict the procedure of OOD detection:
\[x\to f(\cdot)\to\begin{cases}x\in\mathcal{S}_{\text{OOD}},&\max f(x)<\gamma\\ x\in\mathcal{S}_{\text{ID}},&\max f(x)\geq\gamma\end{cases}. \tag{3}\]
Intuitively, MSP expects the classifier \(f(\cdot)\) to assign higher confidence, _i.e._, \(\max f(x)\) to ID samples and lower of that to the OOD. Obviously, for MSP, the OOD factor is built as follows:
\[P(x\in\mathcal{S}_{\text{ID}}|x\in\mathcal{S})=P(\max f(x)\geq\gamma). \tag{4}\]
**Methods based on features** try to distinguish ID/OOD based on their deep features extracted by the backbone termed as \(h(\cdot)\), including ReAct (Sun et al., 2021), BAL (Pei et al., 2022), VOS (Du et al., 2022), and KNN (Sun et al., 2022), _etc_. Taking ReAct as an example, it builds the OOD factor \(P(x\in\mathcal{S}_{\text{ID}})\) in a hard threshold manner with linear projection, depicted as:
\[P(x\in\mathcal{S}_{\text{ID}}|x\in\mathcal{S})=P(\mathbf{W}^{\top}\mathrm{ ReAct}(h(x),c)+\mathbf{b}\geq\gamma), \tag{5}\]
where \(\mathbf{W}^{\top}\) and \(\mathbf{b}\) are the weight matrix and bias vector, \(\mathrm{ReAct}(h(x),c)=\min\{h(x),c\}\) is an element-wise truncation function, and \(\gamma\) is a hard threshold. In contrast to OOD-synthesis-free schemes like ReAct and KNN, BAL and VOS generate ID/OOD supervision in the feature space to optimize the ID/OOD classifier with \(P(x\in\mathcal{S}_{\text{ID}}|x\in\mathcal{S})=\sigma(d(h(x)))\), where \(d(\cdot)\) is a discriminator.
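To make the two OOD factors above concrete, a short PyTorch-style sketch is given below; the weight-matrix shape (feature dimension by number of classes) and the use of the energy score as the scalar reduction for Eq. (5) are assumptions made for illustration, the latter following the original ReAct method.

```python
import torch
import torch.nn.functional as F

def msp_is_id(logits, gamma):
    """MSP-style OOD factor (Eqs. 3-4): the input is treated as ID when the
    maximum softmax confidence exceeds the threshold gamma."""
    conf = F.softmax(logits, dim=-1).max(dim=-1).values
    return conf >= gamma

def react_is_id(features, weight, bias, c, gamma):
    """ReAct-style OOD factor (Eq. 5): clip penultimate features at c, project
    with the classifier weights, and threshold a scalar score of the result."""
    clipped = torch.clamp(features, max=c)     # element-wise min{h(x), c}
    logits = clipped @ weight + bias           # W^T ReAct(h(x), c) + b
    score = torch.logsumexp(logits, dim=-1)    # energy score as the scalar reduction
    return score >= gamma
```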
In summary, most OOD methods approximate the OOD factor \(P(x\in\mathcal{S}_{\text{ID}}|x\in\mathcal{S})\) by \(P(f(x)\in f(\mathcal{S}_{\text{ID}})|f(x)\in f(\mathcal{S}))\) as MSP, or \(P(h(x)\in h(\mathcal{S}_{\text{ID}})|h(x)\in h(\mathcal{S}))\) like ReAct, BAL, VOS, and KNN. We note here that \(f(x)\) is the Softmax output, and \(h(x)\) is the feature extracted by the backbone. Nevertheless, there is a significant bias introduced in \(f(\mathcal{S})\) and \(h(\mathcal{S})\) as \(f(\cdot)\) and \(h(\cdot)\) are trained for the ID classification. It adversely affects the discrimination of ID and OOD data (cf. Figure 2, left). Furthermore, generated OOD signals in the feature space (_e.g._, BAL and VOS) do not necessarily lead to the existence of a corresponding natural OOD image. Consequently, its effectiveness in practical scenarios of the open world may be limited. To remove these obstacles, we propose to _sample OOD supervision from the ID images and optimize the OOD factor directly_.
## 4 Self-supervised Sampling for OOD Detection (SSOD)
In this section, we propose a **S**elf-supervised **S**ampling solution to tackle the **O**OD **D**etection problem, termed **SSOD**, which can estimate the OOD factor directly without resorting to explicit OOD samples. The design of SSOD is inspired by the local property, _i.e._, _locality_, of convolution networks, as discussed below.
Figure 2: The t-SNE visualizations of deep features for the iNaturalist dataset. **Left**: features extracted by the conventional ResNet-50. **Right**: features extracted by our SSOD. The green/red dots represent ID/OOD features, _i.e._, the ImageNet/iNaturalist. SSOD improves the feature’s ID/OOD discriminability notably. The overlap of ID/OOD features reflects the false positive rate (FPR).
### Inspiration of SSOD
Prior studies, _e.g._, Redmon et al. (2016) and Carion et al. (2020), have demonstrated that traditional neural networks are capable of retaining spatial information. Specifically, a position of the feature map reflects the corresponding position in the input image. Thus, we may expect the potential to extract the background information from the feature maps, _which can be regarded as the natural OOD samples_. However, a question arises: _How to design an OOD block sampler to select the positions representing background information from the feature maps?_
In Figure 3 (a), the ResNet-50 trained on ImageNet downsamples the input image and yields a corresponding feature map (\(\mathbb{R}^{H\times W\times C}\)), where each image block is projected to a feature vector (\(\mathbb{R}^{C}\)) located at the corresponding position. The classification head reports the category for each feature vector. We highlight the correctly classified blocks with over 40% confidence in Figure 3 (a) and (b). The results suggest that for blocks contained in the main objects, the confidence scores are much higher, while for the backgrounds far away from the main objects, their confidence scores are extremely low, _i.e._, at least lower than 40%. _The confidence distribution on the feature map stems from the limited receptive fields of the cascaded convolution layers, i.e., each position of the feature map only has access to a local region of the input image._ Inspired by this observation, we propose the **S**elf-supervised **S**ampling for **O**OD **D**etection (SSOD) based on the confidence score of each block.
### Formulation of SSOD
For a feature map within \(\mathbb{R}^{C\times H\times W}\) (_i.e._, the channel, height, and width) produced by the neural networks, we can apply the classifier along spatial axes and obtain a confidence score map within \(\mathbb{R}^{M\times H\times W}\), where \(M\) is the number of categories in ID data. The blocks with a low confidence score, _i.e._, lower than 5%, on the ground-truth category, are recognized as OOD samples as highlighted in Figure 3 (c) and (d). Symmetrically, an ID block sampler selects some blocks with high confidence scores, _i.e._, greater than 95%, as the ID samples (cf. Figure 3, the green blocks) besides the global average pooling feature, which helps to balance the positive (ID) and negative (OOD) samples.
Formally, we use \(h(\cdot)\), \(f_{cls}(\cdot)\), and \(f_{ood}(\cdot)\) to denote the backbone removing the classification head, the multi-category classification head, and the binary ID/OOD discrimination head, respectively. Given an input image \(x\) with label \(y\), \(X^{C\times H\times W}=h(x)\) is the feature map. The prediction result of the classification model is:
\[\hat{y}=f_{cls}(\mathrm{GAP}(h(x)))=f_{cls}(\mathrm{GAP}(X^{C\times H\times W })), \tag{6}\]
where \(\mathrm{GAP}\) is the global average pooling operation on the spatial dimensions. Similarly, when applying \(f_{cls}\) on each block of \(X^{C\times H\times W}\) without pooling operation, we can get the confidence score map \(\overline{y}^{MHW}=f_{cls}(X^{C\times H\times W})\) within \(\mathbb{R}^{M\times H\times W}\). Moreover, we pick the confidence along
Figure 3: **(a)**: The image blocks in blue are recognized as penguins with over 40% confidence by the vanilla ResNet-50 classifier. **(b)**: The SSOD counterpart of (a). **(c)**: The image blocks in green and red are sampled as ID and OOD supervision with over 95% confidence by ResNet. **(d)**: The SSOD counterpart of (c). It is noteworthy that SSOD is motivated by the local property of conventional networks as illustrated in (a), and the results depicted in (b) reveal that the local property is enhanced by the joint training.
the target axis, _e.g._, if the target label of \(x\) is \(j\), then we collect the confidence along the _j-th_ axis of \(M\), yielding a target confidence map within \(\mathbb{R}^{H\times W}\), _i.e._, \(\overline{y}^{HW}\). We use the ID/OOD sampler to select blocks with high/low scores on the target label as the ID/OOD supervision. Concretely, for \(i\in\{1,2,3,...,HW\}\), we obtain the following self-supervised OOD labels from cls head:
\[y_{i}^{ood}=\begin{cases}0,&\overline{y}_{i}^{HW}<1-\gamma\\ 1,&\overline{y}_{i}^{HW}\geq\gamma\\ \text{N/A},&1-\gamma\leq\overline{y}_{i}^{HW}<\gamma\end{cases}, \tag{7}\]
where \(\overline{y}^{HW}\) indicates the predicted confidence of each image block belonging to the target category, and \(\gamma\) is a confidence threshold, _e.g._, 95%. Remind that the image blocks assigned with the positive label are highlighted as green in Figure 3, and the negative blocks are marked using red. We drop the left image blocks (_i.e._, N/A in Eqn.(7)), and therefore, they provide no ID/OOD signals during the training. With the OOD Head, we obtain the ID/OOD prediction (\(\hat{y}^{ood}\)):
\[\hat{y}^{ood}=f_{ood}(X^{C\times H\times W}). \tag{8}\]
Since only a part of the image blocks is selected as ID/OOD supervision in Eqn.(7), consequently, the loss is performed on the corresponding predicted results in Eqn.(7) and Eqn.(8). The overall objective of SSOD is formulated with the cross entropy loss (CE):
\[\mathcal{L}=\mathrm{CE}(\hat{y},y)+\alpha\mathrm{CE}(\hat{y}^{ood},y^{ood}), \tag{9}\]
where \(\alpha\) is a balance parameter. During the training/inference phase, the OOD factor of input images can be calculated as follows:
\[P(x\in\mathcal{S}_{\mathbb{ID}})=\mathrm{Sigmoid}(f_{ood}(\mathrm{GAP}(X^{C \times H\times W}))), \tag{10}\]
where \(\mathrm{Sigmoid}\) function is used to predict the probability of input image belonging to the ID data. With the proposed SSOD above, we can train the OOD detection branch end-to-end with realistic OOD supervisions sampled from the blocks of ID data as illustrated in Figure 1.
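The following sketch summarizes Eqns. (7)-(10) in PyTorch-style code: blocks are labeled from the target-class confidence map, the OOD head is trained jointly with the classifier, and the ID probability at inference is the sigmoid of the OOD head applied to the pooled feature. All module and function names (`cls_head`, `ood_head`, etc.) are illustrative; this is a minimal reading of the equations, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def sample_block_labels(feat, cls_head, target, gamma=0.95):
    """Eqn. (7): per-block ID/OOD labels from the target-class confidence.
    feat: [C, H, W] feature map of one image; returns [H*W] labels in {1, 0, -1}."""
    C, H, W = feat.shape
    blocks = feat.permute(1, 2, 0).reshape(H * W, C)        # one vector per position
    conf = F.softmax(cls_head(blocks), dim=-1)[:, target]   # confidence on the target class
    labels = torch.full((H * W,), -1, dtype=torch.long)     # -1: dropped (N/A)
    labels[conf >= gamma] = 1                               # confident blocks -> ID
    labels[conf < 1.0 - gamma] = 0                          # background blocks -> OOD
    return labels

def ssod_loss(cls_logits, y, block_logits, block_labels, alpha=1.5):
    """Eqn. (9): classification CE plus alpha times the ID/OOD block CE.
    block_logits: [N, 1] OOD-head outputs for the kept blocks (labels 0/1 only)."""
    cls_loss = F.cross_entropy(cls_logits, y)
    ood_loss = F.binary_cross_entropy_with_logits(
        block_logits.squeeze(-1), block_labels.float())
    return cls_loss + alpha * ood_loss

@torch.no_grad()
def id_probability(feat_map, ood_head):
    """Eqn. (10): sigmoid of the OOD head applied to the GAP feature."""
    pooled = feat_map.mean(dim=(-2, -1))                    # GAP over H and W
    return torch.sigmoid(ood_head(pooled)).squeeze(-1)
```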
## 5 Experiments
We address the following problems in this section: 1) How does SSOD perform on OOD detection benchmarks? 2) Whether SSOD is stable under different hyper-parameter settings? 3) Whether SSOD is generalizable across different model architectures?
### Experimental setup
We give a brief introduction of our employed datasets and the training parameters. The detailed information is attached in Appendix.
**Datasets.** We perform experiments on ImageNet (Russakovsky et al., 2015) and CIFAR-10 (Krizhevsky et al., 2009). For ImageNet, we follow the settings from Sun et al. (2022) and employ iNaturalist (Horn et al., 2018), SUN (Xiao et al., 2010), Places (Zhou et al., 2018), and Textures (Cimpoi et al., 2014) as the OOD images. For CIFAR-10, we select SVHN (Netzer et al., 2011), LSUN (Yu et al., 2015), iSUN (Xu et al., 2015), Places, and Textures as the OOD images. Images in CIFAR-10 and ImageNet are resized and cropped to \(224\times 224\).
**Training and evaluation.** We use ResNet-50 (He et al., 2016) as our backbone and train for a total of 300 epochs. No complicated data augmentation schemes are used except RandomResizedCrop. The learning rate starts from 1e-4 and is halved every 30 epochs. We optimize all parameters using the default gradient descent method. \(\alpha\) is set to 1.5 by default. We report the false positive rate on the OOD dataset when the true positive rate of ID images is 95%, _i.e._, FPR95. We also provide the area under the receiver operating characteristic curve (AUROC) and classification accuracy of ID images (ID ACC) for comparison. We keep the quantity of ID/OOD data consistent.
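For completeness, a small sketch of the two reported metrics is given below, assuming that higher scores mean "more ID-like"; it uses `sklearn` for AUROC and a simple percentile threshold for FPR95, which is one common way to evaluate these quantities rather than the exact evaluation code of the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR95: fraction of OOD samples accepted at the threshold that
    keeps 95% of ID samples (higher score = more ID-like)."""
    threshold = np.percentile(id_scores, 5)     # 95% of ID scores lie above it
    return float(np.mean(np.asarray(ood_scores) >= threshold))

def auroc(id_scores, ood_scores):
    """Area under the ROC curve with ID labeled 1 and OOD labeled 0."""
    labels = np.concatenate([np.ones(len(id_scores)), np.zeros(len(ood_scores))])
    scores = np.concatenate([id_scores, ood_scores])
    return roc_auc_score(labels, scores)
```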
**Selection of comparable techniques.** We choose both the classic and latest methods in dealing with OOD detection for comparisons. With regard to the classic schemes, we select the MSP (Hendrycks, 2017), MaDist (Lee et al., 2018), ODIN (Liang et al., 2018), GODIN (Hsu et al., 2020), CSI (Tack et al., 2020), and MOS (Huang and Li, 2021). Besides, we also use the Energy (Liu et al., 2020),
which is the representative of the score-based calibration method, and the KNN (Sun et al., 2022), which is one of the latest schemes, as our comparable methods during the experiments. ResNet-18 and ResNet-50 (He et al., 2016) are chosen as the backbone of CIFAR-10 and ImageNet. SSOD employs the pre-trained ResNet-50 with a classification accuracy of 76.13% released by the official PyTorch community. We note that KNN (Sun et al., 2022) has two different versions, _i.e._, w/ and w/o contrastive learning. For a fair comparison with other methods, we employ **no** contrastive learning tricks for all comparable techniques.
### Comparisons with state-of-the-arts
We report the main results and answer the first question proposed at the beginning of Section 5, _i.e._, the performance of SSOD on OOD benchmarks. After that, we analyze the failure cases of SSOD.
**OOD detection results on ImageNet.** We use natural images not included in ImageNet (Russakovsky et al., 2015) as the OOD set, such as iNaturalist (Horn et al., 2018), SUN (Xiao et al., 2010), and Places (Zhou et al., 2018). We randomly select about 10k OOD images for each dataset following Sun et al. (2022). All methods use **no** contrastive loss during the training. The detection results are shown in Table 1. A part of the results come from Sun et al. (2022).
From the results depicted in Table 1, we can notice that SSOD reduces the false positive rate (FPR95) over **13.21%** and improves the AUROC about **1.63%** on average. Specifically, on iNaturalist (Horn et al., 2018), which consists of natural landscape images, SSOD significantly reduces the FPR95 by over 15.96%, establishing the competitive state-of-the-art performance. Considering the Places (Zhou et al., 2018), which contains pictures such as the creek, field, and urban city, the objects included in these images have high similarity with that in ImageNet, and we argue that this phenomenon contributes to the lower improvements on this dataset. Consistently, the improvements of AUROC are also considerable on iNaturalist (Horn et al., 2018) and SUN (Xiao et al., 2010), evidencing the efficiency of our proposed SSOD.
**OOD detection results on CIFAR-10.** Since images appearing in CIFAR-10 (Krizhevsky et al., 2009) are smaller than those in ImageNet (Russakovsky et al., 2015), _i.e._, \(32\times 32\), we use ResNet-18 (He et al., 2016) as the backbone for all comparable methods. Just as before, we use no contrastive loss during the training. Since SSOD extracts background information from the last feature maps and images in CIFAR-10 are too small, we resize the ID and OOD data to \(224\times 224\) with RGB channels, yielding bigger feature maps. The experimental results are depicted in Table 2. The last column of Table 2 demonstrates the comparison between SSOD and the previous methods. We can notice that on OOD images such as SVHN (Netzer et al., 2011), LSUN (Yu et al., 2015), and iSUN (Xu et al., 2015), SSOD reports comparable performance to the best previous schemes with a marginal drop, _i.e._, less than **4.06%**. On large-scale datasets such as Textures (Cimpoi et al., 2014) and Places (Zhou et al.,
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{_iNaturalist_} & \multicolumn{2}{c}{_SUN_} & \multicolumn{2}{c}{_Places_} & \multirow{2}{*}{\(\uparrow\) ID ACC} \\ \cline{2-2} \cline{5-8} & \(\downarrow\) FPR95 & \(\uparrow\) AUROC & \(\downarrow\) FPR95 & \(\uparrow\) AUROC & \(\downarrow\) FPR95 & \(\uparrow\) AUROC & \\ \hline MSP & 54.99 & 87.74 & 70.83 & 80.86 & 73.99 & 79.76 & **76.13** \\ MaDist & 97.00 & 52.65 & 98.50 & 42.41 & 98.40 & 41.79 & 75.08 \\ ODIN & 47.66 & 89.66 & 60.15 & 84.59 & 67.89 & 81.78 & 75.08 \\ GODIN & 61.91 & 85.40 & 60.83 & 85.60 & 63.70 & 83.81 & 70.43 \\ Energy & 55.72 & 89.95 & 59.26 & 85.89 & 64.92 & 82.86 & 75.08 \\ KNN (w/o) & 59.08 & 86.20 & 69.53 & 80.10 & 77.09 & 74.87 & 75.08 \\ KNN (w/) & 30.18 & 94.89 & 48.99 & 88.63 & 59.15 & 84.71 & 79.10 \\ \hline SSOD (w/o) & **31.70** & **92.28** & **44.16** & **87.90** & **55.12** & **84.37** & 75.78 \\ SSOD (w/) & 20.32 & 95.28 & 35.52 & 92.06 & 41.44 & 89.38 & 75.12 \\ \hline \(\bigtriangleup\) & \(15.96^{*}\) & \(2.33^{*}\) & \(15.10^{*}\) & \(2.01^{*}\) & \(8.58^{*}\) & \(0.56^{*}\) & 0.35 \\ \hline \hline \end{tabular}
\end{table}
Table 1: OOD detection results on ImageNet. \(\downarrow\) **indicates lower is better, \(\uparrow\) means greater is better.** We highlight the best and second results using bold and underline. \(\bigtriangleup\) indicates the difference between SSOD and the best previous art, \({}^{*}\) indicates our improvements. We use (w/) and (w/o) to indicate using and without using the supervised contrastive learning.
2018), SSOD yields significant performance improvements compared to the current state-of-the-art techniques, specifically, reporting about **20.68%** performance gain (FPR95) on Places (Zhou et al., 2018). Besides, with regard to the overall OOD detection ability, we can see that SSOD reduces the false positive rate over **3.84%** on the aforementioned five datasets on average and improves the AUROC about **0.78%**, which can be established as a new competitive OOD detection scheme.
**Analysis of the failure cases.** We describe the failure cases encountered by SSOD and point out its limitations. Recall that SSOD extracts OOD information from the background of training images and employs it as a proxy for OOD characteristics, so the OOD supervision may suffer from limited diversity if the training images are not diverse. This phenomenon appears when the OOD data comes from a totally different domain than the training images, for example, when the training images are natural scenes while the test OOD data consists of synthetic color blocks or textures. To check this issue, we train SSOD on ImageNet (Russakovsky et al., 2015) while testing it on Textures (Cimpoi et al., 2014). From the results depicted in Table 3, though SSOD achieves top-ranked performance, it is worse than KNN (Sun et al., 2022), increasing the FPR95 by about 38.46%. This issue is caused by the overlap between ImageNet and Textures. Concretely, many images in Textures carry vital symbols of objects included in ImageNet (cf. Figure 4). These overlaps lead to the inefficiency of SSOD in the comparison in Table 3.
### Ablation Study
We tackle the last two problems posed at the beginning of this section, _i.e._, the stability and scalability of our proposed SSOD.
**Ablations on the hyper-parameter \(\alpha\).** Recall that \(\alpha\) controls the importance of the loss generated by the OOD Head, balancing the classifier's classification performance and OOD detection ability. We employ CIFAR-10 (Krizhevsky et al., 2009) and Places (Zhou et al., 2018) as the ID and OOD data to validate the stability of \(\alpha\). SSOD uses ResNet-18 as the backbone. From the ablations depicted in Table 4, we notice that as \(\alpha\) increases, the classifier detects OOD input better, while the ID ACC gradually decreases. We want to boost the robustness of the classifier while not hurting the model's performance. Therefore, \(\alpha\) is set to 1.5 in our experiments.
**OOD detection across different model architectures.** We use ImageNet and Places as the ID and OOD data. Considering the deployment on portable devices, we test both the conventional and lite models, such as ResNet-50 (He et al., 2016), DenseNet-121 (Huang et al., 2017), RegNet (Y-800MF) (Radosavovic et al., 2020), and MobileNet (Howard et al., 2019). Compared to Table 1, all methods shown in Table 6 achieve state-of-the-art performance, evidencing the scalability of SSOD.
\begin{table}
\begin{tabular}{l l c c c c c c c c c} \hline \hline \multirow{2}{*}{OOD} & \multirow{2}{*}{Metrics} & \multicolumn{8}{c}{Methods} \\ \cline{3-11} & & MSP & MaDist & ODIN & GODIN & Energy & CSI & KNN & SSOD & \(\triangle\) \\ \hline \multirow{3}{*}{_SVHN_} & \(\downarrow\) FPR95 & 59.66 & **9.24** & 20.93 & 15.51 & 54.41 & 37.38 & 24.53 & 13.30 & 4.06 \\ & \(\uparrow\) AUROC & 91.25 & **97.80** & 95.55 & 96.60 & 91.22 & 94.69 & 95.96 & 97.66 & 0.14 \\ \hline \multirow{2}{*}{_LSUN_} & \(\downarrow\) FPR95 & 45.21 & 67.73 & 7.26 & **4.90** & 10.19 & 5.88 & 25.29 & 7.51 & 2.61 \\ & \(\uparrow\) AUROC & 93.80 & 73.61 & 98.53 & **99.07** & 98.05 & 98.86 & 95.69 & 98.72 & 0.35 \\ \hline \multirow{2}{*}{_iSUN_} & \(\downarrow\) FPR95 & 54.57 & **6.02** & 33.17 & 34.03 & 27.52 & 10.36 & 25.55 & 9.65 & 3.63 \\ & \(\uparrow\) AUROC & 92.12 & **98.63** & 94.65 & 94.94 & 95.59 & 98.01 & 95.26 & 98.40 & 0.23 \\ \hline \multirow{2}{*}{_Textures_} & \(\downarrow\) FPR95 & 66.45 & 23.21 & 56.40 & 46.91 & 55.23 & 28.85 & 27.57 & **14.39** & \(8.82^{*}\) \\ & \(\uparrow\) AUROC & 88.50 & 92.91 & 86.21 & 89.69 & 89.37 & 94.87 & 94.71 & **97.19** & \(2.32^{*}\) \\ \hline \multirow{2}{*}{_Places_} & \(\downarrow\) FPR95 & 62.46 & 83.50 & 63.04 & 62.63 & 42.77 & 38.31 & 50.90 & **17.63** & \(20.68^{*}\) \\ & \(\uparrow\) AUROC & 88.64 & 83.50 & 86.57 & 87.31 & 91.02 & 93.04 & 89.14 & **95.36** & \(2.32^{*}\) \\ \hline \multirow{2}{*}{_Average_} & \(\downarrow\) FPR95 & 57.67 & 37.94 & 36.16 & 32.80 & 38.02 & 24.20 & 30.80 & **12.50** & \(3.84^{*}\) \\ & \(\uparrow\) AUROC & 90.90 & 89.29 & 92.30 & 93.52 & 93.05 & 95.90 & 94.15 & **97.47** & \(0.78^{*}\) \\ \cline{1-1} & \(\uparrow\) ID ACC & 94.21 & 94.21 & 94.21 & 93.96 & 94.21 & **94.38** & 94.21 & 93.91 & 0.30 \\ \hline \hline \end{tabular}
\end{table}
Table 2: OOD detection results on CIFAR-10. \(\downarrow\) **indicates lower is better, \(\uparrow\) means greater is better.** We highlight the best and second results using bold and underline. \(\triangle\) is the difference between SSOD and the best previous art, \({}^{*}\) indicates our improvements. All values are percentages.
**Imbalance issue between ID/OOD features.** During the training of the OOD head, we obtain much more background features since the objects only occupy a small part of the image. To promote training stability, we design three ways to tackle this issue, which are _Loss Weighting_ (LW), _Data Resampling_ (DR), and _Loss-Wise Balance_ (LWB). LW multiplies a balance factor on the loss generated by the background features, DR samples equivalent ID/OOD features within each image, and LWB calculates the cross entropy generated by the ID/OOD features separately and picks their mean value as the loss objective. CIFAR-10 (Krizhevsky et al., 2009) and Places (Zhou et al., 2018) are the ID/OOD data. Based on the ablations depicted in Table 5, SSOD employs LWB for data balancing.
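As a rough illustration of the Loss-Wise Balance option, the sketch below computes the binary cross-entropy separately over the ID-labeled and OOD-labeled blocks and averages the two terms, so that the much more numerous background blocks cannot dominate the gradient; the function name is illustrative and both groups are assumed non-empty.

```python
import torch
import torch.nn.functional as F

def loss_wise_balance(block_logits, block_labels):
    """LWB: mean of the ID-block loss and the OOD-block loss."""
    logits = block_logits.squeeze(-1)
    id_logits = logits[block_labels == 1]
    ood_logits = logits[block_labels == 0]
    id_loss = F.binary_cross_entropy_with_logits(
        id_logits, torch.ones_like(id_logits))
    ood_loss = F.binary_cross_entropy_with_logits(
        ood_logits, torch.zeros_like(ood_logits))
    return 0.5 * (id_loss + ood_loss)
```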
## 6 Conclusion and discussion
This paper proposes a probabilistic framework that divides the OOD detection problem into two factors, _i.e._, ID and OOD factors. This provides a comprehensive overview of existing OOD methods and highlights the critical constraint of relying on pre-trained features. To address this limitation, we introduce an end-to-end scheme called SSOD which trains the OOD detection objective jointly with the ID classification. This approach leverages OOD supervision from the background information of ID images, eliminating the need for additional costs. Extensive experiments have validated that SSOD achieves state-of-the-art performance in detecting OOD data.
To the best of our knowledge, SSOD is the first method that generates natural OOD supervision to unlock the potential of the end-to-end paradigm. However, SSOD is based on the local property from convolutions, which are not applied for transformers built with cascaded attention and FFN layers, _e.g._, ViT. Thus, discovering a more general self-supervised OOD sampler for different network architectures is a valuable and necessary direction.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & R-50 & D-121 & Reg & Mobile \\ \hline \(\downarrow\) FPR95 (\(\boldsymbol{\mathcal{X}}\)) & 73.99 & 68.75 & 71.66 & 73.27 \\ \(\downarrow\) FPR95 (\(\boldsymbol{\mathcal{Y}}\)) & 57.24 & 51.39 & 50.48 & 54.37 \\ \(\triangle\) & 16.75\({}^{*}\) & 17.36\({}^{*}\) & 21.18\({}^{*}\) & 18.90\({}^{*}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablations of different model architectures. R-50, D-121, Reg, and Mobile indicate ResNet-50, DenseNet-121, RegNet, and MobileNet, respectively. \(\boldsymbol{\mathcal{X}}\) represents the vanilla counterpart, while \(\boldsymbol{\mathcal{Y}}\) is the SSOD version.
Figure 4: Confidence of OOD images assigned by SSOD. Successful cases: (a), (b), (c), and (d). These images are randomly sampled from several OOD dataset and assigned nominal ID confidence by the SSOD. Failure cases: (e) and (f). These images carry vital symbols of objects appearing in ImageNet, _e.g._, (e) is the _braided_ and (f) is _cobwebbed_, which are similar with the _knot_ and _spider_ in ImageNet.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{_Textures_} \\ \cline{2-3} & \(\downarrow\) FPR95 & \(\uparrow\) AUROC \\ \hline MSP & 68.00 & 79.61 \\ MaDist & 55.80 & 85.01 \\ ODIN & 50.23 & 85.62 \\ GODIN & 77.85 & 73.27 \\ MOS & 60.43 & 81.23 \\ KNN & **11.56** & **97.18** \\ \hline SSOD & 50.02 & 86.11 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Failure cases on Textures. ImageNet (Russakovsky et al., 2015) is the training set and Textures (Cimpoi et al., 2014) is treated as the OOD data. No contrastive loss is used. The original copy of MOS (Huang and Li, 2021) tells no classification accuracy.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \(\alpha\) & 0.7 & 1.0 & 1.5 & 1.8 & 2.0 \\ \hline \(\downarrow\) FPR95 & 18.33 & 18.24 & 17.63 & 17.01 & 16.39 \\ \(\uparrow\) AUROC & 94.62 & 94.97 & 95.76 & 95.82 & 96.09 \\ \(\uparrow\) ID ACC & 94.00 & 93.91 & 93.91 & 92.10 & 91.17 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablations on the hyper-parameter \(\alpha\). | OOD検出により、閉塞された画像セットで訓練されたモデルは、オープンワールドの未知データを見つけることができます。ただし、これまで多くの研究手法は、この研究分野における改善に貢献してきました。しかし、2つの重要な課題はまだ残っています。まず、開発された芸術のデザインを統一的に見ることによって、それぞれ異なるデザインを持つ芸術を見ることができるという視点が欠如しており、これにより、将来の研究への洞察を得るために重要な役割を果たしています。さらに、自然なOOD指導を十分に持つことで、IDとOODデータの境界をコンパクトにすることができ、明確なOODサンプルを収集することなくそれを行うことができます。これらの問題に対処するために、既存の方法を解釈するための一般的な確率的フレームワークとOODデータなしのモデルを提案します。これは、SSODです。SSODは、IDデータの局所的なプロパティに基づいて、自然なOOD信号を効率的に利用し、OOD検 |
2304.01357 | Excavation Problems in Elamite Mathematics | In this article, we study the problems found in the Susa Mathematical Texts
No.\,24 and No.\,25 (\textbf{SMT No.\,24} and \textbf{SMT No.\,25}) which
concern excavation projects such as canals and holes. We also examine certain
Elamite structures, such as the canal systems serving Susa and a reservoir at
the ziggurat of Chogha Zanbil, in whose construction geometry might well have
played an important role. | Nasser Heydari, Kazuo Muroi | 2023-02-09T17:58:09 | http://arxiv.org/abs/2304.01357v1 | # Excavation Problems in Elamite Mathematics
###### Abstract
In this article, we study the problems found in the Susa Mathematical Texts No. 24 and No. 25 (**SMT No. 24** and **SMT No. 25**) which concern excavation projects such as canals and holes. We also examine certain Elamite structures, such as the canal systems serving Susa and a reservoir at the ziggurat of Chogha Zanbil, in whose construction geometry might well have played an important role.
## 1 Introduction
**SMT No. 24** and **SMT No. 25** are two of the texts inscribed on 26 clay tablets excavated from Susa in southwest Iran by French archaeologists in 1933. All of the Susa mathematical texts (**SMT**), along with their interpretations, were first published in 1961 (see [1]).
**SMT No. 24** and **SMT No. 25** are on a one-column clay tablet1. Following Bruins in [1], we treat these two texts separately.
Footnote 1: The reader can see the new photos of this tablet on the website of the Louvre’s collection. Please see [https://collections.louvre.fr/en/ark:/53355/cl010186434](https://collections.louvre.fr/en/ark:/53355/cl010186434) for obverse and reverse.
**SMT No. 24** contains two problems. The first, an excavation problem, which leads to a complicated quadratic equation, is found on the obverse of the tablet. The second problem, concerning the digging of a canal, is presented on the reverse.
The text of **SMT No. 25**, also on the reverse of the tablet, is another belonging to the category of excavation problems. There is only one problem treated in this text, which concerns digging a canal.
Although the entire problems are unavailable because of damage to the tablet, considerable understanding of the mathematical calculations utilized in solving these problems can be derived from a careful analysis of the text that remains.
## 2 Excavation Problems in the SMT
In Babylonian mathematics, there are many problems which concern digging a hole, a cistern, a canal, or a foundation for a building or a wall. Such problems are often referred to as _excavation problems_. Among the Babylonian mathematical texts, there are several which address various quantities concerning canals, although from the mathematical point of view they are relatively simple. By way of example of such texts, we mention **YBC 4666**, **YBC 7164**, **YBC 9874**, **VAT 7528**, and **BM 85196**.

### SMT No. 24

**Transliteration**

Obverse, Lines 1-40
(L27) 34,41,15 nigni 20:3,[13,21,3]3,45 _ta-mar_
(L28) _a-na_ 20:3,13,21,33,[45] 1,5,55,4,41,15 dah
(L29) 21:9,8,26,15 [_ta-mar_] _m[_i-_n_] \(a\) fb-si 35,37,30 fb-si
(L30) 34,41,15 _ta-ki-i[_[_ta-_ta_]-ka [_a-na_ 3]5,37,30 dah
(L31) 1,10:18,45 _ta-m[_ar_] _m[_i-na_ _a-na_ 1] 14:3,45
(L32) a-sa _sa-ar-ri_ gar _sa_ [1,10:1]8,45 _i-na-ad-di-na_
(L33) 5 gar 5 dagal an-ta 1/2 [5 _he-pe_ 2,30 _ta-mar_ 2,30 _a-na_ ]
(L34) 30 dirig dah 3 _ta-mar_ 3 dagal ki-ta [igi-12 _sa_ dagal an-ta]
(L35) ugu dagal ki-ta _i-te-ru le-q[_\(\acute{e}\)_] 10 _ta-mar_ [30 \(u\) 10 ul-gar]
(L36) 40 _a-na_ 12 _su-up-li_ _i-st_ 8 _ta-mar_ [5 dagal an-ta]
(L37) \(u\) 3 dagal ki-ta ul-gar 8 _ta-mar_ [1/2 8 _he-pe_]
(L38) 4 _ta-mar_ 4 _a-na_ 8 _su-up-li_ _i-st_-[_ma_]
(L39) 32 _ta-mar_igi-32 _pu-tu-ur_ 1,5[2,30 _ta-mar_ ]
(L40) 1,52,30 _a-na_ 24 sahar _i-\(\acute{}\)_[_i_ 45 _ta-mar_ 45 u\(\acute{}\)s]
Reverse, Lines 1-24
(L1) za-e(?) \(\cdots\) [\(\cdots\) \(\cdots\) \(\cdots\) ]
(L2) _u-sa-pi-i[_] _i_(?)-[_na_(?) _k_]_a-la-ak-[_ki-im_ \(\cdots\) \(\cdots\) ]
(L3) 2-kam _ta-a[_[_d_]-di-in_ 2_-tu_ [\(\cdots\) \(\cdots\) \(\cdots\) ]
(L4) a-sa _ka-la-ak-ki_ gal ul-[gar \(\cdots\) \(\cdots\) \(\cdots\) ]
(L5) _a-na_ tun _sa_ _a-ta-ap_ pa-[_n_]_a-_n_[_im_] da[h\(\cdots\) ]
(L6) _a-na_ tun _a_s-[_lu-u_]_t_ (?) ul-gar sahar \(\cdots\) [\(\cdots\) \(\cdots\) 1,15]
(L7) za-e 1,15 ul-gar _a-na_ 13 _sa_-[_la-\(\acute{}\)_s-ea-ra_-_t_] _i-st-ma_ 16,[15]
(L8) 10 _sa ka-la-ak-ku_ ugu _ka-la-a[_kki _i-t_]_e-ru nigin 1,40 _ta-mar_
(L9) 1,40 _i-na_ 16,15 zi 16,[13,20] _ta-mar_ igi-10 dirig _pu-tu-ir_
(L10) 6 _ta-mar_ igi-12 _su-up-li pu-tu-ur_ 5 _ta-mar_ 5 _a-na_ 6 _i-st_
(L11) 30 _ta-mar_ 30 _ta-lu-ku_ 30 _ta-lu-ka a-na_ 16,13,20 _i-si-ma_
(L12) 8,6,40 _ta-mar_ 10 [dir]ig nigni 1,40 _ta-mar_ 1,40 _a-na_ 13 _sa-la-\(\acute{}\)_s-ea-ra-ti_
(L13) _i-st-ma_ 21,[40 _ta-mar_] 21,40 _i-na_ 8,6,40 zi
(L14) 7,45 _ta-[_mar re-is-k_] _la-ki-ti_ 3[0 _ta-lu-k_]_a_
(L15) _a-na_ 13 [_sa-la-\(\acute{}\)_s-ea-ru-ti_] _i-si_ 6,30 _t_[_a-mar_ 30 _t_]_a-lu-ka_
(L16) _a-na ka-a-aia-ma_[_ni_] 2 tab-ba 1 _ta-mar_ 1 _a-na_ 6,30 dah
(L17) [7],30 _ta-mar_ 1[3 _sa-l_]_a-as-\(\acute{}\)_s-\(\acute{}\)_e-ra-ti_ _a-na_ 3-su_ _a-na_ ka-aiaia-ma_-ni_
(L18) _a-li-ik-ma_ 3[9] _la-mar_ 7,30 _a-na_ 39 dah 46,30 _ta-mar_
(L19) _mi-na_ _a-na_ 46,30 gar _sa_ 7,45 _sa_ _r_[_e-is_]_-ka _u-ki-il-lu_
(L20) [_i-n_]_a-ad-di-na_ [10] gar _re-is-ka_ _li-ki-il_ 1/2 10 dirig _he-pe_
(L21) [5 _ta_]_-mar_ [5(?) gar(?)] 5 nigni 25 _ta-mar_ 25 _a-na_ 10
(L22) [_sa_ re-is-ka _u-ki-il-lu_] dah 10,25 _ta-mar_ mi-na_ ib-si
(L23) [25 _ib-si_ 25(?) gar(?)] 5 _a-na_ 25 _is-te-en_ dah 30 _ta-mar_
(L24) [_i-na_ 25 2-kam zi 20 _t_]_a-mar_ 30 gal 20 tur
**Translation**
Obverse, Lines 1-40
(L1) The canal that I constructed in reed beds and woods, \(\cdots\)\(\cdots\).
(L2) \(\cdots\) 0;30 of the excess, the lower breadth. One third of \(\cdots\),
(L3) that I constructed, the depth. The volume 24,0(**volume-sar**) that I constructed \(\cdots\).
(L4) What are \(\cdots\) and the depth? You, 1 breadth that you do not know \(\cdots\),
(L5) you see 0;30. Put down 0;30 of the excess. 1 regular number \(\cdots\),
(L6) 0;30 of the excess. Take one third of 0;30, (and) you see 0;10. \(\cdots\).
(L7) Multiply (it by 12), and you see 2. Put down 2 as the width. 1, the upper breadth \(\cdots\),
(L8) you see 1,30. Halve 1,30, (and) you see 45. 45 is the length. \(\cdots\).
(L9) Return. Take one third of 0;30 of the excess, (and) you see 0;10. \(\cdots\).
(L10) Multiply (it by 12), and you see 2. \(\cdots\),
(L11) you see 15. 15, \(\cdots\). Put down the length. \(\cdots\).
(L12) Let it hold your head. \(\cdots\). Make the reciprocal of 45 of the length, (and 0;1,20).
(L13) Multiply (it) by \(\cdots\) of the area, and you see 9;22,30. Multiply \(\cdots\).
(L14) \(\cdots\) 24,0(?), and 0;2 is the ratio (of the width to the length).
(L15) \(\cdots\) \(\cdots\) width(?), subtract 0;2(?), and the area.
(L16) \(\cdots\) \(\cdots\), and the length. 15, the one which is to be added to the length.
(L17) \(\cdots\) \(\cdots\). Multiply 0;30 by 9;22,30, and
(L18-19) you see 4;41,15. Return. Multiply 45 of the length by 0;2 of (the ratio of) the width (to the length), and you see 1;30. Multiply (it) by 9;22,30, and you see 14;3,45.
(L20-21) 14;3,45 is the false area. Multiply 14;3,45 by 4;41,15 of the area, and you see 1,5;55,4,41,15. Let it hold your head.
(L22) Multiply 15, the one which is to be added to the length, by 0;2 of (the ratio of) the width (to the length), and you see 0;30.
(L23) Multiply 0;30 by 3, and you see 1;30. Subtract 0;30 from 1;30, (and)
(L24) you see 1. Multiply 1 by 9;22,30, and you see 9;22,30.
(L25) Since \(<1,0>\) as the length is said to you, add 1,0, the factor, to 9;22,30, (and)
(L26) you see 1,9;22,30. Halve 1,9;22,30, (and) you see 34;41,15.
(L27) Square 34;41,15, (and) you see 20,3;13,21,33,45.
(L28) Add 1,5;55,4,41,15 to 20,3;13,21,33,45, (and)
(L29) you see 21,9;8,26,15. What is the square root? 35;37,30 is the square root.
(L30) Add 34;41,15 (which was used in) your completing the square to 35;37,30, (and)
(L31-32) you see 1,10;18,45. What should I put down to 14;3,45, the false area, which will give me 1,10;18,45?
(L33) Put down 5. 5 is the upper breadth. Halve 5, (and) you see 2;30.
(L34) Add 2;30 to 0;30 of the excess, (and) you see 3. 3 is the lower breadth.
* (L35) Take 1/12 of the amount by which the upper breadth exceeded the lower breadth, (and) you see 0;10. Add 0;30 and 0;10 together, (and the result is 0;40).
* (L36) Multiply 0;40 by 12 of the (constant of the) depth, (and) you see 8.
* (L37) Add together 5 of the upper breadth and 3 of the lower breadth, (and) you see 8. Halve 8, (and)
* (L38) you see 4. Multiply 4 by 8, of the depth, and
* (L39) you see 32. Make the reciprocal of 32, (and) you see 0;1,52,30.
* (L40) Multiply 0;1,52,30 by 24,0 of the volume, (and) you see 45. 45 is the length.
Reverse, Lines 1-24
* (L1) You(?),.........
* (L2)... I excavated. In(?) the hole......
* (L3) you gave the second...... A second time......
* (L4) I added...... and the area of the large hole together,......
* (L5) I added...... to the depth of the former canal,......
* (L6) I cut off (?)... for the small. The sum of the volume and... is 1;15.
* (L7) You, multiply 1;15 of the sum by 13 of one thirteenth, and (you see) 16;15.
* (L8) Square 0;10 of the amount by which (the length of) the (large) hole exceeded (the length of) the (small) hole, (and) you see 0;1,40.
* (L9) Subtract 0;1,40 from 16;15, (and) you see 16;13,20. Make the reciprocal of 0;10 of the excess,
* (L10) (and) you see 6. Make the reciprocal of 12 of the depth, (and) you see 0;5. Multiply 0;5 by 6,
* (L11) (and) you see 0;30. 0;30 is the product. Multiply 0;30 of the product by 16;13,20, and
* (L12) you see 8;6,40. Square 0;10 of the excess, (and) you see 0:1,40. Multiply 0;1,40 by 13 of one thirteenth,
* (L13) and you see 0;21,40. Subtract 0;21,40 from 8;6,40, (and)
* (L14) you see 7;45. Let it hold your head.
* (L15) Multiply 0;30 of the product by 13 of one thirteenth, (and) you see 6;30.
* (L16) Multiply 0;30 of the product by regular (number) 2, (and) you see 1. Add 1 to 6;30, (and)
* (L17) you see 7;30. Multiply 13 of the one thirteenth by 3, by regular (number three),
* (L18) and you see 39. Add 7;30 to 39, (and) you see 46;30.
* (L19) What should I put to 46;30 which gives me 7;45 that held your head?
* (L20) Put down 0;10. Let it hold your head. Halve 0;10 of the excess, (and)
* (L21) you see 0;5. Put down 0;5(?). Square 0;5, (and) you see 0;0,25.
* (L22) Add 0;0,25 to 0;10 that held your head, (and) you see 0;10,25. What is the square root?
* (L23) 0;25 is the square root. Put down 0;25(?). On the one hand add 0;5 to 0;25, (and) you see 0;30.
* (L24) On the other hand subtract (0;5) from 0;25, (and) you see 0;20. 0;30 is the large, (and) 0;20 is the small.
### Mathematical Calculations
The two problems in this text deal with constructing canals and computing their dimensions. The general shape of a canal is a prism with trapezoidal bases as shown in Figure 1 along with its reserved water. Denote the lower breadth (width), the upper breadth (width), the length and the depth (height) of the canal by \(v\), \(u\), \(x\) and \(z\) respectively. Also denote the height of the reserved water by \(z^{\prime}\).
The area \(S\) of the trapezoidal base and the volume \(V\) of the trapezoidal canal are obtained by
\[S=\frac{1}{2}z(u+v), \tag{1}\]
and
\[V=xS=\frac{1}{2}xz(u+v). \tag{2}\]
Although formula (2) gives us the whole capacity of a canal, it is possible to compute the capacity of its reserved water \(V^{\prime}\) by using the constant of a canal given in **SMT No. 3**, line 33. In this line we read:
48 igi-gub _sa_ pa\({}_{5}\)-sig
"0;48 is the constant of a small canal"
This suggests that the ratio of the depth of the reserved water to that of the canal is \(0;48=\frac{4}{5}\). In other words, we have
\[\frac{z^{\prime}}{z}=\frac{4}{5}.\]
Thus, \(z^{\prime}=\frac{4}{5}z\) and the volume of the reserved water should be
\[V^{\prime}=\frac{4}{5}V.\]
So it follows from (2) that
\[V^{\prime}=\frac{2}{5}xz(u+v). \tag{3}\]
Figure 1: The general shape of a canal and its dimensions
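Since all numbers in these texts are sexagesimal, a short Python sketch may help the reader check such computations: a converter for the "integer;fraction" notation, followed by a check of formulas (2) and (3) on illustrative dimensions (those of the canal reconstructed below). The helper name and sample values are ours, not part of the tablet.

```python
def sex(integer_digits, fraction_digits=()):
    """Convert a sexagesimal number to a float, e.g. sex([1, 10], [18, 45])
    stands for 1,10;18,45 = 70.3125."""
    value = 0.0
    for d in integer_digits:
        value = value * 60 + d
    place = 1.0
    for d in fraction_digits:
        place /= 60
        value += d * place
    return value

# Illustrative canal: length 45, depth 8, upper/lower breadths 5 and 3.
x, z, u, v = 45, 8, 5, 3
V = 0.5 * x * z * (u + v)              # full capacity, formula (2): 24,0 = 1440
V_water = sex([0], [48]) * V           # reserved water, 0;48 of the capacity
assert abs(V_water - 0.4 * x * z * (u + v)) < 1e-9   # formula (3)
print(V, V_water)                      # 1440.0 and 1152.0
```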
#### First Problem
Due to damage to the tablet, we are unable to establish with certainty the meanings of the technical expressions and calculations found in lines 1-25. We enumerate some of these ambiguities below:
Line 18: "0;2 is the ratio \(\cdots\)"
If we denote the width by \(y\), this probably means \(\frac{y}{x}=0;2\). In this case, according to lines 22-23, the value of \(y\) is obtained by
\[y=(0;2)x=(0;2)\times 45=1;30\]
assuming that \(x=45\). But this only adds to the confusion because similar terminologies (such as the upper breadth and the lower breadth) also occur in the text.
Line 22: "15, the one which is to be added to the length"
If we assume that \(x=45\), this probably concerns the calculation
\[x+15=45+15=1,0\]
whose result is mentioned in line 25: "Since 1,0 as the length is said to you".
Lines 18-19: "\((0;30)\times(9;22,30)=4;41,15\)"
The result of this multiplication is called "the area" in lines 20-21.
Lines 20-21: "\((1;30)\times(9;22,30)=14;3,45\)"
The result of this multiplication is called "the false area" in lines 20-21.
Lines 22-24: "\(15\times(0;2)\times 3-0;30=(0;30)\times 3-0;30=1;30-0;30=1\)"
The number 0;30 is called "the one which is subtracted from the width" in line 23.
Line 24: "\(1\times(9;22,30)=9;22,30\)"
We have been unable to reach a conclusion concerning this multiplication.
Line 25: "\(1,0+9;22,30=1,9;22,30\)"
We have also been unable to reach a conclusion concerning this addition.
Figure 2: Cross-section of a trapezoidal canal
In spite of these uncertainties, the scribe of this text is able both to compute the dimensions of a canal in general and to solve a quadratic equation to find the upper breadth of the canal. We have shown the cross-section of a general trapezoidal canal in Figure 2.
Note that the calculations in lines 34-40 show that the scribe is assuming the following relations between the dimensions of the canal:
\[\begin{cases}v=\dfrac{u}{2}+0;30\\ z=12\left(0;30+\dfrac{1}{12}(u-v)\right).\end{cases} \tag{4}\]
By looking carefully at the scribe's calculations in lines 26-33, it can be seen that he has solved the following quadratic equation:
\[(14;3,45)u^{2}-(1,9;22,30)u=4;41,15. \tag{5}\]
This equation (5) has been solved by the usual method of completing the square, a standard method used by Babylonian and Elamite scribes to solve quadratic equations. This method was called _Takiltum_ in Babylonian texts (see [13, 14] for a discussion of this topic).
Now, let us use this method and solve the quadratic equation (5) as follows:
\[\begin{aligned}
&(14;3,45)u^{2}-(1,9;22,30)u=4;41,15\\
\implies\;&(14;3,45)^{2}u^{2}-(1,9;22,30)\times\big((14;3,45)u\big)=(14;3,45)\times(4;41,15)=1,5;55,4,41,15\\
\implies\;&\big((14;3,45)u\big)^{2}-2\times(34;41,15)\times\big((14;3,45)u\big)+(34;41,15)^{2}=1,5;55,4,41,15+(34;41,15)^{2}\\
\implies\;&\big((14;3,45)u-34;41,15\big)^{2}=1,5;55,4,41,15+20,3;13,21,33,45=21,9;8,26,15\\
\implies\;&(14;3,45)u-34;41,15=\sqrt{21,9;8,26,15}=\sqrt{(35;37,30)^{2}}=35;37,30\\
\implies\;&(14;3,45)u=35;37,30+34;41,15=1,10;18,45\\
\implies\;&u=\frac{1}{(14;3,45)}\times(1,10;18,45)=(0;4,16)\times(1,10;18,45)=5.
\end{aligned}\]
\[u=5. \tag{6}\]
Now, according to lines 33-36, we can find the values of \(v\) and \(z\). On the one hand, by (4) and (6), we have
\[v =\frac{u}{2}+0;30\] \[=\frac{5}{2}+0;30\] \[=2;30+0;30\] \[=3.\]
Hence,
\[v=3. \tag{7}\]
On the other hand, it follows from (4) and (7) that
\[z =12\left(0;30+\frac{1}{12}(u-v)\right)\] \[=12\left(0;30+\frac{1}{12}(5-3)\right)\] \[=12\left(0;30+\frac{2}{12}\right)\] \[=12\times(0;30+0;10)\] \[=12\times(0;40)\]
which implies that
\[z=8. \tag{8}\]
Finally, a verification process seems to begin at line 37. First, the area of the trapezoidal cross-section \(S\) is obtained by using (1), (6), (7) and (8) as follows:
\[S=\frac{z(u+v)}{2}=\frac{8(5+3)}{2}=4\times 8=32. \tag{9}\]
Then, the length \(x\) of the canal is obtained by using the known volume \(V=24,0\) (mentioned in line 3). In fact, it follows from (2) and (9) that
\[x =\frac{V}{S}\] \[=\frac{24,0}{32}\] \[=\frac{1}{32}\times(24,0)\] \[=(0;1,52,30)\times(24,0)\] \[=45.\]
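A decimal re-computation of the whole solution may serve as a check on the sexagesimal arithmetic above; the constants are the decimal equivalents of 14;3,45, 1,9;22,30 and 4;41,15, and the steps mirror lines 26-40 of the text. This is only a numerical verification sketch, not a claim about the scribe's procedure beyond what is described above.

```python
import math

A, B, C = 14.0625, 69.375, 4.6875         # 14;3,45, 1,9;22,30 and 4;41,15 in Eqn. (5)

root = math.sqrt(A * C + (B / 2) ** 2)    # completing the square: sqrt(21,9;8,26,15)
u = (root + B / 2) / A                    # upper breadth -> 5
v = u / 2 + 0.5                           # lower breadth, Eqn. (4) -> 3
z = 12 * (0.5 + (u - v) / 12)             # depth, Eqn. (4) -> 8
S = z * (u + v) / 2                       # cross-section, Eqn. (1) -> 32
x = 1440 / S                              # length from the volume 24,0 -> 45
print(u, v, z, S, x)                      # 5.0 3.0 8.0 32.0 45.0
```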
#### Second Problem
In the lines regarding the second problem, we can recognize several typical expressions found in excavation problems as follows:
(1) Reverse, line 2: "I excavated".
(2) Reverse, line 4: "the area of the large hole".
(3) Reverse, line 5: "the depth of the former canal".
(4) Reverse, line 6: "I cut off \(\cdots\) for the small".
(5) Reverse, line 8: "the amount by which (the length of) the (large) hole exceeded (the length of) the (small) hole".
(6) Reverse, line 10: "the reciprocal of a number of 12 of the depth".
Although we cannot restore the statement of this problem either, it seems that a canal has been enlarged, that is, the bottom of a canal has been deepened. Let \(x\), \(y\) and \(z\) denote the length, the width and the depth of a canal whose cross-section is rectangular (see Figure **3**).
Judging from the calculations performed in the text, the equations dealt with in this problem are:
\[\begin{cases}x-y=0;10\\ z=12(x-y)\\ z(x^{2}+y^{2})+xy(z+1)+\frac{1}{13}\Big{(}x^{2}+y^{2}\Big{)}=1;15.\end{cases} \tag{10}\]
Before discussing the geometrical significance of these equations and the nature of the hole (_kalakkum_), we analyze the solution given in lines 7-24.
According to line 7, we can multiply both sides of the third equation in (10) by 13. Since \(13\times(1;15)=16;15\), we get
\[13z(x^{2}+y^{2})+13xy(z+1)+x^{2}+y^{2}=16;15. \tag{11}\]
Figure 3: A canal with rectangular cross-section
At this point, the scribe appears to have used the algebraic identity
\[x^{2}+y^{2}=(x-y)^{2}+2xy. \tag{12}\]
According to lines 8-9, since \(x-y=0;10\), we can use (11) and (12) to write
\[13z(x^{2}+y^{2})+13xy(z+1)+x^{2}+y^{2}=16;15\] \[\implies 13z\Big{(}(x-y)^{2}+2xy\Big{)}+13xy(z+1)+(x-y)^{2}+2xy=16;15\] \[\implies 13z(x-y)^{2}+26xyz+13xyz+13xy+(x-y)^{2}+2xy=16;15\] \[\implies 13z(x-y)^{2}+(3\times 13)xyz+(13+2)xy=16;15-(x-y)^{2}\] \[\implies 13z(x-y)^{2}+(3\times 13)xyz+(13+2)xy=16;15-(0;10)^{2}\] \[\implies 13z(x-y)^{2}+(3\times 13)xyz+(13+2)xy=16;15-0;1,40\]
thus
\[13z(x-y)^{2}+(3\times 13)xyz+(13+2)xy=16;13,20. \tag{13}\]
(Following the scribe's calculations, we did not substitute \(x-y=0;10\) into the term \(13z(x-y)^{2}\) to simplify (13) further.)
In lines 10-11, the scribe calculates the reciprocal of \(z\). It follows from the first two equations in (10) that
\[z=12(x-y)\] \[\implies \frac{1}{z}=\frac{1}{12}\times\frac{1}{x-y}\] \[\implies \frac{1}{z}=(0;5)\times\frac{1}{(0;10)}\]
which gives us
\[\frac{1}{z}=0;30. \tag{14}\]
Next, according to lines 11-20, we multiply both sides of (13) by \(1/z\) and then use (14)
to find the value of \(xy\) as follows:
\[13z(x-y)^{2}+(3\times 13)xyz+(13+2)xy=16;13,20\] \[\implies \frac{1}{z}\times\Big{(}13z(x-y)^{2}+(3\times 13)xyz+(13+2)xy\Big{)}= \frac{1}{z}\times(16;13,20)\] \[\implies 13(x-y)^{2}+(3\times 13)xy+(13+2)\Big{(}\frac{1}{z}\times xy \Big{)}=\frac{1}{z}\times(16;13,20)\] \[\implies 13(x-y)^{2}+(3\times 13)xy+\Big{(}(13+2)\times(0;30)\,\Big{)}xy= (0;30)\times(16;13,20)\] \[\implies 13\times(0;10)^{2}+\Big{(}(3\times 13)+13\times(0;30)+2 \times(0;30)\,\Big{)}xy=8;6,40\] \[\implies 13\times(0;1,40)+(39+6;30+1)xy=8;6,40\] \[\implies 0;21,40+(46;30)xy=8;6,40\] \[\implies (46;30)xy=8;6,40-0;21,40\] \[\implies (46;30)xy=7;45\] \[\implies xy=\frac{1}{(46;30)}\times(7;45)\] \[\implies xy=\frac{1}{6\times(7;45)}\times(7;45)\]
which implies that
\[xy=0;10. \tag{15}\]
The last part of the text proceeds with the common Babylonian method (completing the square). According to lines 20-21, by (10) and (15), we can write:
\[\frac{x+y}{2} =\sqrt{\left(\frac{x-y}{2}\right)^{2}+xy}\] \[=\sqrt{\left(\frac{0;10}{2}\right)^{2}+0;10}\] \[=\sqrt{\left(0;5\right)^{2}+0;10}\] \[=\sqrt{0;0,25+0;10}\] \[=\sqrt{0;10,25}\] \[=\sqrt{(0;25)^{2}}\]
So
\[\frac{x+y}{2}=0;25. \tag{16}\]
Now, we are in the position to compute the values of \(x\) and \(y\). According to lines 23-24,
it follows from (10) and (16) that
\[x =\frac{x+y}{2}+\frac{x-y}{2}\] \[=0;25+0;5\] \[=0;30\]
and
\[y =\frac{x+y}{2}-\frac{x-y}{2}\] \[=0;25-0;5\] \[=0;20.\]
Therefore, the required values of the length and the width are
\[x=0;30\quad\text{and}\quad y=0;20.\]
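The values just obtained can be checked against the system (10) with a few lines of Python; the decimal equivalents of 0;30, 0;20 and 0;10 are 1/2, 1/3 and 1/6. This is merely a verification sketch of the equations stated above.

```python
x, y = 0.5, 1 / 3                  # 0;30 and 0;20
z = 12 * (x - y)                   # second equation of (10): z = 2

lhs = z * (x**2 + y**2) + x * y * (z + 1) + (x**2 + y**2) / 13
assert abs((x - y) - 1 / 6) < 1e-12        # first equation: x - y = 0;10
assert abs(lhs - 1.25) < 1e-12             # third equation: 1;15

# Recovering x and y from x - y = 0;10 and xy = 0;10, as in (15)-(16):
d, p = 1 / 6, 1 / 6
half_sum = ((d / 2) ** 2 + p) ** 0.5       # (x + y)/2 = 0;25
print(half_sum + d / 2, half_sum - d / 2)  # 0.5 and 0.333..., i.e. 0;30 and 0;20
```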
Although we can now understand the mathematical meaning of the second problem, difficult questions still remain:
What really are \(x\) and \(y\) in this problem? And
What is the relation between "a canal" (_atappum_) and "a hole" (_kalakkum_)?
To answer these questions, we may need to consider a hypothetical situation where two canals intersect as shown in Figure 4. In this situation, an old canal of depth \(z\) and width \(x\) is joined by a new canal of depth \(z\) and width \(y\) at right angles. The intersection between the old and new canals is a rectangle with sides \(x\) and \(y\) and deepened by \(1\) kus, probably to allow the deposit of silt2.
Footnote 2: In modern Japan, the same device can sometimes be seen in irrigation canals.
The top view of this intersection is pictured with related dimensions in Figure 5. The rectangle in the east-west direction is the old canal and the one in the north-south direction is the new canal. It is clear from the figure that the junction of the two canals is a rectangular box of length \(x\), width \(y\) and depth \(1+z\). According to Figure 5, there are two
Figure 4: Intersection of two canals
holes adjoining this intersection, which we have named the large hole and the small hole. Note that the large hole (_kalakki_ gal) has dimensions \(x\), \(x\), \(z\) and the small hole (_kalakki_ tur) has dimensions \(y\), \(y\), \(z\). The unknown quantities asked for in this problem are the length \(x\) of the large hole and the length \(y\) of the small hole. One might expect here that \(x\) is called, for example, us _kalakki_ gal "the length of the large hole" instead of _kalakki_ gal "(the length) of the large hole". Since \(x\) is originally the width of the old canal, the Susa scribe might have omitted us "the length" in order to avoid confusion. In fact, neither us "the length" nor sag "the width" occurs in the text at all.
Now, let us return to the third equation in (10). We interpret the first and the second terms of the left-hand side, i.e., \(z(x^{2}+y^{2})=zx^{2}+zy^{2}\) and \(xy(z+1)\), as the sum of the volumes of the large hole, the small hole and the junction of the two canals respectively. However, as to the third term \(\frac{1}{13}(x^{2}+y^{2})\), we are of the view that it does not have any geometrical meaning and is added to these volumes to make the equation more complicated. There are examples of similar equations in the so-called _series texts_, that is, the ancient drill books in mathematics. For example, the following system of equations is given in the text of **VAT 75373**:
Footnote 3: Tablet **VAT 7537** is a Babylonian mathematical text in the Berlin Museum which originally was published by Neugebauer in [18]. For more information about this tablet and its text, see [19, 20].
\[\begin{cases}xy=10,0\\ x^{2}+\dfrac{1}{11}\left(2\left(\dfrac{1}{7}\big(2y+3(x-y)\big)\right)^{2}+x^{2}\right)=16,40.\end{cases}\]
### SMT No. 25
As mentioned earlier, the problem in **SMT No. 25** concerns the dimensions of a canal.
Figure 5: Top view of two intersecting canals and their junction
**Transliteration**
Reverse, Lines 25-34
(L25) [\(\cdots\)\(\cdots\)\(\cdots\)\(me\)]-\(e\)_sa_giis-g[i\(\cdots\)] 2 _sussar_ (SARxDIS) _sa_pa\({}_{5}\) GIG(?)
(L26) [\(\cdots\)\(\cdots\)] \(\cdots\) [\(\cdots\)] u s 5-ta-am _sa_6 _s_[_a_ sar] \(\cdots\)_-ma_
(L27) i[gi-5] _pu-tu-ur_12 _ta-mar_1_2 _a-na_6 _sa_ sar _i-si-ma_
(L28) 1,12 _ta-mar_1 sar \(u\)12 _su-si mu-tu a-na_40 erin-hi-a [gar]
(L29) igi-40 erin _pu-tu-ur_1,30 _ta-mar_1,30 _a-na_1,12 _i-st-ma_
(L30) 1,48 _ta-mar mu-u Sa_ erin 1-k[am us] 1 nindan 30 dagal
(L31) [1],48 _mu-u_su-up-lu _mi-nu_igi-48 igi-guib pa\({}_{5}\)_pu-tu-\([\)ur_]
(L32) [1,15 _t_]_a-mar_1,15 _a-na_1,48 _me-e Sa_ [erin 1-kam _i-si-ma_]
(L33) [2,15 _t_]_a-mar_igi-30 dagal _pu-tu-ur_2 _ta-mar_
(L34) [2,15 _a-n_]_a_2 _i-st-ma_4,30 _ta-mar_4,30 _su-up-lu_
**Translation**
Reverse, Lines 25-34
(L25) \(\cdots\)\(\cdots\) water of reed bed \(\cdots\). 2,0 saros (of water) of a canal. \(\cdots\).
(L26) \(\cdots\)\(\cdots\) the length 5 (nindan) each that of 6 saros \(\cdots\)\(\cdots\).
(L27) Make the reciprocal of 5, (and) you see 0;12. Multiply 0;12 by 6 saros, and
(L28) you see 1,12,0. 1 saros and 12 sixties is the (volume of) water. Put (it) down for 40,0 workers.
(L29) Make the reciprocal of 40,0 of the workers, (and) you see 0;0,1,30. Multiply 0;0,1,30 by 1,12,0, and
(L30) you see 1;48. (This is the volume of) the water (per 1 nindan in length) of the first(sic) worker. (If) the length is 1 nindan, the breadth is 0;30 (nindan), (and)
(L31) the (volume of) water is 1;48, what is the depth? Make the reciprocal of 0;48, the constant of a canal, (and)
(L32) you see 1;15. Multiply 1;15 by 1;48 of the water of a worker, and
(L33) you see 2;15. Make the reciprocal of 0;30 of the breadth, (and) you see 2.
(L34) Multiply 2;15 by 2, and you see 4;30. 4;30 (kus) is the depth.
### Mathematical Calculations
The statement of the problem is almost entirely broken, and we have been unable to locate a similar problem to assist us in restoring it. However, the text calculates the depth of a canal using several conditions which may well have been provided in the statement. Judging from the calculations performed in the text, the missing conditions might have been as follows:
**Problem:** Two thousand four hundred workers (40,0 erin-hi-a) built a canal whose width is 0;30 nindan, and whose reserved water is 6 sar4 (that is, 6,0,0 volume-sar).
The workers were each assigned to dig a part of the canal 5 **nindan** in length. What is the depth of the canal?
Consider a canal of length \(x\), width \(y\), and depth \(z\) and denote the depth of its reserved water by \(z^{\prime}\) as is shown in Figure 6. As we said before, we must have
\[\frac{z^{\prime}}{z}=0;48. \tag{17}\]
Let \(V\) be the volume of the canal, \(V^{\prime}\) the volume of its reserved water, \(S\) the area of its cross-section and \(S^{\prime}\) the area of a part of the cross-section submerged in water. It is clear from the figure that
\[\begin{cases}V=xyz\\ V^{\prime}=xyz^{\prime}\\ S=yz\\ S^{\prime}=yz^{\prime}.\end{cases} \tag{18}\]
Note that it follows from (17) and (18) that
\[\frac{V^{\prime}}{V}=\frac{S^{\prime}}{S}=0;48. \tag{19}\]
Figure 6: A canal and its reserved water
Figure 7: The cross-section of a canal and the level of its reserved water
In Figure 7, we have shown the cross-section of the canal and its reserved water. The depth of the canal is \(z\) and that of its reserved water is \(z^{\prime}\).
In lines 26-30, the scribe has calculated the volume of the water per 1 **nindan** and per worker, say \(V_{1}^{\prime}\). To do so, he first obtains the volume \(V_{0}^{\prime}\) of water per 1 **nindan** for all workers as follows:
\[V_{0}^{\prime}=\frac{V^{\prime}}{5}=\frac{1}{5}\times(6\ \text{šar})=(0;12)\times(6,0,0)=1,12,0=1\ \text{šar}\ 12\ \text{šuši}.\]
(Note that 1 **šuši** is equal to \(1,0=60\).) Then, he divides this volume \(V_{0}^{\prime}=1,12,0\) by the number of all workers to find \(V_{1}^{\prime}\), which is also called "the water of the first worker" in line 30:
\[V_{1}^{\prime} =\frac{V_{0}^{\prime}}{(40,0)}\] \[=\frac{1}{(40,0)}\times(1,12,0)\] \[=(0;0,1,30)\times(1,12,0)\] \[=1;48.\]
Note that the value of \(V_{1}^{\prime}=1;48\) is the volume of the reserved water of the part of canal with length 1. That is, \(V_{1}^{\prime}\) is obtained by assuming \(x=1\) in the second equation of (18):
\[V_{1}^{\prime}=1yz^{\prime}=yz^{\prime}=S^{\prime}.\]
This means \(V_{1}^{\prime}\) is equal to the area of the cross-section of the lower part of the canal submerged in water:
\[S^{\prime}=1;48. \tag{20}\]
From (18) and (20), (according to lines 31-33) we can obtain \(S\):
\[S =\frac{S^{\prime}}{(0;48)}\] \[=\frac{1}{(0;48)}\times(1;48)\] \[=(1;15)\times(1;48)\]
which implies that
\[S=2;15. \tag{21}\]
Since by the assumption \(y=0;30\), we can (according to lines 33-34) use (18) and (21) to find the depth of the canal, i.e., \(z\), as follows:
\[z=\frac{S}{y}=\frac{1}{(0;30)}\times(2;15)=2\times(2;15)=4;30.\]
Therefore the depth of the canal is
\[z=4;30. \tag{22}\]
Note that the depth of water can be easily computed by using (17) and (22) as follows:
\[z^{\prime}=(0;48)\times(4;30)=3;36.\]
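The whole computation can again be retraced in decimal form; the numbers below are the decimal equivalents of the data of the restored problem (6 šar = 21,600 volume-sar of water, 40,0 = 2,400 workers, width 0;30 nindan, canal constant 0;48), and the script is only a verification sketch of the steps above.

```python
total_water = 6 * 3600           # 6 sar of reserved water (6,0,0 volume-sar)
workers = 2400                   # 40,0 workers
length_each = 5                  # nindan assigned to each worker
width = 0.5                      # 0;30 nindan
canal_const = 0.8                # 0;48, ratio of water depth to canal depth

S_water = total_water / length_each / workers   # 1;48, water per nindan per worker
S = S_water / canal_const                       # 2;15, full cross-section
depth = S / width                               # 4;30 (kus), the canal depth
water_depth = canal_const * depth               # 3;36 (kus), depth of the water
print(S_water, S, depth, water_depth)           # 1.8 2.25 4.5 3.6
```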
## 3 Applications of Mathematics in Ancient Elam
In this section, we consider the possible practical applications of mathematics both in construction projects and in Elamite architecture. The reader will note that the mathematical skills demonstrated in **SMT No. 24** and **SMT No. 25** may well have been of considerable assistance to those charged with the construction of Elamite infrastructure and other substantial buildings at the behest of the Elamite rulers of the time.
### Canals
As for any ancient civilization, water played a decisive role in where the people of ancient Elam decided to settle. The Elamite people were among the very first farmers in the Ancient Near East (see [1]), utilizing the nearby rivers to irrigate their lands and farms. For example, the ancient capital city of Susa was founded in a region watered by at least two rivers5 : the Karkheh and the Dez (see Figure 8).
Footnote 5: Professor Daniel Potts has made a strong case that there was only one river at Susa and that the modern Shavur river was the ancient course of the Karkheh river. See [12] for more details.
Figure 8: A satellite view of modern Susa (Google Map)
Although the course of the Karkheh in ancient Elam has been the subject of considerable scholarly debate, it has been suggested that since the Karkheh river is at a higher level than the fields between the Karkheh and the Dez, the people of Susa may have used the Karkheh's water to irrigate the land around the city or save it for future purposes (probably for drinking or making mud bricks). According to some research (see [11, 12]), there were at least four agricultural sections near ancient Susa which contained more than 40 small irrigation canals. Based on the analysis of other legal, economic and administrative clay tablets excavated from Susa, scholars have been able to determine the names and locations of many of these canals in each of these agricultural sections. These tablets date to the Sukkalmah Dynasty 1900-1500 BC, as do the **SMT**. We have shown three of these sections and the number of canals they contained in Figure 9. The location of the fourth section, described as being "on the other bank of the _Zamun_", cannot be established as it is not known which river was at that time known as the _Zamun_.
In addition to this network of irrigation canals around ancient Susa, there was also the ancient Harmushi canal system, which consisted of a group of connected canals watering the vast area of land between the Karkheh and the Dez rivers. Archaeological evidence shows that some branches of this network date to the Middle Elamite period (see [1, 21, 22]).
The main branch of this canal system was called the _Harmushi_ and connected to the Karkheh river at the point called _Pay-e Pol_ (literally, "at the foot of the bridge"), where there are still the ruins of an ancient dam which regulated the river into different branches. The length of this canal is thought to have been approximately \(50km\), in which case it could have irrigated nearly \(200km^{2}\) of the land between the Karkheh and the Dez rivers. Figure 10 shows a hypothetical course of the Harmushi canal network which has been adapted from Adams [1, Fig. 4]. The Harmushi system had several subbranches which are thought to have provided water for the livestock of nomadic tribes. Many western travelers and scholars who visited Khuzistan during the last two centuries mentioned the Harmushi canal in their reports and asserted that it was the
Figure 9: Possible distribution of irrigation canals near Susa (adapted from [11])
main source for irrigating a large area around Susa (see [14, 15, 16, 17]). Although this ancient canal system had been irrigating the Susa area for more than 3000 years, it was finally replaced by the modern canal network after the construction of a concrete dam built on the Dez river in 1963.
A testament to the historical significance of this canal and its importance to the local population is that, even though there is almost no physical trace of the original system, the name Harmushi still lives on in the surnames of many people whose ancestors lived alongside it for centuries.6
Footnote 6: The complete name of the first author used to be “Nasser Heydari Harmushi”.
In addition to canals, there are archaeological sites near modern Susa containing the remains of other water structures, one of which is located in the Elamite holy city of Chogha Zanbil7. There is a hydraulic complex consisting of canals and a water reservoir of length \(10.70m\), width \(7.25m\) and depth \(4.35m\), whose capacity is about \(332m^{3}\) (see Figure 11).
Footnote 7: The holy city of Chogha Zanbil (also known as _Dur-Untash_, or City of Untash, in Elam) located at \(38km\) to the southeast of Susa was founded by the Elamite king _Untash-Napirisha_ (1275–1240 BCE) to be the religious center of Elam. The principal element of this complex is an enormous ziggurat dedicated to the Elamite divinities Inshushinak and Napirisha. For more information, see [https://whc.unesco.org/en/list/113/](https://whc.unesco.org/en/list/113/) or [http://tchoghazanbil.com/](http://tchoghazanbil.com/).
Although as yet there is no satisfactory explanation for how the hydraulic mechanism of this sophisticated piece of engineering worked, scholars have suggested different hypotheses. Some scholars including Ghirshman believed that this reservoir was fed
Figure 10: The hypothetical course of the Harmushi canal network
by the Karkheh river via the Harmushi canal, which was specially built to supply the new holy city (see [14, 15]). Drinking water was then thought to be distributed throughout the city by a system of smaller canals.
While Ghirshman's opinion prevailed for decades, modern archaeological research has raised serious doubts about his analysis (see [16]). Mofidi-Nasrabadi in [17] has suggested that this water reservoir is actually a part of a drainage system devised by the Elamite engineers to remove floodwater from the holy city during the rainy season.
In either case, the mathematical knowledge of Elamite scribes might have been of assistance in the construction of such a complex structure. Their mathematical skills could have been applied to estimate the amount of time and number of workers needed for such a large engineering project.
## 4 Conclusion
The interpretation of the problems contained in **SMT No. 24** and **SMT No. 25**, taken together with the archaeological research confirming the existence of a substantial canal network at the time the problems were inscribed on the tablet, strongly implies that the Susa scribes were interested in the practical application of mathematics to the challenges faced by those living alongside them. These texts suggest that not only was mathematics taught in a "Susa School of Mathematics", but also that the scribes used their mathematical skills to address issues arising in the design and construction of structures, such as canals, which facilitated both the agricultural and economic development of ancient Elam.
Figure 11: A water reservoir at the holy city of Chogha Zanbil (Credit: The World Heritage of Chogha Zanbil) | この文章では、Susa 数学的テキスト No. 24 と No. 25(SMT No. 24 と SMT No. 25)に存在する問題を研究します。これらの問題には、水路や穴などの掘削プロジェクトが含まれており、また、Susa の水路システムやChogha Zanbil のピラミッドの頂上にある貯水池など、Certain Elamite structure についても考察します。これらの構造の建設における幾何学的要素は、重要な役割を果たしている可能性があります。
|
2308.06935 | Insurance pricing on price comparison websites via reinforcement
learning | The emergence of price comparison websites (PCWs) has presented insurers with
unique challenges in formulating effective pricing strategies. Operating on
PCWs requires insurers to strike a delicate balance between competitive
premiums and profitability, amidst obstacles such as low historical conversion
rates, limited visibility of competitors' actions, and a dynamic market
environment. In addition to this, the capital intensive nature of the business
means pricing below the risk levels of customers can result in solvency issues
for the insurer. To address these challenges, this paper introduces
a reinforcement learning (RL) framework that learns the optimal pricing policy by
integrating model-based and model-free methods. The model-based component is
used to train agents in an offline setting, avoiding cold-start issues, while
model-free algorithms are then employed in a contextual bandit (CB) manner to
dynamically update the pricing policy to maximise the expected revenue. This
facilitates quick adaptation to evolving market dynamics and enhances algorithm
efficiency and decision interpretability. The paper also highlights the
importance of evaluating pricing policies using an offline dataset in a
consistent fashion and demonstrates the superiority of the proposed methodology
over existing off-the-shelf RL/CB approaches. We validate our methodology using
synthetic data, generated to reflect private commercially available data within
real-world insurers, and compare against 6 other benchmark approaches. Our
hybrid agent outperforms these benchmarks in terms of sample efficiency and
cumulative reward with the exception of an agent that has access to perfect
market information which would not be available in a real-world set-up. | Tanut Treetanthiploet, Yufei Zhang, Lukasz Szpruch, Isaac Bowers-Barnard, Henrietta Ridley, James Hickey, Chris Pearce | 2023-08-14T04:44:56 | http://arxiv.org/abs/2308.06935v1 | # Insurance pricing on price comparison websites via Reinforcement Learning
###### Abstract
The emergence of price comparison websites (PCWs) has presented insurers with unique challenges in formulating effective pricing strategies. Operating on PCWs requires insurers to strike a delicate balance between competitive premiums and profitability, amidst obstacles such as low historical conversion rates, limited visibility of competitors' actions, and a dynamic market environment. In addition to this, the capital intensive nature of the business means pricing below the risk levels of customers can result in solvency issues for the insurer. To address these challenges, this paper introduces reinforcement learning (RL) framework that learns the optimal pricing policy by integrating model-based and model-free methods. The model-based component is used to train agents in an offline setting, avoiding cold-start issues, while model-free algorithms are then employed in a contextual bandit (CB) manner to dynamically update the pricing policy to maximise the expected revenue. This facilitates quick adaptation to evolving market dynamics and enhances algorithm efficiency and decision interpretability. The paper also highlights the importance of evaluating pricing policies using an offline dataset in a consistent fashion and demonstrates the superiority of the proposed methodology over existing off-the-shelf RL/CB approaches. We validate our methodology using synthetic data, generated to reflect private commercially available data within real-world insurers, and compare against 6 other benchmark approaches. Our hybrid agent outperforms these benchmarks in terms of sample efficiency and cumulative reward with the exception of an agent that has access to perfect market information which would not be available in a real-world set-up.
## 1 Introduction
The rise of price comparison websites (PCWs) has transformed the general insurance industry in the UK, granting consumers extensive access to diverse insurance options, particularly, for home and motor insurances. These platforms consolidate premiums, coverage details, and policy terms from multiple insurers, enabling users to make informed decisions. This has intensified competition among insurers, compelling them to offer competitive pricing and attractive policy features. Online insurance pricing has become an indispensable component of insurers' business strategies, shaping their market presence and overall success. Current industry standards are to leverage supervised learning models, trained offline on historic data, to feed into an online optimiser. This can lead
to a lot of maintenance as the system requires many models to be maintained and keeping pace with the dynamic market is difficult as information relating to market prices typically becomes available weeks after a quote. At this point, the pricing behaviours of the market may have drifted significantly - for example, it is not uncommon for insurers to perform multiple targeted price changes within a single week.
Objectives and challenges in online insurance pricing.Insurers operating on price comparison platforms face unique challenges in determining online insurance pricing. These challenges stem from the delicate task of finding the right balance between offering competitive premiums and maintaining profitability, while taking into account customer preferences [30, 12], market dynamics, and regulatory obligations. Setting a high premium may increase revenue but risks customer rejection, while offering a lower premium than competitors may boost conversion rates on quotes but negatively impact profitability and solvency. The fact that insurers lack direct visibility into competitors' pricing strategies and the customers' price sensitivity at point of quote further complicates the search for the optimal pricing rule.
This paper studies the online pricing problem faced by an individual insurer. This problem can be described as a sequential decision process as follows. During a specific period, multiple customers arrive sequentially at the PCW. Each customer is characterised by a feature vector that includes information like age and claims information. The insurer determines the price to submit to the platform based on customer features, considering both the insurer's constraints and objectives as well as estimates of the customer's price preference [30]. Upon receiving quotes from all insurers, the customer decides whether to accept one of the offered prices. Market price coming from competitors are unknown at the point of quote. The latter information is available at a later date, post-policy start date for the customer, to avoid anti-competitive behaviours by insurers. This information comes in the form of aggregate market prices for a quote, e.g., market quantiles, and estimates of this information are often used in place of the actual hidden values during price optimisation.
One of the core items used in this price determination is the insurer's estimate of the customer lifetime value (CLTV) given the details provided. Insurers typically estimate CLTV values at multiple time-horizons, with 1st-year CLTV (CLTV1) reflecting expected profit excluding profits that may arrive via renewals. The insurer's objective is to devise an optimal pricing strategy that maximises the accumulated CLTV from arriving customers within the specified time period, subject to constraints such as a target conversion rate. Depending on the customer's decision, the insurer either receives a positive reward equal to the estimated CLTV for the quoted price or no reward at all.
We propose an offline reinforcement learning (RL) algorithm [22, 8] to learn a pricing policy using a static dataset comprising historical quoted prices and their corresponding outcomes. Unlike traditional online RL approaches [29], the proposed offline RL approach avoids directly interacting with the PCW, and learns pricing strategies in a more cost-effective manner. Moreover, this offline approach enables a more flexible and controlled training/testing process compared to off-the-shelf methodologies. Traditional online RL algorithms rely on posing quotes on PCWs and refining pricing rules based on customer responses. Given the competitive nature of the UK Insurance market, where conversion rates are low and a large number of insurers compete for every quote, this results in a sparse feedback signal and potentially catastrophic pricing policies at the early-stages of learning for fully online algorithms. During the initial training stage, these online algorithms often yield inaccurate and unstable prices, posing significant financial and reputational risks for insurers.
However, developing offline RL algorithms for insurance pricing encounters several challenges
which we now describe. (i) Sparse reward: The low conversion rate of insurance products results in a low signal-to-noise ratio in the data. In practice, a successful insurance product may have less than 2% of quoted prices accepted by customers and generate positive rewards. This sparsity presents challenges for off-the-shelf RL algorithms, as they may struggle to train or require substantial amounts of data that may not be readily accessible. (ii) Partial observability: Insurers have limited visibility into the actions of other insurers on the PCW. (iii) Non-stationarity: Real-world data shows that the performance of calibrated pricing rules deteriorates over time, often within weeks, due to market dynamics. As a result, it is crucial to develop algorithms that can adapt quickly to changing market scenarios. (iv) Interpretability: "Black-box" model-free RL algorithms may produce pricing rules that lack interpretability and fail to meet regulatory expectations.
Although existing methodologies have been proposed in the literature to address individual aspects of these challenges for generic RL problems, they often require customisation to the specific problem setting for practical deployment. These customisations often take the form of novel reward design [6], architectures or training methodologies. Notably, there is a lack of published work that tailors these RL techniques specifically for the insurance pricing problem at hand. The main contribution in this areas focusses instead on pricing at renewals [16], formalising this setting as a constrained Markov decision problem with a coarse-coded state space. This setting is less competitive than the "New Business" PCW driven framework we address, avoiding the described sparsity issues as renewals rates are much higher than conversion rates on PCWs.
Our work.This paper proposes a novel framework for training and evaluating RL algorithms in insurance pricing using historical data. The learning problem is first transformed into a contextual bandit (CB) problem [4], assuming that customers arrive independently according to an unknown distribution, and insurer quotations and customer price responsiveness depend solely on customer provided features. Within this set-up, we ignore additional constraints provided and instead focus on providing prices to maximise profitability. In this case, we measure profitability with CLTV1. The framework allows for constraints to be incorporated into the training paradigm. These can also be applied during live predictions where conversion rates are often aligned by insurers using percentage changes applied to the final prices produced during the roll-out of new pricing rules. This simplifies the training process and allows for focusing on maximising immediate rewards based on the current customer features with longer-term expectations of customer profitability built into the reward design.
We then introduce a novel hybrid algorithm to solve the contextual bandit problem. This algorithm combines model-based and model-free RL methods as follows:
* First, a conversion model is estimated for each customer feature and market quantile price(s), capturing the customer responsiveness to different quoted prices. This model is trained using historical data over a relatively long time horizon, exploiting the price-driven nature of the markets to capture stable pricing patterns. The estimated conversion model plays a critical role in simulating customer reactions to prices quoted by RL algorithms, enabling effective algorithm training and evaluation. This alleviates issues surrounding limited exploration in historical batch-data that other approaches are adapted for [8].
* Next, the reward is reformulated as the _expected_ CLTV1 at the proposed price point, utilising the estimated conversion model. This transformation converts the sparse reward into a dense reward, enhancing the sample efficiency of the algorithm, and makes the system more interpretable [5]. In this set-up the agent is trained to act so as to maximise the expected in-year profit as estimated by the insurers traditional views of risk, margin and conversion
within the market. This is characterised by the reward which is determined by customer features used that are already live in existing pricing rules. This reduces the RL interpretability issue to a model interpretability issue, for which there are many paradigms [19, 28, 26], as all of the other components in the system design are currently used in existing pricing decisions and subject to interpretability constraints. Moreover, this design decision eliminates potential issues arising from uncertainties in longer-term CLTV estimates and makes the agent more risk-averse as it does not leverage expected reward many years later which are less certain to arrive and depend on the market at the point in the future.
* Lastly, a model-free approach is used to sequentially update the pricing policy based on customer features sampled from the training dataset. This eliminates the need to model the non-linear and non-stationary dependence of the market quantile price(s) on customer features, distinguishing it from traditional model-based pricing algorithms (see [2]). It also allows for dynamically updating the pricing rule as new data arrives, making the pricing rule adapt quickly to changes of the market dynamics, such as competitors re-calibrating their methodologies.
To the best of our knowledge, this is the first offline RL framework that addresses insurance pricing problems on competitive price comparison platforms while tackling the practical challenges of sparse reward, partial observation, and non-stationary market environments. It accounts for the realities of data availability in this setting, through the use of an offline conversion model that leverages market data available in the live setting, as well as increases transparency/interpretability through our specific reward design.
We further extend the above methodology to systematically evaluate pricing policies generated by RL algorithms using an offline testing dataset prior to their actual deployment. By applying this methodology, we demonstrate the superiority of the proposed algorithm compared to several off-the-shelf fully model-based and fully model-free RL algorithms.
The rest of this paper is organised as follows. Section 2 formulates the insurance pricing as a reinforcement learning/contextual bandit problem. Section 3 details the application of RL to this problem with numerical experiments on representative synthetic data, derived from real motor insurance quote data, provided in Section 4. Finally, Section 5 presents our conclusions and possible extensions and further work to build on our methodology.
Related works.RL has been applied to pricing problems but the literature is limited and sparse. For instance, [25] and [16] employed RL techniques to adjust prices for interdependent perishable products and insurance products at renewals, respectively. [24] utilised RL to determine dynamic prices in an electronic retail market. [1, 15, 20] applied RL to price consumer credit. It is important to note that all of these studies focused on the single-agent pricing problem within a stationary environment. In contrast, our paper addresses a more challenging pricing problem in a competitive multi-agent environment. Other related settings such as real-time bid-optimisation for online advertisements, often utilise a model-based approach [3], rely on modifying existing strategies [18] or use transfer-learning of supervised methods to optimise the price [13]. This last approach is very similar to the current standard for pricing engines within the market which is proving insufficient given the dynamic nature of the environment. As alluded to earlier, adapting RL techniques to this setting poses unique challenges, such as the low signal-to-noise ratio, partial observation of competitors' actions, and the non-stationarity of the underlying environment.
Problem formulation
This section formulates the new business insurance pricing problem as an offline learning problem.
Let \(\mathcal{X}\) be the set of all customer features, and \(\mathcal{A}\subset(0,\infty)\) be the agent's action space. In practice, the set \(\mathcal{A}\) can either be the agent's quoted price or the ratio of the quoted price with respect to a reference price (i.e. benchmark premium). In the latter instance, it is common for the action space to be discretised and to be restricted to a finite set of ratios in line with the insurer's discount/price increase appetite.
Let \(T\in\mathbb{N}\) be a given time horizon. For each pricing policy \(\phi:\{1,...,T\}\times\mathcal{X}\to\mathcal{A}\), the cumulative reward of the agent is given by
\[\mathbb{E}\left[\sum_{t=1}^{T}Y_{t}\,r(X_{t},\phi(t,X_{t}))\right], \tag{2.1}\]
where \(r(x,a)\) is the lifetime value for a customer with feature \(x\in\mathcal{X}\) when the price action \(a\in\mathcal{A}\) is offered, \(X_{t}\) is a random variable representing the feature of the customer arrived at time \(t\), \(\phi(t,X_{t})\) is the price quoted by the agent at time \(t\), and \(Y_{t}\in\{0,1\}\) is a random variable representing the customer's decision, i.e., \(1\) indicates the agent's offer is accepted by the customer and \(0\) indicates the offer is rejected. The agent aims to learn a policy \(\phi\) that maximises the cumulative reward (2.1):
\[\phi^{\star}\in\operatorname*{arg\,max}_{\phi:\{1,...,T\}\times\mathcal{X}\to\mathcal{A}}\mathbb{E}\left[\sum_{t=1}^{T}Y_{t}\,r(X_{t},\phi(t,X_{t}))\right]. \tag{2.2}\]
Note that in the above pricing problem, the customer's decision \(Y_{t}\) depends on the customer information \(X_{t}\), the agent's action/quoted price \(\phi(t,X_{t})\), and the prices offered by the other agents on the PCW. The agent does not know the exact distributions of \((X_{t},Y_{t})_{t=1}^{T}\). Instead, the agent has access to an offline dataset consisting of tuples \(\mathcal{D}=(x_{n},h_{n},a_{n},y_{n})_{n=1}^{N}\) for \(N\)-customers, where the components correspond to the customer features, quantile(s) of market prices quoted by other agents, historically quoted prices/actions taken, and customer decision, respectively. In our particular instance, only around \(2\%\) of \((y_{n})_{n=1}^{N}\) are non-zero. This results in a low signal-to-noise ratio, and creates a sparse reward in the objective function (2.1).
_Remark 2.1_.: In (2.1), we assume that the agent chooses the policy without considering constraints on the insurance portfolio. In practice, the pricing policy may also depend on the number of contracts already issued to a customer segment, target conversion rates and average premiums accepted. Let's take the example of an insurer wanting to cap the total number of accepted policies over the period \(T\). In this case, the cumulative reward can be modified into
\[\mathbb{E}\left[\sum_{t=1}^{T}Y_{t}\,r(X_{t},\phi(t,X_{t}))+g(S_{T})\right],\]
where \(S_{T}\in\mathbb{R}^{\mathcal{X}}\) represents the number of converted quotes over \(T\) customers, and \(g:\mathbb{R}^{\mathcal{X}}\to\mathbb{R}\) represents the agent's preference for the target portfolio. The methodology developed herein can be easily adapted to this setting.
## 3 Proposed Hybrid Reinforcement Learning methodology
This section presents a RL framework for solving the pricing problem using the historical dataset \(\mathcal{D}\). Since one cannot directly interact with the true environment, there are several chal
lenges that need to be addressed. These include 1) creating an interactive environment for training RL algorithms, 2) designing a pricing system that can handle the non-stationarity of the market, and 3) assessing the performance of a RL agent offline. We mitigate these challenges by integrating model-based and model-free methods into a **hybrid methodology**.
Contextual bandit problem for insurance pricing.We begin by noting the objective in Eq. (2.1) is the classic RL objective with discount factor [29] set to 1. To facilitate training an agent, we make the following assumptions on the random variables \((X_{t},Y_{t})_{t=1}^{T}\) in (2.1):
1. The customer features \((X_{t})_{t=1}^{T}\) are independently and identically distributed.
2. For each \(t\in\{1,\ldots,T\}\), the market quantile price(s) \(H_{t}\) of all agents takes values in a space \(\mathcal{H}\), and follows the (conditional) distribution \(\psi^{h}(\cdot|X_{t})\). Note, multiple quantile prices, or average values, are often made available and used by pricing rules engines. This does not impact our methodology.
3. For each \(t\in\{1,\ldots,T\}\), the customer's decision \(Y_{t}\) is a conditional Bernoulli random variable. More precisely, for each \(t\in\{1,\ldots,T\}\), let \(X_{t}\in\mathcal{X}\) be the customer features, \(A_{t}=\phi(t,X_{t})\in\mathcal{A}\) be the agent price action and \(H_{t}\in\mathcal{H}\) be the market quantile price(s) of other agents (this is unknown in live but can be used for offline training). Then \(Y_{t}=1\) with probability \(p(X_{t},H_{t},A_{t})\) and \(0\) otherwise, where \(p:\mathcal{X}\times\mathcal{H}\times\mathcal{A}\to[0,1]\) is a deterministic function independent of \(t\). The function \(p\) is often referred to as the customer **conversion model**.
These modelling assumptions imply that it suffices to find a time-independent policy \(\phi^{\star}:\mathcal{X}\to\mathcal{A}\) that maximises the one-step reward:
\[\phi^{\star}\in\operatorname*{arg\,max}_{\phi:\mathcal{X}\to\mathcal{A}}\mathbb{E}\left[Y_{t}\cdot r(X_{t},\phi(X_{t}))\right]. \tag{3.1}\]
This simplifies the optimisation problem (2.2) into a CB problem. The training dataset \(\mathcal{D}\) consists of customers features \(x\), market quantile price(s) \(h\), the pricing action taken by the insurer \(a\), along with the customer's decision \(y\).
To deploy an RL learning algorithm that solves (3.1) we require accessing customers' actions to prices quoted via the proposed training algorithm. Because training in the online setting (i.e using a deployed algorithm) would be prohibitively expensive (and risky), one needs to augment the training data set using simulations. To simulate the customer's response to a given quoted price, we fix a dataset \(\mathcal{D}_{\text{train}}=(x_{n},h_{n},a_{n},y_{n})_{n=1}^{N_{\text{train}}} \subset\mathcal{D}\), and estimate a conversion function \(p\) by minimising a suitable loss function
\[\hat{p}=\operatorname*{arg\,min}_{p_{\theta}}\frac{1}{N_{\text{train}}}\sum_{ n=1}^{N_{\text{train}}}\ell(y_{n},p_{\theta}(x_{n},h_{n},a_{n})),\]
over certain parametric models \(p_{\theta}:\mathcal{X}\times\mathcal{H}\times\mathcal{A}\to[0,1]\) such that \(a\mapsto p_{\theta}(x,h,a)\) is non-increasing for all \((x,h)\in\mathcal{X}\times\mathcal{H}\). This monotonicity constraint is applied to the estimated model \(p_{\theta}\) as the true conversion probability \(p(x,h,a)\) decreases as the quoted price \(a\) increases.
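As an illustration, a minimal sketch of such a fit is given below, assuming a logistic parametric family whose price coefficient is reparameterised to be non-positive; the feature map, optimiser and toy data are our own assumptions and not the conversion model actually used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def fit_conversion_model(X, H, A, Y):
    """Fit p_theta(x, h, a) = sigmoid(w.[x, h] + w_a * a + c) with w_a <= 0,
    so the fitted acceptance probability is non-increasing in the quoted
    price a (one simple way to impose the monotonicity constraint)."""
    Z = np.column_stack([X, H])                  # customer features and market quantiles
    d = Z.shape[1]

    def nll(params):
        w, u, c = params[:d], params[d], params[d + 1]
        w_a = -np.exp(u)                         # reparameterise so that w_a <= 0 always
        logits = np.clip(Z @ w + w_a * A + c, -30.0, 30.0)
        p = 1.0 / (1.0 + np.exp(-logits))
        eps = 1e-9
        return -np.mean(Y * np.log(p + eps) + (1 - Y) * np.log(1 - p + eps))

    res = minimize(nll, x0=np.zeros(d + 2), method="L-BFGS-B")
    w, u, c = res.x[:d], res.x[d], res.x[d + 1]

    def p_hat(x, h, a):
        z = np.concatenate([np.atleast_1d(x), np.atleast_1d(h)])
        return 1.0 / (1.0 + np.exp(-(z @ w - np.exp(u) * a + c)))

    return p_hat

# Toy smoke test: acceptance should become less likely as the quoted price grows.
N = 5000
X = rng.normal(size=(N, 3)); H = rng.normal(size=(N, 1))
A = rng.uniform(0.7, 1.3, N)
Y = (rng.uniform(size=N) < 1.0 / (1.0 + np.exp(3.0 * (A - 1.0)))).astype(float)
p_hat = fit_conversion_model(X, H, A, Y)
print(p_hat(X[0], H[0], 0.8), p_hat(X[0], H[0], 1.2))   # first value should be larger
```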
_Remark 3.1_.: Note that although the market quantile price(s) is(are) dependent solely on the customer features, our conversion model incorporates both the customer feature and the market quantile price(s) explicitly. This is because customer behaviour, given quoted prices, is expected to be stable over a long period of time. By including the market quantile price(s) in our conversion model, we can fit the model using historical data over a relatively long time horizon and encode market dynamics into the agent via the simulations.
Given the fitted conversion model \(p_{\theta}\) using the data \(\mathcal{D}_{\text{train}}=(x_{n},h_{n},a_{n},y_{n})_{n=1}^{N_{\text{train}}}\subset \mathcal{D}\) (which may span a large time-horizon), we create a data set \(\tilde{\mathcal{D}}_{\text{train}}\) based on more recent data points \((x_{n},h_{n})_{n\geq 1}\) to reflect current market conditions/customers. The construction is as follows:
1. At each time \(t\), sample \((x,h)\) randomly from the data set \(\tilde{\mathcal{D}}_{\text{train}}\).
2. Submit a price from the pricing action \(a\) generated from the agent's pricing rule \(\phi:\mathcal{X}\to\mathcal{A}\).
3. Sample \(U\sim\text{Unif}(0,1)\), and the agent observes the customer decision defined by \(y=\mathbf{1}\big{(}U\leq\hat{p}(x,h,a)\big{)}\).
4. The agent collects the reward, e.g., \(y\,r(x,a)\).
This constitutes the simulator set-up used in training and evaluating agents, more details on our specific simulator environment(s) are provided in Section 4.
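A minimal sketch of one environment step implementing items 1-4 is shown below; the `MarketSimulator` class, the toy records, and the placeholder `p_hat` and `reward` functions are illustrative assumptions standing in for the quantities defined above.

```python
import numpy as np

rng = np.random.default_rng(1)

class MarketSimulator:
    """Minimal environment following steps 1-4: sample an (x, h) record, take the
    agent's price action, draw the customer decision from the fitted conversion
    model, and return the realised reward."""

    def __init__(self, records, p_hat, reward_fn):
        self.records = records      # list of (x, h) tuples from recent data
        self.p_hat = p_hat          # estimated conversion model p_hat(x, h, a)
        self.reward_fn = reward_fn  # r(x, a), e.g. the CLTV1 of the quote

    def sample_customer(self):
        x, h = self.records[rng.integers(len(self.records))]
        return x, h

    def step(self, x, h, a):
        accept = rng.uniform() <= self.p_hat(x, h, a)   # simulated decision y
        return float(accept), accept * self.reward_fn(x, a)

# Usage with toy placeholders (not the paper's actual data or models):
records = [(np.array([30.0, 5.0]), np.array([400.0])) for _ in range(100)]
p_hat = lambda x, h, a: max(0.0, 0.2 - 0.15 * (a - 0.9))   # toy, decreasing in a
reward = lambda x, a: a * 500.0 - 420.0                     # toy premium minus cost
env = MarketSimulator(records, p_hat, reward)
x, h = env.sample_customer()
print(env.step(x, h, a=1.0))
```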
Training RL algorithms with dense reward.Due to the low conversion rate, the majority of simulated customer decisions will be zero, which makes it inefficient to train a RL/CB agent based solely on that binary outcome. It also makes interpretability more difficult as variance in the simulated outcome may be a dominating factor during training. Here we reformulate the problem (3.1) to one with dense rewards using the estimated conversion probability \(\hat{p}\). We start by observing that for any given pricing rule \(\phi:\mathcal{X}\to\mathcal{A}\), by the tower's property of the conditional expectation, the reward in (3.1) is equivalent to
\[\begin{split}\mathbb{E}\left[Y_{t}\cdot r(X_{t},\phi(X_{t})) \right]&=\mathbb{E}\left[\mathbb{E}[Y_{t}|X_{t},H_{t},\phi(X_{t}) ]r(X_{t},\phi(X_{t}))\right]\\ &=\mathbb{E}\left[p(X_{t},H_{t},\phi(X_{t}))r(X_{t},\phi(X_{t})) \right],\end{split} \tag{3.2}\]
where \(p:\mathcal{X}\times\mathcal{H}\times\mathcal{A}\to[0,1]\) is the customer's true conversion probability, depending on the customer's features \(X_{t}\), the market quantile price(s) \(H_{t}\), and the agent's quoted price/price action \(\phi(X_{t})\). Substituting the true conversion model \(p\) with the estimated model \(\hat{p}\) yields the following approximation of (3.2):
\[\mathbb{E}\left[Y_{t}\cdot r(X_{t},\phi(X_{t}))\right]\approx\mathbb{E}\left[ \hat{p}(X_{t},H_{t},\phi(X_{t}))r(X_{t},\phi(X_{t}))\right]. \tag{3.3}\]
This modified reward \(\hat{R}_{t}=\hat{p}(X_{t},H_{t},\phi(X_{t}))r(X_{t},\phi(X_{t}))\) is non-zero for the majority of customer contexts/features and resolves the sparsity issue of the original reward design.
Based on the reformulation (3.3), many existing RL algorithms can be employed to search for the optimal policy. In the following, we present the actor-critic algorithm as an example, see Alg. 1. The probabilistic actor policy \(\pi_{\theta_{m}^{a}}\) from this algorithm is used as our pricing policy \(\phi\) in live, either directly or in a greedy fashion by taking the argmax over the action space. Possible alternatives include the asynchronous advantage actor-critic (A3C) algorithm [17], the TD3 algorithm [9], the proximal policy optimisation (PPO) algorithm [27] as well as simpler algorithms such as DQN [21] and its variants.
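Since Alg. 1 is referenced but not reproduced here, the following is a minimal sketch of one possible actor-critic instantiation for this contextual-bandit setting with the dense reward of (3.3): a softmax actor over a discretised action grid and a linear critic baseline, both linear in the customer features. The action grid, feature map, learning rates and toy reward are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
actions = np.linspace(0.7, 1.3, 13)             # coarse price-ratio grid (illustrative)

def features(x):
    return np.concatenate([[1.0], np.atleast_1d(x)])   # bias + raw customer features

d = 3                                           # bias + two toy customer features
theta = np.zeros((len(actions), d))             # actor: one logit vector per action
w = np.zeros(d)                                 # critic: linear baseline
lr_actor, lr_critic = 0.01, 0.1

def dense_reward(x, h, a):
    """Toy stand-in for p_hat(x, h, a) * r(x, a), i.e. the dense reward of (3.3)."""
    p = np.clip(0.2 - 0.3 * (a - 0.9), 0.0, 1.0)
    return p * (a * 500.0 - 420.0)

for step in range(20000):
    x = rng.normal(size=2)                      # sampled customer features
    phi = features(x)
    logits = theta @ phi
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    k = rng.choice(len(actions), p=probs)       # sample a price action from the actor
    r = dense_reward(x, None, actions[k])
    adv = r - w @ phi                           # advantage against the critic baseline
    w += lr_critic * adv * phi                  # critic update (squared-error gradient step)
    grad = -probs[:, None] * phi                # gradient of log softmax w.r.t. theta
    grad[k] += phi
    theta += lr_actor * adv * grad              # policy-gradient (actor) update

print("greedy price ratio for an average customer:",
      actions[np.argmax(theta @ features(np.zeros(2)))])
```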
Summarising the hybrid methodology.The approach described in this Section combines the benefits of both model-based and model-free methods. It utilises a model-based approach to transform the sparse reward into a dense reward, thereby improving the sample efficiency of fully model-free algorithms. At the same time, it employs a model-free approach to update the policy sequentially based on the customer features and simulated dense reward. This approach differs from fully model-based algorithms, which require modelling the dependence of the market quantile price(s)
on the customer features, i.e., \(\psi^{h}\). This model-based approach has several drawbacks. Since the market quantile price(s) typically depend nonlinearly on the customer features, it is challenging to choose an appropriate architecture to accurately approximate it. Additionally, historical data indicates that modelling this relationship between customer features and aggregated prices can degrade in performance quickly, and are often rebuilt or recalibrated as regularly as monthly where possible. In contrast, the proposed hybrid method only requires quoted quantile price(s) \((h_{t})_{t\geq 0}\), which can be dynamically updated as new data arrives with little additional complexity.
Evaluating agent performance in a consistent manner.After a pricing policy has been trained, its performance can be evaluated by constructing a new market simulator using a test dataset. However, when comparing pricing policies generated by different RL/CB algorithms using historical data, it is crucial to ensure consistency in customer behaviour. Specifically, it is important to ensure that if a customer accepts a quoted price, they will also accept any other quoted price that is lower. To achieve this, let \(\phi_{1}\) and \(\phi_{2}\) be two different pricing rules, and let \(\hat{p}:\mathcal{X}\times\mathcal{H}\times\mathcal{A}\rightarrow[0,1]\) be an estimated conversion model. Then, for a given customer feature \(x\) and market quantile price \(h\), the customer's decision under each pricing rule can be simulated by first sampling \(U\sim\text{Unif}(0,1)\), and defining the decision as \(y_{i}=\mathbf{1}\big{(}U\leq\hat{p}(x,h,\phi_{i}(x))\big{)}\), \(i=1,2\). Note that the two decisions are determined by the same random sample \(U\), enabling a fair comparison among different algorithms being evaluated. In reality, customers may react randomly to quoted prices - this results in large variances and require lots of additional computational cost to determine if an algorithm is performing better than an alternative.
## 4 Numerical experiments
To demonstrate our methodology, we generate synthetic dataset that is reflective of a private commercial PCW data made available to the authors1. Using this synthetic data, we compare a series of agents vs an actor-critic trained offline using our hybrid methodology. We now discuss the construction of this data, along with the underlying assumptions about the real environment that informs it, before describing the set-up and results of our numerical experiments.
Understanding how to simulate the market.Analysing real-world PCW data, we identified the key relationship determining whether a customer converted was between the final quoted premium (\(P\)), the average top 5 price they received and the average prices of the insurers ranked 6-10 on the PCW. The latter two quantities are the market quantile prices discussed throughout and we denote them by Avg. Top5 and Avg. Top6-10 respectively. The final quoted premium was normalised using this market quantile information to produce a normalised price \(z\) as follows:
\[z=\frac{P-\text{Avg. Top5}}{\text{Avg. Top6-10}-\text{Avg. Top5}}. \tag{4.1}\]
This normalised price measures how competitive our quoted premium is relative to our competitors in the market. From analysing the real-world data, the conversion probability given this normalised price, \(p(z)\), was found to follow this distribution:
\[p(z)=\begin{cases}0.2&:z<-8,\\ -0.2(z/8+1)^{2}+0.2&:-8\leq z<0,\\ 0&:z\geq 0.\end{cases} \tag{4.2}\]
The plot of this demand, \(p(z)\), is shown in Figure 1 (left). We conclude this portion by noting that the core assumption derived from analysing the PCW data is that our expected true conversion model depends primarily on the normalised price \(z\), independent of other customer features. These features provide higher order corrections to the model but are not the primary drivers given the actual market quantile prices.
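For reference, the demand curve (4.2) can be transcribed directly; the sketch below is a vectorised implementation of that formula.

```python
import numpy as np

def p_true(z):
    """Exact conversion probability of Eq. (4.2) as a function of the
    normalised price z (vectorised over numpy arrays)."""
    z = np.asarray(z, dtype=float)
    return np.where(z < -8, 0.2,
                    np.where(z < 0, -0.2 * (z / 8 + 1) ** 2 + 0.2, 0.0))

print(p_true([-10, -8, -4, 0, 2]))   # [0.2, 0.2, 0.15, 0.0, 0.0]
```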
Construction of synthetic dataset.A synthetic dataset of 35000 customers on a PCW, with 16 customer features (spanning age to credit risk data) along with simulated Avg. Top5 prices for each of these customers was provided for direct modelling. The Avg. Top5 prices provided are derived from these 16 customer features. The distributions of these features and market price were created to reflect realistic PCW data.
This dataset was augmented to generate a benchmark premium (\(P_{0}\)), Avg. Top6-10 and "burn" cost (\(b\)). This augmentation was done by modifying the Avg. Top5 price provided and hence depends indirectly on the customer features, \(x\). This augmentation was performed via
Figure 1: Exact and estimated conversion models. Left: The exact conversion probability as a function of the normalised price; Right: Comparison between the empirical conversion model (blue curve) and the estimated conversion model (orange curve) for different normalised prices.
the use of scaling factors randomly sampled from normal distributions. The scaling factors were chosen to reflect the magnitude of the conversion rate of the original dataset and the shape of the demand curve.
Comparing to real data, we found the following calculation reflected these real-world constraints:
\[b(x)=\text{Avg. Top5}\times N(0.8,0.2),\qquad P_{0}(x)=\text{Avg. Top5}\times N(1,0.1),\qquad\text{Avg. Top6-10}=\text{Avg. Top5}\times|N(1,0.3)|,\]
where \(N(\mu,\sigma)\) denotes a normal random variable with mean \(\mu\) and standard deviation \(\sigma\).
The synthetic final quoted premium \(P\) will then be generated as follows:
\[P=\text{Avg. Top}5\times N(1,0.3). \tag{4.3}\]
From these parameters, we can define the observed action taken by the insurer to be the ratio \(\frac{P}{P_{0}}\). Abusing notation slightly, we denote this ratio as the price action \(a\); the reward function \(r\) in (2.2), for a customer with features \(x\), is then given by:
\[r(x,a)=a\times P_{0}(x)-b(x)=P(x,a)-b(x). \tag{4.4}\]
This is effectively the profit in year one, i.e., the CLTV1 for this customer.
We split the initial 35000 customers into two sets - 28000 in train and 7000 in test. We then generate \(5\times 10^{6}\) samples (with replacement) from the training data set. For each sample, the premium \(P\) is generated and the customer decision is determined via the conversion probability \(p(z)\). This leads to a complete training data set \(\mathcal{D}_{\text{train}}\) with \(5\times 10^{6}\) data points. Using this dataset we construct a training simulator.
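A sketch of this augmentation and sampling recipe is given below; the uniform stand-in for the provided Avg. Top5 prices and the reduced sample count are our own assumptions, while the scalings and the conversion draw follow the expressions above.

```python
import numpy as np

rng = np.random.default_rng(3)

def p_true(z):
    """Exact demand curve of Eq. (4.2)."""
    z = np.asarray(z, dtype=float)
    return np.where(z < -8, 0.2, np.where(z < 0, -0.2 * (z / 8 + 1) ** 2 + 0.2, 0.0))

def augment_and_sample(avg_top5, n_samples):
    """Follow the augmentation recipe in the text: scale Avg. Top5 to obtain the
    burn cost b, the benchmark premium P0 and Avg. Top6-10, draw a quoted premium
    P as in Eq. (4.3), and simulate the decision from the normalised price."""
    idx = rng.integers(len(avg_top5), size=n_samples)     # sample customers with replacement
    top5 = avg_top5[idx]
    b = top5 * rng.normal(0.8, 0.2, n_samples)            # "burn" cost
    P0 = top5 * rng.normal(1.0, 0.1, n_samples)           # benchmark premium
    top6_10 = top5 * np.abs(rng.normal(1.0, 0.3, n_samples))
    P = top5 * rng.normal(1.0, 0.3, n_samples)            # quoted premium, Eq. (4.3)
    z = (P - top5) / (top6_10 - top5)                     # normalised price, Eq. (4.1)
    y = (rng.uniform(size=n_samples) < p_true(z)).astype(int)
    return dict(top5=top5, b=b, P0=P0, top6_10=top6_10, P=P, z=z, y=y)

# Toy stand-in for the provided Avg. Top5 prices of the 28000 training customers;
# the paper uses 5e6 samples, a smaller number is drawn here to keep the example light.
avg_top5_train = rng.uniform(300.0, 900.0, 28000)
data = augment_and_sample(avg_top5_train, n_samples=500_000)
print("empirical conversion rate:", data["y"].mean())
```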
Construction of the training environment.We now construct the training environment based on the training data set, without using \(p(z)\) nor the testing data set. This represents how one can use their own conversion data to create a market simulator directly from the data.
We start by estimating the demand curve from the data. We split the training data \(\mathcal{D}_{\text{train}}\) (of size \(5\times 10^{6}\)) into bins according to the size of the \(z\) rounded to the second decimal place.
Let \(p_{R}\) be the empirical conversion probability within the \(R\)-th bin. The estimated conversion model \(\hat{p}\) is then defined by
\[\tilde{p}_{R}=\frac{1}{100}\sum_{n=1}^{100}p_{R+0.01n-0.5},\qquad\hat{p}_{R}=\min(\hat{p}_{R-0.01},\tilde{p}_{R}), \tag{4.5}\]
where we first smooth the empirical conversion probability using a moving average, and then take the minimum between consecutive bins to ensure the estimated model decreases with respect to the normalised price. For simplicity, we extrapolate the estimated model outside the support of training data, by setting \(\hat{p}_{R}=\hat{p}_{-6}\) for \(R<-6\) and \(\hat{p}_{R}=\hat{p}_{6}\) for \(R>6\). The function \(\hat{p}_{R}\) will be used as an estimated conversion probability to generate customer decisions in the training environment. Figure 1(right) shows \(p_{R}\) (in blue) and \(\hat{p}_{R}\) (in orange) for \(R\in[-6,6]\). In practice, for a given \(z\) this is mapped to a bucket \(R\) and \(\hat{p}_{R}\) represents the expected conversion for that normalised price. This will be denoted \(\hat{p}(z)\) from the coming paragraphs to emphasise that it is a mapping from \(P\) (and hence \(z\)) to an estimated conversion with the bins serving as a useful intermediate grouping for the model.
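A sketch of this binned construction is shown below; the bin grid, smoothing window and monotone running minimum follow the description above, while the edge handling and the toy data-generating process are our own assumptions.

```python
import numpy as np

def fit_binned_conversion(z, y, lo=-6.0, hi=6.0, width=0.01, window=100):
    """Empirical conversion per 0.01-wide bin of the normalised price, smoothed
    by a centred moving average and made non-increasing by a running minimum,
    mirroring Eq. (4.5). Returns the bin centres and the estimates."""
    edges = np.arange(lo, hi + width, width)
    idx = np.clip(np.digitize(z, edges) - 1, 0, len(edges) - 2)
    counts = np.bincount(idx, minlength=len(edges) - 1).astype(float)
    hits = np.bincount(idx, weights=y, minlength=len(edges) - 1)
    p_emp = np.divide(hits, counts, out=np.zeros_like(hits), where=counts > 0)
    padded = np.pad(p_emp, (window // 2, window - window // 2 - 1), mode="edge")
    p_smooth = np.convolve(padded, np.ones(window) / window, mode="valid")
    p_hat = np.minimum.accumulate(p_smooth)              # enforce a non-increasing model
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, p_hat

def p_hat_of_z(z, centres, p_hat):
    """Map a normalised price to its bucket; prices outside [-6, 6] use the edge buckets."""
    j = np.clip(np.searchsorted(centres, z), 0, len(centres) - 1)
    return p_hat[j]

# Smoke test with the exact curve of Eq. (4.2) as the data-generating process.
rng = np.random.default_rng(4)
z = rng.uniform(-6, 6, 10**6)
p = np.where(z < 0, -0.2 * (z / 8 + 1) ** 2 + 0.2, 0.0)
y = (rng.uniform(size=z.size) < p).astype(float)
centres, p_hat = fit_binned_conversion(z, y)
print(p_hat_of_z(-4.0, centres, p_hat))                  # close to p(-4) = 0.15
```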
Training RL/CB Agents.Using the estimated conversion, \(\hat{p}(z)\), we train two types of agent. Both agents are trained in the same manner, using the actor-critic algorithm presented in Algorithm 1, but they differ in their reward function. At each iteration of the training process a customer - along with their features \(x\), Avg. Top5, Avg. Top6-10, \(P_{0}\) and \(b\) - is sampled. The agents take price scaling actions. These actions are in the range \([0.7,1.3]\) and are discretised into \(600\) actions with a separation of \(0.001\) between each action, i.e., \(a\in\{0.7,0.7001,...,1.2999,1.3\}\). This range of actions reflects the range of price scalings an insurer would typically consider, to scale up \(P_{0}\) to the final quoted premium \(P\).
For the **standard RL agent**, the reward function for this customer at iteration \(t\geq 1\), is given by:
\[r(x_{t},a_{t})=Y_{t}\times(a_{t}\times P_{0}(x_{t})-b(x_{t})), \tag{4.6}\]
where \(Y_{t}\) is the customer decision simulated from \(\hat{p}(z_{t})\) and \(x_{t}\), \(a_{t}\) and \(z_{t}\) are the customer features, agent action and the normalised price at time \(t\).
The second **hybrid RL agent** incorporates the estimated conversion model in the reward. In this case, it updates the policy using the following reward:
\[r(x_{t},a_{t})=\hat{p}(z_{t})\times(a_{t}\times P_{0}(x_{t})-b(x_{t})). \tag{4.7}\]
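The two reward signals differ only in whether the simulated decision or the estimated conversion probability multiplies the margin; a minimal sketch, with a toy demand curve standing in for \(\hat{p}\), is given below.

```python
import numpy as np

rng = np.random.default_rng(5)

def sparse_reward(a, P0, b, z, p_hat):
    """Eq. (4.6): a realised decision Y ~ Bernoulli(p_hat(z)) gates the margin."""
    y = float(rng.uniform() < p_hat(z))
    return y * (a * P0 - b)

def dense_reward(a, P0, b, z, p_hat):
    """Eq. (4.7): the estimated conversion probability replaces the noisy decision."""
    return p_hat(z) * (a * P0 - b)

# With a toy estimated demand curve, the dense reward is the expectation of the
# sparse one, which is what makes training markedly more sample-efficient.
p_hat = lambda z: 0.2 if z < -8 else (max(0.0, -0.2 * (z / 8 + 1) ** 2 + 0.2) if z < 0 else 0.0)
samples = [sparse_reward(1.05, 500.0, 450.0, -3.0, p_hat) for _ in range(20000)]
print(np.mean(samples), dense_reward(1.05, 500.0, 450.0, -3.0, p_hat))   # close to each other
```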
Benchmark pricing strategies.In addition to these agents, we also consider several benchmark pricing strategies to compare the RL/CB agents against. The actor-critic agents cannot obtain the Avg. Top5 and the Avg. Top6-10 values from the data when providing prices and instead indirectly infer them through the reward. As a benchmark, we compare them to model-based agents that have access to these quantities with some small systematic errors. Specifically, we consider the following three scenarios:
**Unbiased Estimation -** the agent estimates the market by:
Unbiased estimated average top \(5=\text{Avg. Top5}\times N(1,0.3)\)
Unbiased estimated average top \(6\text{-}10=\text{Avg. Top6-}10\times N(1,0.3)\)
**Over Estimation -** the agent estimates the market by:
Over-estimated average top \(5=\text{Avg. Top5}\times N(1.2,0.3)\)
Over-estimated average top \(6\text{-}10=\text{Avg. Top6-}10\times N(1.2,0.3)\)
**Under Estimation -** the agent estimates the market by:
Under-estimated average top \(5=\text{Avg. Top5}\times N(0.8,0.3)\)
Under-estimated average top \(6\text{-}10=\text{Avg. Top6-}10\times N(0.8,0.3)\)
These scenarios define an additional 3 model-based agents to compare against. In all instances, the agents compute an estimate of the normalised price \(\tilde{z}\) in accordance with Eq. (4.1) but using their estimated market Avg. Top5 and Avg. Top6-10 values in place of the actual values, that is:
\[\tilde{z}=\frac{P-\text{Estimated Avg. Top5}}{\text{Estimated Avg. Top6-}10-\text{Estimated Avg. Top5}}.\]
Using this estimated normalised price and estimated conversion \(\hat{p}\), the final price offered at time \(t\) is yielded via maximising the map:
\[P\mapsto\hat{p}(\tilde{z})\times(P-b(x_{t})).\]
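A sketch of such a model-based benchmark agent is given below; the premium search grid and the toy inputs are illustrative assumptions, while the biased noisy market estimates and the maximised objective follow the three scenarios above.

```python
import numpy as np

rng = np.random.default_rng(6)

def p_hat(z):
    """Stand-in for the estimated demand curve (here the exact curve of Eq. 4.2)."""
    z = np.asarray(z, dtype=float)
    return np.where(z < -8, 0.2, np.where(z < 0, -0.2 * (z / 8 + 1) ** 2 + 0.2, 0.0))

def model_based_price(top5, top6_10, b, bias=1.0, noise=0.3, n_grid=601):
    """Benchmark agent: estimate the market quantiles with a (possibly biased)
    noisy guess, then pick the premium maximising p_hat(z_tilde) * (P - b)."""
    est_top5 = top5 * rng.normal(bias, noise)
    est_top6_10 = top6_10 * rng.normal(bias, noise)
    P_grid = np.linspace(0.7 * top5, 1.3 * top5, n_grid)    # illustrative search range
    z_tilde = (P_grid - est_top5) / (est_top6_10 - est_top5)
    expected = p_hat(z_tilde) * (P_grid - b)
    return P_grid[np.argmax(expected)]

# bias = 1.0 / 1.2 / 0.8 reproduce the unbiased / over- / under-estimation scenarios.
print(model_based_price(top5=600.0, top6_10=700.0, b=480.0, bias=1.0))
```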
Finally, we consider 2 extreme pricing rules. We consider a random agent that quotes a price generated by randomly selecting from the action set. This random agent serves as a worst-case benchmark to evaluate the performance of other pricing policies. We will also consider a perfect information agent where at each time \(t\), this agent maximises the expected reward:
\[P\mapsto p(z)\times(P-b(x_{t})),\]
using the true conversion model \(p\) defined in (4.2), and the actual Avg. Top5 and Avg. Top6-10 market prices. Although this perfect information agent cannot be implemented in practice (since the exact customer conversion model is unknown, as are the market quantile prices in live), it serves as the best-case benchmark for the proposed hybrid RL agent. In conclusion, we have 7 agents to evaluate: 6 benchmark agents along with our hybrid agent.
Performance evaluation of RL agents and results.Let \(\phi_{i}\), \(i=1,2,\ldots,7\), denote the pricing rules generated by the standard RL agent, the hybrid RL agent, the unbiased model-based agent, the over-estimated model-based agent, the under-estimated model-based agent, the random agent and the perfect-information agent, respectively.
The following Algorithm 2 summarises the procedure to evaluate the performance of these pricing rules using a test data set \(\mathcal{D}_{\text{test}}\) and the true conversion model \(p(z)\) in (4.2).
```
Input: Pricing rules \((\phi_{i})_{i=1}^{7}\), true conversion model \(p\) for the market, testing data set \(\mathcal{D}_{\text{test}}\) of size \(N_{\text{test}}\).
1  for \(t=1,2,\ldots,N_{\text{test}}\) do
2      Sample an entry from \(\mathcal{D}_{\text{test}}\), containing the customer features \(x_{t}\), the Avg. Top5 and Avg. Top6-10 prices, and the cost \(b\) given the customer features.
3      All agents quote their prices, denoted by \((P_{i})_{i=1}^{7}\). The perfect information agent uses the entire entry and the exact conversion model; the other agents only use the customer features.
4      Sample \(U\sim U[0,1]\) and store this value.
5      for \(i=1,\ldots,7\) do
6          Compute the normalised price \(z_{i}\) from \(P_{i}\) along with the associated conversion probability \(p_{i}=p(z_{i})\), where \(p\) is defined in (4.2).
7          Record the expected reward, \(p_{i}\times(P_{i}-b(x_{t}))\).
8          Record the realised reward, \(\mathbf{1}(U<p_{i})\times(P_{i}-b(x_{t}))\).
9      end for
10 end for
```
**Algorithm 2** Performance evaluation
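A compact sketch of this evaluation loop is given below, with the shared uniform draw that makes the comparison consistent across policies (common random numbers); the toy test records and the two toy pricing rules are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def p_true(z):
    z = np.asarray(z, dtype=float)
    return np.where(z < -8, 0.2, np.where(z < 0, -0.2 * (z / 8 + 1) ** 2 + 0.2, 0.0))

def evaluate(policies, test_records):
    """Algorithm 2 in code: one shared uniform draw per customer ensures a cheaper
    quote is never rejected while a dearer one is accepted for the same customer."""
    expected = np.zeros(len(policies))
    realised = np.zeros(len(policies))
    for rec in test_records:
        u = rng.uniform()                                  # shared across all agents
        for i, policy in enumerate(policies):
            P = policy(rec)
            z = (P - rec["top5"]) / (rec["top6_10"] - rec["top5"])
            p = float(p_true(z))
            expected[i] += p * (P - rec["b"])
            realised[i] += (u < p) * (P - rec["b"])
    return expected, realised

# Toy test set and two toy pricing rules (aggressive vs. conservative).
records = [dict(top5=t5, top6_10=t5 * 1.15, b=t5 * 0.8)
           for t5 in rng.uniform(300, 900, 1000)]
cheap = lambda rec: 0.95 * rec["top5"]
dear = lambda rec: 1.10 * rec["top5"]
print(evaluate([cheap, dear], records))
```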
Figure 2 compares the cumulative expected reward and the cumulative realised reward for all agents. Although the proposed hybrid RL (brown line) underperforms the (unrealistic) perfect information agent, it clearly outperforms the remaining 5 agents, including the standard model-free RL agent (purple line). Unsurprisingly, the random agent makes losses very quickly. This highlights the need to avoid highly exploratory behaviour that frequently occurs at the early stages of training an online RL agent. Furthermore, we observe that the rate at which the hybrid agent accumulates reward exceeds that of all other agents, again with the exception of the perfect information agent. This demonstrates the improved sample efficiency from the use of our hybrid approach compared to more traditional pricing strategies/benchmarks.
## 5 Conclusions and Future Work
In this paper, we formulated the problem of pricing insurance at new business on a price comparison site as a RL problem. We addressed the cold-start problem, interpretability, reward sparseness and partial observability of a non-stationary market by integrating model-based and model-free RL methods. The model-based component creates a dense interpretable reward and allows for the creation of an effective market simulator. This simulator allows us to train model-free RL/CB methods offline, with these methods learning the market dynamics implicitly from the simulator by-passing both the cold-start and observability issues when run in live.
We evaluated this approach on representative synthetic data derived from real-world PCW data. Both the expected and realised CLTV1 for the hybrid agent performed better than the benchmarks, with the exception of the (unrealistic) agent which has perfect information on the market environment. Our approach is readily extendible to scenarios where the constraints on the agent's behaviour, such as the rate of conversion, apply. This can be achieved via an adjustment in the reward function. We expect future work in this area to examine (i) the potential use of 'Multi-objective RL' techniques where an agent has multiple competing objectives [14, 31] and constraints. (ii) Incorporate uncertainty estimates into the agent and its exploration policy [10, 11]. (iii) Improvements to the market simulator with direct incorporation of time-dynamics, usage of agent based simulation [32, 7] as well as an associated empirical study on the effectiveness of various RL/CB algorithms with this simulator.
## 6 Authors' Contributions
All authors conceptualized the study. TT, YZ and LS proposed the methodology, and TT conducted the numerical experiments. TT, YZ and LS wrote the manuscript, and all authors critically revised the manuscript.
| 価格比較サイト (PCW) の出現により、保険会社には、効果的な価格戦略を策定するためのユニークな課題が生じました。PCW で運営するには、競争価格と収益性をバランスを取る必要がある、低 historique 変換率、競合他社の行動の可視性などの障害がある、そして、市場環境がダイナミックであるという複雑な問題に直面しています。さらに、事業の資本 intensive な性質により、顧客のリスクレベルを下回る価格設定は、保険会社の solvency への影響を及ぼす可能性があります。これらの課題に対処するため、本論文では、モデルベースとモデルなしの両方の方法を統合して最適な価格設定ポリシーを学習するための強化学習 (RL) フレームワークを紹介します。モデルベースのコンポーネントは、オフライン設定で代理人を訓練し、冷スタートの問題を回避する一方で、モデルなしのアルゴリズムは、状況の bandit (CB |
2305.09422 | Rotating quantum droplets confined in a harmonic potential | We investigate the rotational properties of a two-component, two-dimensional
self-bound quantum droplet, which is confined in a harmonic potential and
compare them with the well-known problem of a single-component atomic gas with
contact interactions. For a fixed value of the trap frequency, choosing some
representative values of the atom number, we determine the lowest-energy state,
as the angular momentum increases. For a sufficiently small number of atoms,
the angular momentum is carried via center-of-mass excitation. For larger
values, when the angular momentum is sufficiently small, we observe vortex
excitation instead. Depending on the actual atom number, one or more vortices
enter the droplet. Beyond some critical value of the angular momentum, however,
the droplet does not accommodate more vortices and the additional angular
momentum is carried via center-of-mass excitation in a "mixed" state. Finally,
the excitation spectrum is also briefly discussed. | S. Nikolaou, G. M. Kavoulakis, M. Ogren | 2023-05-16T13:27:01 | http://arxiv.org/abs/2305.09422v2 | # Novel superfluid states in rotating quantum droplets confined in a harmonic potential
###### Abstract
We investigate the rotational properties of a two-dimensional self-bound quantum droplet, which is confined in a harmonic potential and compare them with the well-known problem of a single-component atomic gas, with contact interactions. For a fixed value of the trap frequency, choosing some representative values of the atom number, we determine the lowest-energy state, as the angular momentum increases. For a sufficiently small number of atoms, the angular momentum is carried via center-of-mass excitation. For larger values, when the angular momentum is sufficiently small, we observe vortex excitation, instead. Depending on the actual atom number, one, or more vortices enter the droplet. Beyond some critical value of the angular momentum, however, the droplet does not accommodate more vortices and the additional angular momentum is carried via center-of-mass excitation in a novel, "mixed" state. Finally, the excitation spectrum is also briefly discussed.
pacs: 03.75.Lm, 05.30.Jp, 67.85.-d
## I Introduction
The rotational properties of trapped atomic Bose-Einstein condensates is a problem which has been studied very extensively in the last decades. Most of these studies have been performed in a harmonic potential, since this has been by far the most common form of confining potential that is used in experiments. We stress that the literature on this problem is very extensive, so we simply refer to some review articles [1; 2; 3; 4; 5].
The interatomic interactions are modelled as an effective hard-core potential. This potential is proportional to the so-called scattering length, which describes the elastic, s-wave atom-atom collisions. In the single-component condensates, when this effective interaction is repulsive (i.e., the scattering length is positive), as the angular momentum increases, vortices enter the cloud from its periphery and eventually a vortex lattice forms. When the angular momentum increases even more, the system reaches the so-called limit of "rapid rotation", where the mean-field approximation fails. The cloud enters a highly-correlated regime, and its many-body state resembles a (bosonic) Laughlin-like state. On the other hand, when the effective interaction is attractive (i.e., the scattering length is negative), the cloud is unstable against collapse if there is no trapping potential. Still, the system may be in a metastable state due to the trap. In this case, the cloud carries its angular momentum via center-of-mass excitation of the ground (non-rotating) state.
More recently Petrov [6] predicted the existence of "quantum droplets". This is a very interesting problem and has attracted a lot of attention, see, e.g., the review articles [7; 8], and Refs. [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. Interestingly enough, such droplets have been observed experimentally in mixtures of Bose-Einstein condensed gases [29; 30; 31; 32; 33] and in single-component gases with strong dipolar interactions [34; 35; 36; 37; 38; 39]. The basic idea in the case where droplets are formed from mixtures, is that by tuning the inter- and intra-species effective interactions, the mean-field interaction energy may become as small as we wish. In this case the next-to-leading-order correction of the energy (i.e., the so-called "Lee-Huang-Yang" term) [40], becomes comparable with the usual mean-field term and the two terms may balance each other, giving rise to self-bound droplets, even in the absence of any trapping potential.
Self-bound droplets belong to the class of systems which are superfluid. It is thus natural to examine their rotational properties. Compared with the problem of single-component atomic Bose-Einstein condensates, there are two main differences, which introduce novel effects in their superfluid properties. First of all, as we saw earlier, while quantum droplets are self-bound and do not require any trapping potential, in the case of single-component atomic condensates, the presence of a confining potential is absolutely necessary. Secondly, in quantum droplets, the sign of the nonlinear term depends on the density, being attractive for sufficiently low densities and repulsive, for higher densities. On the other hand, in single-component condensates the interaction is modelled as a hard-core potential and it is either (purely) repulsive, or (purely) attractive.
As we explain below, the question of how a quantum droplet carries angular momentum is essentially trivial when there is no external confining potential. On the other hand, it becomes novel and interesting when the droplet is confined in a trapping potential [20; 27]. This is precisely the problem that we investigate below. More specifically, we consider a harmonically-trapped two-dimensional "symmetric" droplet, i.e., equal masses \(M\) for each species, and an equal number of atoms, \(N/2\), in the two components. We minimize the energy under a fixed value of the total angular momentum \(L\hbar\), and a fixed value of the total atom number, \(N\), of the droplet.
According to the results of our study, the combination of a (harmonic) trapping potential with the more "complex" nonlinear term introduces a very serious difference in the rotational response of a droplet, as compared with the case of contact interactions. For a sufficiently small \(N\) the droplet executes center-of-mass rotation. For larger \(N\) and small \(L\) the droplet develops surface waves and eventually a single vortex enters the droplet. With increasing \(L\), depending on the value of \(N\), more vortices may enter the cloud, up to some critical value of \(L\). Beyond this value, it is no longer energetically favourable for the droplet to accommodate more vortices. The additional angular momentum is then carried via center-of-mass excitation, in a novel, "mixed" state.
In Sec. II we present the model that we use. Then, in Sec. III we present and analyse our results for some representative values of \(N\) and various values of \(L\). In Sec. IV we present the general picture that results from our analysis. In Sec. V we present some results from the excitation spectrum that we have found. In Sec. VI we investigate the experimental relevance of our results. Finally, in Sec. VII we summarize the main results of our study and compare the present problem with the "traditional" one, i.e., that of a single-component with an (attractive, or repulsive) effective contact interaction.
## II Model
Assuming that there is a very tight confining potential along the axis of rotation, we consider motion of the atoms in the perpendicular plane, i.e., two-dimensional motion. We also assume that the quantum droplet is confined in a two-dimensional harmonic potential
\[V(\rho)=\frac{1}{2}M\omega^{2}\rho^{2}, \tag{1}\]
where \(\omega\) is the frequency of the harmonic potential and \(\rho\) is the radial coordinate in cylindrical-polar coordinates.
As mentioned also above, we consider the "symmetric" case, where we have equal populations of atoms \(N/2\) in the two components, and equal mass \(M\) of the two species. In this case the system is described by a single order parameter \(\Psi(\rho,\theta)\), where \(\theta\) is the angle in cylindrical-polar coordinates. Working with fixed \(L\) and \(N\), we minimize the following extended energy functional, [41] which, in dimensionless units, takes the form [28; 9]
\[\mathcal{E}(\Psi,\Psi^{*})=\int\left(\frac{1}{2}|\nabla\Psi|^{2}+\frac{1}{2}\omega^{2}\rho^{2}|\Psi|^{2}+\frac{1}{2}|\Psi|^{4}\ln\frac{|\Psi|^{2}}{\sqrt{e}}\right)\,d^{2}r-\mu\int\Psi^{*}\Psi\,d^{2}r-\Omega\int\Psi^{*}\hat{L}\Psi\,d^{2}r. \tag{2}\]
In the above equation \(\Psi\) is normalized to the number of atoms, \(\int|\Psi|^{2}\,d^{2}r=N\). Also, \(\hat{L}\) is the operator of the angular momentum, while \(\mu\) and \(\Omega\) are Lagrange multipliers, corresponding to the conservation of the atom number and of the angular momentum, respectively.
The corresponding nonlinear equation that \(\Psi(\rho,\theta)\) satisfies is,
\[\left(-\frac{1}{2}\nabla^{2}+\frac{1}{2}\omega^{2}\rho^{2}+|\Psi|^{2}\ln| \Psi|^{2}-\Omega\hat{L}\right)\Psi=\mu\Psi. \tag{3}\]
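The \(\sqrt{e}\) inside the logarithm of Eq. (2) has disappeared in Eq. (3) because the variation of the quartic term cancels it; explicitly,

\[\frac{\delta}{\delta\Psi^{*}}\left(\frac{1}{2}|\Psi|^{4}\ln\frac{|\Psi|^{2}}{\sqrt{e}}\right)=|\Psi|^{2}\Psi\ln\frac{|\Psi|^{2}}{\sqrt{e}}+\frac{1}{2}|\Psi|^{2}\Psi=|\Psi|^{2}\ln\big(|\Psi|^{2}\big)\,\Psi,\]

which is the nonlinear term appearing on the left-hand side of Eq. (3).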
## III Rotational behaviour of the droplet for various values of the atom number
In order to understand the rotational properties of a quantum droplet in the presence of a harmonic confining potential, first of all, let us recall that in the absence of any trapping potential the droplet carries its angular momentum via center-of-mass excitation of the ground (non-rotating) state, since this is a self-bound state [27].
For the discussion that follows it is also useful to recall that in the absence of a harmonic potential and in the Thomas-Fermi limit, we have the so-called "flat-top" droplet. The energy per particle of the droplet is, in this case,
\[\frac{E}{N}=\frac{N}{2\pi\rho_{0}^{2}}\ln\frac{N}{\sqrt{e}\,\pi\rho_{0}^{2}}=\frac{\bar{n}}{2}\ln\frac{\bar{n}}{\sqrt{e}}, \tag{4}\]
where we have introduced the "mean" (two-dimensional) density \(\bar{n}=N/(\pi\rho_{0}^{2})\). The value of the mean density of the droplet that minimizes the energy (which is also equal to the density of the "flat-top" droplet, assumed to be constant) is \(\bar{n}=N/(\pi\rho_{0}^{2})=1/\sqrt{e}\approx 0.607\), while the corresponding minimum energy per particle is equal to \(-1/(2\sqrt{e})\approx-0.303\).
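Indeed, setting the derivative of Eq. (4) with respect to \(\bar{n}\) equal to zero gives

\[\frac{d}{d\bar{n}}\left(\frac{\bar{n}}{2}\ln\frac{\bar{n}}{\sqrt{e}}\right)=\frac{1}{2}\ln\frac{\bar{n}}{\sqrt{e}}+\frac{1}{2}=0\;\Longrightarrow\;\bar{n}=e^{-1/2}=\frac{1}{\sqrt{e}},\qquad\left.\frac{E}{N}\right|_{\rm min}=-\frac{1}{2\sqrt{e}}.\]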
In the presence of a harmonic potential, in addition to the size of the droplet \(\rho_{0}\) that we introduced above, we also have the oscillator length \(a_{\rm osc}=1/\sqrt{\omega}\). If the size of the droplet is much smaller than the oscillator length, \(\rho_{0}\ll a_{\rm osc}\) (i.e., for sufficiently small values of \(N\), or \(\omega\)), we still have center-of-mass excitation. We stress at this point that a unique feature of the harmonic potential is that the center-of-mass coordinate decouples from the relative coordinates, which is crucial for the results presented below [42; 43; 44]. In the opposite limit, \(\rho_{0}\gg a_{\rm osc}\) (i.e., for sufficiently large values of \(N\), or \(\omega\)), the rotational properties of the droplet are determined by the harmonic potential, where singly-quantized vortices carry the angular momentum.
Let us get an estimate about how \(N\) and \(\omega\) relate in the cross-over regime. From the expression \(\rho_{0}=[N/(\pi\sqrt{e})]^{1/2}\) that we mentioned above, which is valid in the Thomas-Fermi regime with no external potential, in order for \(\rho_{0}\) to be equal to \(a_{\rm osc}\), \(N\omega\approx\pi\sqrt{e}\approx 5.18\).
We minimized the functional of Eq. (2) numerically using the damped, second-order-in-fictitious-time method of constrained minimization described in Ref. [41]. In the calculations that we performed, a square spatial grid was used, with \(\delta x=\delta y=0.1\). We checked that this choice of grid step size produces results that are converged with respect to the grid resolution. We stress that the actual size of the domain in the calculations was larger than shown in the figures, to avoid boundary effects.
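As a much cruder illustration of the kind of computation involved, the following self-contained sketch relaxes Eq. (3) by a plain first-order imaginary-time (gradient-flow) iteration at fixed \(\Omega<\omega\) and fixed \(N\); it is not the damped second-order, fixed-\(L\) scheme of Ref. [41], and the box size, grid, time step and initial guess below are illustrative assumptions rather than the parameters used for the figures.

```python
import numpy as np

# Illustrative gradient-flow relaxation of Eq. (3) in the rotating frame.
# NOT the damped second-order, fixed-L scheme of Ref. [41]: here Omega (< omega)
# and the atom number N are fixed, and all parameters are demonstration values.
N_atoms, omega, Omega = 100.0, 0.05, 0.045
L_box, n_grid, dt, n_steps = 40.0, 128, 1e-3, 20000

x = np.linspace(-L_box / 2, L_box / 2, n_grid, endpoint=False)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(n_grid, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
V = 0.5 * omega**2 * (X**2 + Y**2)

# Initial guess: a Gaussian carrying an off-centre phase singularity.
psi = np.exp(-(X**2 + Y**2) / 20.0) * (X + 1j * Y + 0.5)
psi *= np.sqrt(N_atoms / (np.sum(np.abs(psi) ** 2) * dx * dx))

def apply_H(psi):
    """Left-hand side of Eq. (3) without the mu term (spectral derivatives)."""
    psik = np.fft.fft2(psi)
    kinetic = np.fft.ifft2(0.5 * K2 * psik)
    dpsidx = np.fft.ifft2(1j * KX * psik)
    dpsidy = np.fft.ifft2(1j * KY * psik)
    Lz_psi = -1j * (X * dpsidy - Y * dpsidx)          # angular momentum operator
    dens = np.abs(psi) ** 2
    nonlinear = dens * np.log(dens + 1e-300) * psi    # |psi|^2 ln(|psi|^2) psi
    return kinetic + V * psi + nonlinear - Omega * Lz_psi

for _ in range(n_steps):
    psi = psi - dt * apply_H(psi)                                    # descend
    psi *= np.sqrt(N_atoms / (np.sum(np.abs(psi) ** 2) * dx * dx))   # restore N

# Angular momentum per particle of the relaxed state.
psik = np.fft.fft2(psi)
dpsidx, dpsidy = np.fft.ifft2(1j * KX * psik), np.fft.ifft2(1j * KY * psik)
Lz = np.real(np.sum(np.conj(psi) * (-1j) * (X * dpsidy - Y * dpsidx))) * dx**2
print("L/N after relaxation:", Lz / N_atoms)
```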
Figure 1 shows the result of such a calculation, for the density and the phase of the order parameter, as well as for the energy \(E(L)\), with \(\omega=0.05\) and \(N=50\), i.e., \(N\omega=2.5\). Fitting the energy with a quadratic polynomial, we find that
\[E(L)\approx-10.6375+0.050002\,L+6.378\times 10^{-8}\,L^{2}. \tag{5}\]
Both from the density, as well as from the dispersion relation, it is clear that we have center-of-mass excitation of the droplet for these values of \(\omega\) and \(N\). The constant term \(-10.6375\) in Eq. (5) is the energy of the non-rotating state. Equation (4) gives a total energy which is \(\approx-15.1633\). This, combined with the zero-point energy of the harmonic potential in two dimensions, i.e., \(N\omega\), gives \(-12.6633\). This number deviates from the numerical result \(-10.6375\) and is lower due to the fact that for \(N=50\) the system has not yet reached the Thomas-Fermi limit and the (neglected) kinetic energy is not negligible. Turning to the term which is linear in \(L\) in Eq. (5), this is due to the harmonic potential, while the term which is quadratic in \(L\) is negligible. In other words, the more general result for \(E(L)\) is, in this regime,
\[E(L)=E_{\rm COM}(L)=E(L=0)+L\omega. \tag{6}\]
We stress that Eq. (6) provides an upper bound for the energy, for any value of \(N\) and \(L\), as we explain in more detail below.
For fixed \(\omega\) and larger values of \(N\) the size of the droplet becomes comparable with \(a_{\rm osc}\), \(\rho_{0}\approx a_{\rm osc}\). In this case the droplet starts to get "squeezed" due to the trapping potential. Thus, the trapping potential tends to increase the mean value of the density of the droplet, \(\bar{n}\).
Figure 2: (Colour online) Upper plots: The density and the phase of the droplet order parameter, in the lowest-energy state, for \(N=100\), \(\omega=0.05\), and \(L/N=0.0,0.6,10\), and \(3.0\). Lower plot: Solid line, with data points: The corresponding dispersion relation, in the rotating frame, i.e., \(E_{\rm rot}(L/N)-E(L/N=0)\) as function of \(L/N\), with \(\Omega=0.051\). Dashed line: Same as above for the center-of-mass excitation of the non-rotating state.
Figure 1: (Colour online) Upper plots: The density and the phase of the droplet order parameter, in the lowest-energy state, for \(N=50\), \(\omega=0.05\), and \(L/N=0.0\) and \(1.0\). Lower plot: The corresponding dispersion relation, i.e., \(E=E(L/N)\).
This, in turn, increases the energy term, too [see Eq. (4)]. In the presence of a vortex state \(\bar{n}\) drops and therefore a vortex state may be energetically favourable. Indeed, as we have also seen numerically, as \(N\), or as \(\omega\), increase, we have vortex, rather than center-of-mass, excitation of the droplet.
Such an example is shown in Fig. 2, where \(N=100\) and \(\omega=0.05\), i.e., \(N\omega=5\). Here we see that for small values of \(L\) the axial symmetry of the droplet is distorted. This is due to the fact that two vortices approach the droplet from opposite sides, with one being further away from the trap center than the other. Eventually, when \(L=N\) the vortex state that is closer moves to the center of the trap and the density of the droplet becomes axially-symmetric.
Figure 3: (Cont.) (Colour online) Upper plots: The density and the phase of the droplet order parameter, in the lowest-energy state, for \(N=200\), \(\omega=0.05\), and \(L/N=2.0,2.6\), and \(3.0\). Lower plot: Solid line, with data points: The corresponding dispersion relation, in the rotating frame i.e., \(E_{\rm rot}(L/N)-E(L/N=0)\) as function of \(L/N\), with \(\Omega=0.051\). Dashed line: Same as above for the center-of-mass excitation of the non-rotating state.
For even larger values of \(L\), \(L>N\), however, instead of more vortices entering the cloud, the extra angular momentum is carried via center-of-mass excitation of the state with \(L=N\), i.e., the state with one vortex located at the center of the droplet. This is in sharp contrast with the case of contact interactions. It is a generic result and is one of the novel aspects of the present study.
The corresponding dispersion relation is also shown in Fig. 2. Instead of plotting it in the lab frame, we choose to plot it in the rotating frame (in this plot and in all the other plots of the dispersion relation that follow below), because its structure is more clearly visible. More specifically, we plot \(E_{\rm rot}(L/N)-E(L/N=0)\), where \(E_{\rm rot}(L/N)=E(L/N)-L\Omega\), with \(\Omega=0.051\) (i.e., we choose a slightly larger value of \(\Omega\) than \(\omega=0.05\)). When \(L>N\), we see that the dispersion relation becomes linear, as expected, since the nonlinear term of the energy is unaffected by the angular momentum in this range of \(L\) (simply because the shape of the droplet does not depend on \(L\) in this range of \(L\)).
To get a more quantitative description of the transition from center-of-mass to vortex excitation, let us consider the eigenfunctions \(\phi_{m}(\rho,\theta)\) of the two lowest-Landau levels as trial order parameters for the ground, non-rotating state (where \(L=0\)), assuming that the oscillator length \(a_{\rm osc}\) is equal to \(\rho_{0}\),
\[\phi_{0}=\frac{\sqrt{N}}{\sqrt{\pi}\rho_{0}}e^{-\rho^{2}/(2\rho_{0}^{2})}, \tag{7}\]
and for the state with one singly-quantized vortex (where \(L=N\)),
\[\phi_{1}=\frac{\sqrt{N}}{\sqrt{\pi}\rho_{0}^{2}}\rho e^{i\theta}e^{-\rho^{2}/ (2\rho_{0}^{2})}. \tag{8}\]
Evaluating the energy due to the nonlinear term,
\[E_{{\rm int},i}=\frac{1}{2}\int|\phi_{i}|^{4}\ln\frac{|\phi_{i}|^{2}}{\sqrt{e} }\,d^{2}r, \tag{9}\]
for the state \(\phi_{0}\) we find
\[\frac{E_{{\rm int},0}}{N}=\frac{N}{4\pi\rho_{0}^{2}}\left(\ln\frac{N}{\pi \sqrt{e}\rho_{0}^{2}}-\frac{1}{2}\right)=\frac{\bar{n}}{4}\left(\ln\frac{\bar{ n}}{\sqrt{e}}-\frac{1}{2}\right), \tag{10}\]
while for the state \(\phi_{1}\),
\[\frac{E_{{\rm int},1}}{N}\approx\frac{N}{2\pi\rho_{0}^{2}}\left( \frac{1}{4}\ln\frac{N}{\pi\sqrt{e}\rho_{0}^{2}}-\frac{3}{8}+0.057\right)\] \[=\frac{\bar{n}}{2}\left(\frac{1}{4}\ln\frac{\bar{n}}{\sqrt{e}}- \frac{3}{8}+0.057\right). \tag{11}\]
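Equations (10) and (11) follow from elementary Gaussian integrals. For instance, with \(|\phi_{0}|^{2}=\bar{n}\,e^{-\rho^{2}/\rho_{0}^{2}}\) and \(\bar{n}=N/(\pi\rho_{0}^{2})\),

\[\int|\phi_{0}|^{4}\,d^{2}r=\frac{N\bar{n}}{2},\qquad\int|\phi_{0}|^{4}\,\frac{\rho^{2}}{\rho_{0}^{2}}\,d^{2}r=\frac{N\bar{n}}{4},\qquad E_{{\rm int},0}=\frac{1}{2}\left(\frac{N\bar{n}}{2}\ln\frac{\bar{n}}{\sqrt{e}}-\frac{N\bar{n}}{4}\right)=\frac{N\bar{n}}{4}\left(\ln\frac{\bar{n}}{\sqrt{e}}-\frac{1}{2}\right),\]

which is Eq. (10); Eq. (11) is obtained in the same way from \(|\phi_{1}|^{2}\).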
When we have center-of-mass excitation (of the state with \(L=0\)), from Eq. (6) it follows that
\[E_{\rm COM}(L=N)-E(L=0)=N\omega. \tag{12}\]
When we have vortex excitation,
\[E_{\rm vor}(L=N)-E(L=0)=N\omega+E_{{\rm int},1}-E_{{\rm int},0}. \tag{13}\]
From the last two equations, we see that it is the difference \(E_{{\rm int},1}-E_{{\rm int},0}\) which determines whether we will have center-of-mass, or vortex, excitation. It turns out that the critical value of \(N/\rho_{0}^{2}\) which gives \(E_{{\rm int},1}=E_{{\rm int},0}\) is approximately equal to 4. If \(\rho_{0}^{2}=a_{\rm osc}^{2}=1/\omega=20\), then the critical value of \(N\) is approximately 80. We stress that the calculation presented above compares the energy between the ground state and the state with one vortex located at the center of the droplet. From our numerical results it follows that, for \(\omega=0.05\), the critical number of \(N\) for the transition from center-of-mass excitation to vortex excitation is between 98.6 and 98.7.
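Writing \(x=\ln(\bar{n}/\sqrt{e})\), the condition \(E_{{\rm int},1}=E_{{\rm int},0}\) obtained from Eqs. (10) and (11) amounts to

\[\frac{x}{4}-\frac{1}{8}=\frac{x}{8}-\frac{3}{16}+\frac{0.057}{2}\;\Longrightarrow\;x\approx-0.27,\qquad\bar{n}=\sqrt{e}\,e^{x}\approx 1.26,\qquad\frac{N}{\rho_{0}^{2}}=\pi\bar{n}\approx 4,\]

so that \(\rho_{0}^{2}=1/\omega=20\) indeed gives \(N\approx 80\).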
In order to examine what happens for even larger values of \(N\), we show in Fig. 3 the result of our calculations for \(N=200\) and \(\omega=0.05\), i.e., \(N\omega=10\). We observe that for \(0<L<N\) the droplet is again distorted from axial symmetry due to the approach of a vortex from infinity. When \(L=N\) this vortex ends up again at the center of the droplet. However, since the atom number \(N\) is now larger, for \(L>N\) a second vortex enters the system, and eventually a two-fold symmetric state forms. Here it is only for \(L/N\) larger than \(\approx 2.6\) that the droplet carries its additional angular momentum via center-of-mass excitation, i.e., via a "mixed" state. The dispersion relation (in the rotating frame), which is also shown in Fig. 3, becomes linear again, now for \(L/N\) exceeding \(\approx 2.6\).
In Fig. 4 we have considered an even larger value of \(N=270\), with \(\omega\) still being equal to 0.05 (\(N\omega=13.5\)). Clearly the mean density of the non-rotating droplet also increases. As a result, we observe up to four vortices which are energetically favourable, before the "mixed" state, i.e., the center-of-mass excitation of this state with four vortices, becomes the state of lowest energy, for \(L/N\) exceeding \(\approx 3.4\).
Up to now all our results have been derived for fixed \(L\). From the dispersion relation, one may also evaluate the angular momentum of the droplet if \(\Omega\) is fixed, instead. Figure 5 shows \(L/N=L/N(\Omega/\omega)\), for \(N=100\), 200, and 270, with \(\omega=0.05\) (the steps in the angular momentum per particle \(L/N\) that we used to produce this plot were equal to 0.2). In this plot we see the usual plateaus, also known in the case of single-component condensates with an effectively repulsive contact interaction. We stress that for \(\Omega\to\omega^{-}\), this plot diverges, as we argue in the following section [see Eq. (17) and the relevant discussion].
## IV General picture and limit of rapid rotation
From the examples presented above, and other cases that we have investigated, one may get the more general picture that emerges in this system. For sufficiently small \(N\) (when \(\rho_{0}\ll a_{\rm osc}\)) we have center-of-mass excitation of the non-rotating ground state for all values of \(L\). For larger values of \(N\), where \(\rho_{0}\gtrsim a_{\rm osc}\), with increasing \(L\) one or more vortices enter the cloud. However, there is a limit to this. As the number of vortices increases, \(\bar{n}\) drops. Decreasing \(\bar{n}\) even further is not energetically favourable. As a result, if \(L\) increases further, the additional angular momentum is carried via center-of-mass
excitation of some "mixed" state. The dispersion relation also becomes a straight line beyond this specific value of \(L\).
One estimate for the maximum number of vortices \(N_{v}\) that the droplet accommodates before it turns to center-of-mass excitation is that the mean density is equal to the one that minimizes the energy of Eq. (4), i.e., \(\bar{n}=1/\sqrt{e}\),
\[\frac{N}{S-N_{v}\sigma}=\frac{1}{\sqrt{e}}. \tag{14}\]
Here \(S\) and \(\sigma\) are the "surfaces" of the droplet and of each vortex, respectively. An approximate expression for \(\sigma\) is \(\sigma\approx\pi\xi^{2}\), where the coherence length \(\xi\) gives roughly the linear size of the vortex.
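Rearranged, Eq. (14) gives the estimate

\[N_{v}=\frac{S-\sqrt{e}\,N}{\sigma}\approx\frac{S-\sqrt{e}\,N}{\pi\xi^{2}},\]

i.e., the vortex cores account for the excess of the droplet surface \(S\) over the area \(\sqrt{e}\,N\) occupied by the fluid at its optimal density.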
According to the analysis presented above, one may also make a general statement about the dispersion relation. For any two states with angular momentum \(L_{1}\)
Figure 4: (Cont.) (Colour online) Upper plots: The density and the phase of the droplet order parameter, in the lowest-energy state, for \(N=270\), \(\omega=0.05\), and \(L/N=3.0,3.4\), and \(5.0\). Lower plot: Solid line, with data points: The corresponding dispersion relation, in the rotating frame, i.e., \(E_{\rm rot}(L/N)-E(L/N=0)\) as function of \(L/N\), with \(\Omega=0.051\). Dashed line: Same as above for the center-of-mass excitation of the non-rotating state.
and \(L_{2}\), with \(L_{1}<L_{2}\), \(E(L_{2})\) has to be lower than \(E(L_{1})+(L_{2}-L_{1})\omega\),
\[E(L_{2})<E(L_{1})+(L_{2}-L_{1})\omega. \tag{15}\]
If this inequality is violated, one may always start with the state of angular momentum \(L_{1}\) and excite it via center-of-mass excitation to a state with angular momentum \(L_{2}\). In this case, \(E(L_{2})\) will be equal to \(E(L_{1})+(L_{2}-L_{1})\omega\). From Eq. (15) it also follows that, for \(L_{2}\to L_{1}\),
\[\frac{dE(L)}{dL}<\omega, \tag{16}\]
i.e., the slope of the dispersion relation cannot exceed \(\omega\).
Another consequence of Eq. (15) is that, if one works with a fixed rotational frequency of the trap \(\Omega\) and not with a fixed angular momentum, \(\Omega\) cannot exceed \(\omega\). Indeed, according to Eq. (15),
\[E_{\rm rot}(L_{2})<E_{\rm rot}(L_{1})+(L_{2}-L_{1})(\omega-\Omega). \tag{17}\]
Therefore, if \(\Omega\geq\omega\), \(E_{\rm rot}(L_{2})<E_{\rm rot}(L_{1})\) and \(E_{\rm rot}(L)\) is a decreasing function of \(L\). In other words, if \(\Omega\) exceeds \(\omega\), then the energy is unbounded. This result is a combined effect of the "mixed" state that we have seen, with the centrifugal force, which gives rise to the effective potential \(M(\omega^{2}-\Omega^{2})\rho^{2}/2\). Last but not least, we stress that this result is also true in the case of contact interactions, in a harmonic trapping potential.
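For completeness, Eq. (17) is just Eq. (15) rewritten in the rotating frame: substituting \(E(L)=E_{\rm rot}(L)+L\Omega\) into Eq. (15) gives

\[E_{\rm rot}(L_{2})+L_{2}\Omega<E_{\rm rot}(L_{1})+L_{1}\Omega+(L_{2}-L_{1})\,\omega,\]

which is Eq. (17) after collecting the terms proportional to \(L_{2}-L_{1}\).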
## V Excitation spectrum
All the states that we have presented so far are the ones of lowest energy, for a fixed \(N\) and \(L\). Although this is one of the most important questions, a separate question is the excitation spectrum. We should stress that the excitation spectrum is not only interesting theoretically, but is also experimentally relevant. While we have not made a complete study of the excited states, we have managed to find at least part of them. Interestingly enough, the arguments presented in Sec. IV allow us to get a rather easy understanding of this problem and to even predict the existence of the states that we have identified.
In the results which are presented below, we have focused on the case \(N=200\) and \(\omega=0.05\) and we have identified two classes of states in the excitation spectrum. The first class includes multiply-quantized vortex states, of the form \(\Psi_{S}(\rho,\theta)=f(\rho)e^{iS\theta}\), where \(S\) is the winding number, which have an axially-symmetric density distribution. These are solutions of the equation
\[-\frac{1}{2}\frac{\partial^{2}f}{\partial\rho^{2}}-\frac{1}{2 \rho}\frac{\partial f}{\partial\rho}+\frac{S^{2}}{2\rho^{2}}f+\frac{1}{2} \omega^{2}\rho^{2}f\] \[+|f|^{2}\ln|f|^{2}f=\mu f. \tag{18}\]
Starting with \(L/N=S=2\), we have found that this doubly-quantized vortex state, \(\Psi_{S=2}\), is very close in energy with the actual state of lowest energy, as shown in Fig. 6. This proximity is not a surprise, but rather is
Figure 5: The functions \(L/N=L/N(\Omega/\omega)\), derived from the lowest-energy states, for \(N=100\) (black, dashed curve), 200 (black, solid curve), and 270 (grey, dashed curve), with \(\omega=0.05\).
Figure 6: (Colour online) Upper plots: The density and the phase of the droplet order parameter, in the excited, multiply-quantized vortex state(s) \(\Psi_{S=2}\), for \(N=200\), \(\omega=0.05\), and \(L/N=2.0\) and 3.0. Lower plot: Solid line, with data points: The corresponding dispersion relation, in the rotating frame, i.e., \(E_{\rm rot}(L/N)-E(L/N=0)\) as function of \(L/N\), with \(\Omega=0.051\). Dashed line: Same as above for the lowest-energy state.
expected, i.e., it is due to the fact that the mean densities of the two states are very close to each other. For \(L/N>2\), we then have center-of-mass excitation of the doubly-quantized vortex state, with an energy which increases linearly with the angular momentum, as we saw earlier. Clearly what we described for \(L/N\geq 2\) is general. For example, the state \(\Psi_{S=3}\) is also present in the excitation spectrum for all values of \(L/N\geq 3\), etc.
The multiply-quantized vortex states described above have an axially-symmetric density distribution. The second class of states that we have identified in the excitation spectrum, are states which break the axial symmetry of the problem. In this case the centrifugal term [i.e., the third term on the left in Eq. (18)] favours an axially-asymmetric density distribution. As a result, the cloud "localizes", since this is energetically more favourable (in order, again, for the droplet to achieve the optimal mean density). Examples of such excited states are shown in Fig. 7, as well as the corresponding energy.
## VI Experimental relevance of our results
If one returns to the physical units, the actual atom number which corresponds to Eq. (2) is rescaled by the factor [9; 28]
\[N_{0}=\frac{\ln^{2}(a_{1,2}/a)}{16\pi\sqrt{e}}, \tag{19}\]
where
\[\ln(a_{1,2}/a)=\sqrt{\frac{\pi}{2}}\left(\frac{a_{z}}{a^{\rm 3D}}-\frac{a_{z} }{a_{1,2}^{\rm 3D}}\right). \tag{20}\]
Here \(a_{z}\) is the "width" of the droplet along the axis of rotation, and \(a^{\rm 3D}\), \(a_{1,2}^{\rm 3D}\) are the scattering lengths for s-wave elastic atom-atom collisions between the same and different species, respectively. For typical values \(a_{z}=0.1\)\(\mu\)m, \(a^{\rm 3D}=10\) nm, and \(a_{1,2}^{\rm 3D}=-10.1\) nm, \(\ln(a_{1,2}/a)\approx 25\). Then, according to Eq. (19), \(N_{0}\approx 7.5\). Therefore, the range of \(N\) that we have considered (50 up to 270) corresponds roughly to between \(4\times 10^{2}\) and \(2\times 10^{3}\) atoms in an experiment.
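With these numbers (all lengths in nm) the conversion is direct arithmetic:

\[\ln(a_{1,2}/a)=\sqrt{\tfrac{\pi}{2}}\left(\tfrac{100}{10}-\tfrac{100}{-10.1}\right)\approx 1.25\times 19.9\approx 25,\qquad N_{0}=\frac{25^{2}}{16\pi\sqrt{e}}\approx\frac{625}{83}\approx 7.5,\]

so \(N=50\) and \(N=270\) correspond to \(NN_{0}\approx 4\times 10^{2}\) and \(\approx 2\times 10^{3}\) atoms, respectively.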
## VII Summary of the results with a comparison with the problem of contact interactions
In the present study we investigated the rotational behaviour of a quasi-two-dimensional quantum droplet, which consists of two distinguishable Bose-Einstein condensed gases, assuming that the droplet is confined in a harmonic trapping potential.
For a fixed trap frequency and sufficiently small atom numbers, the droplet does not host any vortices, but rather it carries its angular momentum via center-of-mass excitation of its non-rotating, ground state. This is very much like the case of a single-component Bose-Einstein condensed gas, which has an effectively-attractive interatomic interaction potential and is confined in a harmonic trap. The only difference between the two problems is
Figure 7: (Colour online) Upper plots: The density and the phase of the droplet order parameter, in the excited states with an axially-asymmetric density distribution for \(N=200\), \(\omega=0.05\), and \(L/N=3.8,4.4,5.0\), and \(5.6\). Lower plot: Solid line, with data points: The corresponding dispersion relation, in the rotating frame, i.e., \(E_{\rm rot}(L/N)-E(L/N=0)\) as function of \(L/N\), with \(\Omega=0.051\). Dashed line: Same as above for the lowest-energy state.
that, while in the case of droplets we have a stable system (as a consequence of quantum fluctuations), in the case of a single component the system is metastable.
For a larger atom number, and sufficiently small values of the angular momentum, the droplet behaves in the usual way, with vortices entering it as the angular momentum increases. As more and more vortices enter the droplet, its average density drops, which is energetically favourable. However, as the number of vortices increases, eventually it is no longer energetically favourable for even more vortices to enter the droplet. As a result, beyond some critical value of the angular momentum the droplet carries the additional angular momentum via center-of-mass excitation of a vortex-carrying state.
For an effectively-attractive single-component and harmonically-trapped Bose-Einstein condensate the angular momentum is carried via center-of-mass excitation of the non-rotating state, for all values of the angular momentum. On the other hand, for an effectively-repulsive interaction this never happens (in the lowest-energy state) [45]. In the case of quantum droplets the situation is different due to a simple and important difference between the two problems. For a contact potential with an effective repulsive interaction, the interaction energy is a monotonically increasing function of the density. On the other hand, in the case of droplets, the interaction energy is not a monotonic function of the density [see Eq. (4)], but rather it has a minimum at some specific value of the density. As a result, as \(L\) increases, in the case of a contact potential with an effective repulsive interaction, the cloud expands radially and this lowers its mean density and also the corresponding interaction energy. Eventually, the system enters the highly-correlated "Laughlin-like" regime that we mentioned in the Introduction. On the other hand, for the case of droplets, the decrease of the mean density due to the vortices (for a sufficiently large atom number) is energetically favourable only until the density reaches some finite value.
The conclusion that follows from the above discussion is the following. For increasing \(L\), in a single-component condensate the gas enters the highly-correlated Laughlin regime. On the other hand, in the case of droplets, for a sufficiently large angular momentum, a droplet is always in a "mixed" state, i.e., in a state of center-of-mass excitation of a state which includes vortices.
| Two-dimensional量子液滴の回転特性を調査し、この液滴はハロンポテンシャルで閉じ込められており、単一成分の原子ガスの状態と比較します。そのトラップ周波数に固有の値を選び、原子数に代表的な値を選び、最低エネルギー状態を決定します。角運動量が大きくなるにつれて、角度運動量を増加させます。十分に小さな原子数の場合、角運動量は中心質量励起によって伝達されます。大きい値の場合、角運動量が十分に小さい場合、渦励起を観測します。原子数の実際の値によって、1つ以上の渦が液滴に突入します。角運動量が一定の値を超えると、液滴はさらに渦を収容できません。追加の角運動量を中心質量励起によって伝達する「混在状態」に変化します。最終的には、励起スペクトルも簡潔に説明します。 |
2307.07276 | The center of the asymptotic Hecke category and unipotent character
sheaves | In 2015, Lusztig [Bull. Inst. Math. Acad. Sin. (N.S.)10(2015), no.1, 1-72]
showed that for a connected reductive group over an algebraic closure of a
finite field the associated (geometric) Hecke category admits a truncation in a
two-sided Kazhdan--Lusztig cell, making it a categorification of the asymptotic
algebra (J-ring), and that the categorical center of this "asymptotic Hecke
category" is equivalent to the category of unipotent character sheaves
supported in the cell. Subsequently, Lusztig noted that an asymptotic Hecke
category can be constructed for any finite Coxeter group using Soergel
bimodules. Lusztig conjectured that the centers of these categories are modular
tensor categories (which was then proven by Elias and Williamson) and that for
non-crystallographic finite Coxeter groups the S-matrices coincide with the
Fourier matrices that were constructed in the 1990s by Lusztig, Malle, and
Brou\'e--Malle. If the conjecture is true, the centers may be considered as
categories of "unipotent character sheaves" for non-crystallographic finite
Coxeter groups.
In this paper, we show that the conjecture is true for dihedral groups and
for some (we cannot resolve all) cells of H3 and H4. The key ingredient is the
method of H-reduction and the identification of the (reduced) asymptotic Hecke
category with known categories whose center is already known as well. We
conclude by studying the asymptotic Hecke category and its center for some
infinite Coxeter groups with a finite cell. | Liam Rogel, Ulrich Thiel | 2023-07-14T11:14:54 | http://arxiv.org/abs/2307.07276v2 | # The center of the asymptotic Hecke category and unipotent character sheaves
###### Abstract.
In 2015, Lusztig [Bull. Inst. Math. Acad. Sin. (N.S.)10(2015), no.1, 1-72] showed that for a connected reductive group over an algebraic closure of a finite field the associated (geometric) Hecke category admits a truncation in a two-sided Kazhdan-Lusztig cell, making it a categorification of the asymptotic algebra (\(J\)-ring), and that the categorical center of this "asymptotic Hecke category" is equivalent to the category of unipotent character sheaves supported in the cell. Subsequently, Lusztig noted that an asymptotic Hecke category can be constructed for any finite Coxeter group using Soergel bimodules. Lusztig conjectured that the centers of these categories are modular tensor categories (which was then proven by Elias and Williamson) and that for non-crystallographic finite Coxeter groups the \(S\)-matrices coincide with the Fourier matrices that were constructed in the 1990s by Lusztig, Malle, and Broue-Malle. If the conjecture is true, the centers may be considered as categories of "unipotent character sheaves" for non-crystallographic finite Coxeter groups.
In this paper, we show that the conjecture is true for dihedral groups and for some (we cannot resolve all) cells of \(H_{3}\) and \(H_{4}\). The key ingredient is the method of \(H\)-reduction and the identification of the (reduced) asymptotic Hecke category with known categories whose center is already known as well. We conclude by studying the asymptotic Hecke category and its center for some infinite Coxeter groups with a finite cell.
University of Kaiserslautern-Landau (RPTU), Department of Mathematics, 67653 Kaiserslautern, Germany rogel@mathematik.uni-kl.de, thiel@mathematik.uni-kl.de
## 1. Introduction
The representations of finite simple groups are a crucial ingredient in the investigation of finite symmetries. Most finite simple groups arise from finite reductive groups. An example of a reductive group is \(G\coloneqq\operatorname{SL}_{n}(\overline{\mathbb{F}}_{p})\) for a prime number \(p\); its finite variants are \(G(\mathbb{F}_{q})\coloneqq\operatorname{SL}_{n}(\mathbb{F}_{q})\) for powers \(q\) of \(p\), yielding the finite simple groups \(\operatorname{PSL}_{n}(\mathbb{F}_{q})\). An intrinsic construction produces from a reductive group \(G\) a finite group \(W\), the _Weyl group_, which controls much of the structure of \(G\). The Weyl group of \(\operatorname{SL}_{n}(\overline{\mathbb{F}}_{p})\), for example, is the symmetric group \(\mathfrak{S}_{n}\). Note that \(W\) is independent of \(p\).
Deligne-Lusztig theory [13] identifies an important subset of irreducible complex representations of finite reductive groups: the "unipotent" ones.
### The \(G\)-equivariant group
Let \(G\) be a connected reductive group over \(\overline{\mathbb{F}}_{p}\). Let \(\ell\neq p\) and let \(D^{b}_{G,c}(G)\) be the \(G\)-equivariant constructible bounded derived category of \(\ell\)-adic sheaves on \(G\). Lusztig's
character sheaves [40] are certain simple perverse sheaves in \(D^{b}_{G,c}(G)\). As for characters, there is a notion of unipotent character sheaves [40]. Taking the characteristic function of the Frobenius \(F\) on a unipotent character sheaf gives the corresponding unipotent almost character [57] for the finite group \(G^{F}\).1 The transition matrix \(F_{W}\) between suitably normalized unipotent almost characters and unipotent characters is the Fourier matrix from [38]. Like the parametrization of unipotent characters, it only depends on the Weyl group \(W\) of \(G\).
Footnote 1: This only holds up to scalars and requires some restrictions. We can ignore this here.
Let \(H_{W}\) be the Hecke algebra of \(W\) with parameters as in [21]. The multiplicative properties of the Kazhdan-Lusztig basis \(\{b_{w}\}_{w\in W}\) of \(H_{W}\) lead to a decomposition of \(W\) into two-sided cells [34]. Let \(\mathcal{U}_{G}\) be the subcategory of \(D^{b}_{G,c}(G)\) consisting of direct sums of unipotent character sheaves. To each unipotent character one can associate a unique two-sided cell \(c\) of \(W\)[38] and this leads to a decomposition of \(U(W)\) into subsets \(U^{c}(W)\). This categorifies by [41, 48] and leads to a decomposition of \(\mathcal{U}_{G}\) into subcategories \(\mathcal{U}_{G}^{c}\). The Fourier matrix has block diagonal form with blocks \(F_{W}^{c}\) indexed by the cells of \(W\).
Fix a Borel subgroup \(B\) of \(G\). Let \(D^{b}_{B,c}(G/B)\) be the \(B\)-equivariant constructible bounded derived category of \(\ell\)-adic sheaves on \(G/B\). This is a monoidal category with respect to convolution. The _geometric Hecke category_\(\mathcal{H}_{G}\) of \(G\) is the subcategory of \(D^{b}_{B,c}(G/B)\) consisting of semisimple perverse sheaves [60]. This is a monoidal subcategory which categorifies \(H_{W}\) in the sense that there is an algebra isomorphism
\[H_{W}\to[\mathcal{H}_{G}]_{\oplus}\;,\quad b_{s}\mapsto[B_{s}] \tag{1.1}\]
into the Grothendieck ring of \(\mathcal{H}_{G}\), see [56]. Here, \(s\in W\) is a simple reflection and \(B_{s}\) is the constant sheaf supported on \(\overline{BsB}/B\). An important fact, which relies on the decomposition theorem [2], is that under this isomorphism the Kazhdan-Lusztig basis element \(b_{w}\) for \(w\in W\) gets mapped to a (uniquely characterized) direct summand \(B_{w}\) of products of \(B_{s}\) corresponding to a reduced expression of \(w\). This mirrors the properties of the Kazhdan-Lusztig basis and the indecomposable objects \(\{B_{w}\}_{w\in W}\) of \(\mathcal{H}_{G}\) categorify the basis \(\{b_{w}\}_{w\in W}\).
Fix a cell \(c\) and let \(\mathcal{H}_{G}^{c}\) be the subcategory of \(\mathcal{H}_{G}\) consisting of sheaves supported on \(c\). This category is monoidal as well, but with respect to truncated convolution [46]. We call it the _asymptotic_ Hecke category since it is a categorification of Lusztig's asymptotic algebra [42]. It follows from [46, 6, 55] that there is a finite group \(\Gamma_{W}^{c}\), a finite \(\Gamma_{W}^{c}\)-set \(Y_{W}^{c}\), a 3-cocycle \(\omega_{W}^{c}\) on \(\Gamma_{W}^{c}\), and a monoidal equivalence
\[\mathcal{H}_{G}^{c}\simeq\operatorname{Coh}_{\Gamma_{W}^{c}}^{\omega_{W}^{c}}( Y_{W}^{c}\times Y_{W}^{c})\;, \tag{1.2}\]
the latter being the category of \(\Gamma_{W}^{c}\)-equivariant sheaves on \(Y_{W}^{c}\times Y_{W}^{c}\) with convolution as tensor product and associator \(\omega_{W}^{c}\). This description is key to
understanding the following construction more explicitly. The _Drinfeld center_ of a monoidal category \(\mathcal{C}\) is the category \(\mathcal{Z}(\mathcal{C})\) of pairs \((Z,\gamma)\), where \(Z\in\mathcal{C}\) and \(\gamma\) is a functorial isomorphism
\[\gamma_{X}\colon X\otimes Z\stackrel{{\simeq}}{{\longrightarrow}}Z \otimes X \tag{1.3}\]
for all \(X\in\mathcal{C}\) which is compatible with the associator, see [23, SS7.13]. Note that (1.3) induces a braiding on the Drinfeld center. From (1.2) one obtains
\[\mathcal{Z}(\mathcal{H}^{c}_{G})\simeq\Gamma^{c}_{W}\text{-}\text{Vec}^{ \omega^{c}_{W}}_{\Gamma^{c}_{W}} \tag{1.4}\]
as braided monoidal categories, the latter being the category of \(\Gamma^{c}_{W}\)-equivariant \(\Gamma^{c}_{W}\)-graded vector spaces, see [23, Example 8.5.4]. This is a modular tensor category [23, SS8.13], and it follows from [45] that its \(S\)-matrix (which involves the braiding) is equal to the Fourier matrix \(F^{c}_{W}\).
Lusztig [48] gave geometric meaning to \(\mathcal{Z}(\mathcal{H}^{c}_{G})\) by constructing a monoidal structure on \(\mathcal{U}^{c}_{G}\) and establishing a natural monoidal equivalence
\[\mathcal{U}^{c}_{G}\simeq\mathcal{Z}(\mathcal{H}^{c}_{G}). \tag{1.5}\]
In particular, \(\mathcal{U}^{c}_{G}\) is a modular tensor category whose \(S\)-matrix is \(F^{c}_{W}\). We note that the equivalence (1.5) seems to fit into a more general "untruncated" picture that is being established by the work of Bezrukavnikov-Finkelberg-Ostrik [7], Ben-Zvi-Nadler [5], and Bezrukavnikov-Ionov-Tolmachov-Varshavsky [8].
Let \(\mathfrak{h}\) be the root lattice of \(G\). Placing \(\mathfrak{h}\) in degree 2 of the algebra \(R\) of regular functions on \(\overline{\mathbb{Q}}_{\ell}\otimes_{\mathbb{Z}}\mathfrak{h}\) makes \(R\), as a graded algebra, canonically isomorphic to \(H^{\bullet}_{B}(\operatorname{pt},\overline{\mathbb{Q}}_{\ell})\), the total \(B\)-equivariant cohomology of a point with coefficients in \(\overline{\mathbb{Q}}_{\ell}\). By [56], taking \(B\)-equivariant total cohomology on \(G/B\) yields a fully-faithful monoidal graded functor
\[\mathcal{H}_{G}\to R\text{-}\text{gbim}\, \tag{1.6}\]
the latter category being the category of graded \(R\)-bimodules. Hence, \(\mathcal{H}_{G}\) is monoidally equivalent to a full subcategory of \(R\)-gbim: this is the category \(\mathcal{H}_{W}\) of _Soergel bimodules_ introduced by Soergel [58, 59]. The key feature of this category is that it can be constructed just from \(W\) (and a reflection representation \(\mathfrak{h}\)). Moreover, it can be defined naturally for _any_ Coxeter group (when choosing an appropriate reflection representation \(\mathfrak{h}\)) and it yields a categorification of the Hecke algebra \(H_{W}\) generalizing (1.1). It is a deep theorem by Elias and Williamson [20] that the indecomposable objects \(\{B_{w}\}_{w\in W}\) of \(\mathcal{H}_{W}\) categorify the Kazhdan-Lusztig basis as before. We should thus think of \(\mathcal{H}_{W}\) as the "Hecke category" of spetses of type \(W\). This category provides us with a kind of "categorical geometry" even if there is no reductive group.
For a two-sided cell \(c\) in a finite Coxeter group \(W\), Lusztig [48, SS10] defined an asymptotic Hecke category \(\mathcal{H}^{c}_{W}\) and a monoidal structure on it, mimicking that of the asymptotic geometric Hecke category \(\mathcal{H}^{c}_{G}\). Lusztig then took its
Drinfeld center
\[\mathcal{U}_{W}^{c}\coloneqq\mathcal{Z}(\mathcal{H}_{W}^{c})\;. \tag{1.7}\]
This should be considered as a category of "unipotent character sheaves" on the spetses of type \(W\). Consequently, it should satisfy several properties. First of all, \(\mathcal{U}_{W}^{c}\) should be a modular tensor category as conjectured by Lusztig [48, SS10]. This is indeed true and was proven by Elias-Williamson [22]. We are thus down to the following conjecture.
_Conjecture 1.1_ (Lusztig [48, SS10]).: Let \(W\) be a non-crystallographic finite Coxeter group and let \(c\) be a cell in \(W\). The \(S\)-matrix of \(\mathcal{U}_{W}^{c}\) is equal to the Fourier matrix \(F_{W}^{c}\) from [45, 51]. In particular, the number of simple objects of \(\mathcal{U}_{W}^{c}\) is equal to the number of unipotent characters supported in \(c\).
The conjecture provides a _uniform_ categorification--and thus deeper meaning--of the ad hoc constructions of unipotent characters and the Fourier transform for non-crystallographic finite Coxeter groups. Moreover, the Fourier transform matrix for the big cell for \(H_{4}\) given by Malle [51] is not yet known to be an \(S\)-matrix of a modular tensor category: the conjecture provides, for the first time, a precise candidate.
### Results in this paper
First, we note that the asymptotic Hecke category \(\mathcal{H}_{W}^{c}\) is _multi_-fusion in the language of [23], see Section 2.2. It thus has a component fusion subcategory \(\mathcal{H}_{W}^{h}\), corresponding to a diagonal \(H\)-cell \(h\) in \(c\), see Section 3.3. An elementary but crucial observation is that the centers of \(\mathcal{H}_{W}^{h}\) and \(\mathcal{H}_{W}^{c}\) are equivalent, see Equation 3.9 and the general Proposition 3.8. We can thus work with \(\mathcal{H}_{W}^{h}\), which is simpler.
We show in Section 4 (see Theorem 4.22) that Conjecture 1.1 holds for dihedral groups. The key is to identify \(\mathcal{H}_{W}^{h}\) with the even part of the Verlinde category and to notice that the fusion data of the center of the latter categories are already in the literature. To be more precise, while the asymptotic Hecke algebra can be seen directly to be isomorphic to the Grothendieck ring of the even part of the Verlinde category, written \(\operatorname{Ad}(\mathcal{C}_{n})\), we need more known results to see that this algebra is not categorified by a different category, see Section 4.3. This allows us to compute with the asymptotic Hecke category without needing the category of Soergel bimodules. The center of \(\operatorname{Ad}(\mathcal{C}_{n})\) has also been computed in the literature but without any connections to our setting. We describe the computation, give small examples, and show how its \(S\)-matrix coincides with the Fourier matrix by Lusztig.
By similar means we confirm Conjecture 1.1 for some (we cannot resolve all) cells of \(H_{3}\) and \(H_{4}\) in Section 5. In some cells we still have two different options for the categorification and we point out which one is the "right" one assuming Conjecture 1.1 holds. Only the middle cell in type \(H_{4}\), the one of \(a\)-value \(6\), remains a complete mystery we cannot resolve yet.
Finally, we note that the asymptotic Hecke category can also be constructed for arbitrary (not necessarily finite) Coxeter groups and a finite Kazhdan-Lusztig cell. In Section 6 we study infinite Coxeter groups having a finite cell
of \(a\)-value equal to or less than \(2\). We describe the corresponding asymptotic Hecke category and its center. Even though we did not find new fusion or modular tensor categories in these examples, we expect some new examples will arise from the setting of asymptotic Hecke categories.
We begin in Section 2 with a detailed review of the construction of the asymptotic Hecke category. In Section 3 we discuss generalities about its center and summarize known results in the Weyl group case. For all except \(3\) so-called exceptional cells \(c\) in type \(E_{7}\) and \(E_{8}\) the asymptotic Hecke category is known to be of the form \(\operatorname{Coh}_{G_{c}}(X_{c}\times X_{c})\) for some group \(G_{c}\) and a \(G_{c}\)-set \(X_{c}\). The possibilities for \((G_{c},X_{c})\) are due to Lusztig and listed in Example 3.10. We show in Remark 3.11 how the Drinfeld center of the multifusion category \(\operatorname{Coh}_{G_{c}}(X_{c}\times X_{c})\) is equivalent to that of \(\operatorname{Vec}(G_{c})\) using the method of \(H\)-reduction as described in Section 3.3. The \(S\)-matrices are listed in Corollary 3.12 and we get the same matrices as the combinatorially computed results of Lusztig in [38].
### Acknowledgements
We would like to thank Ben Elias, Daniel Tubbenhauer, and Geordie Williamson for many helpful discussions on this topic. The first author further thanks Ben Elias for his hospitality during a two-month stay at the University of Oregon last summer. We would furthermore like to thank Fabian Maurer for the development of software for computing the center of a fusion category [52, 53] which helped us find some key ideas. We would like to thank Gunter Malle for comments on a preliminary version of this paper. This work was supported by the SFB-TRR 195 'Symbolic Tools in Mathematics and their Application' of the German Research Foundation (DFG).
###### Contents
* 1 Introduction
* 2 The asymptotic Hecke category
* 3 The center of the asymptotic Hecke category
* 4 The dihedral case
* 5 The types \(H_{3}\) and \(H_{4}\)
* 6 Examples in infinite Coxeter groups
## 2. The asymptotic Hecke category
We describe the construction of the asymptotic, or truncated, Hecke category categorifying the asymptotic Hecke algebra associated to a two-sided Kazhdan-Lusztig cell of a Coxeter group. The main construction is due to Lusztig [48, Section 10].
We start with the construction of the asymptotic Hecke algebra and then go into detail into the construction of the asymptotic Hecke category.
### The asymptotic Hecke algebra
We use the notation of [19]. For a Coxeter system \((W,S)\) we denote by \(H_{W}\) the _(equal parameter) Hecke algebra_, a unital associative algebra over \(A\coloneqq\mathbb{Z}[v^{\pm 1}]\) generated by elements \(\delta_{w}\) for \(w\in W\) and subject to the quadratic relation \((\delta_{s}-v^{-1})(\delta_{s}+v)=0\) and the braid relations. The Kazhdan-Lusztig basis is denoted by \(\{b_{w}\mid w\in W\}\subseteq H_{W}\), and we write
\[b_{x}=\delta_{x}+\sum_{y<x}h_{y,x}\delta_{y} \tag{2.1}\]
with the Kazhdan-Lusztig polynomials \(h_{y,x}\in v\mathbb{Z}[v]\). Furthermore, we define polynomials \(h_{x,y,z}\) in \(A\) such that
\[b_{x}b_{y}=\sum_{z\in W}h_{x,y,z}b_{z}. \tag{2.2}\]
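As the simplest illustration, consider type \(A_{1}\), i.e. \(W=\{e,s\}\). Here the quadratic relation alone determines all structure constants:

\[b_{s}=\delta_{s}+v\delta_{e},\qquad b_{s}b_{s}=\delta_{s}^{2}+2v\delta_{s}+v^{2}\delta_{e}=(v+v^{-1})\delta_{s}+(1+v^{2})\delta_{e}=(v+v^{-1})b_{s},\]

so that \(h_{s,s,s}=v+v^{-1}\) and \(h_{s,s,e}=0\).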
We write \(z\leftarrow_{L}y\) if there exists an \(x\in W\) such that \(h_{x,y,z}\neq 0\). We extend this relation to a preorder \(<_{L}\) and call an equivalence class with respect to \(<_{L}\) a _left_ or _\(L\)-(Kazhdan-Lusztig) cell_. Similarly, we define \(<_{R}\) for _right_ or _\(R\)-cells_. Finally, let \(x<_{J}y\) be the extension of the relation \(x\leftarrow_{J}y\), which means \(x\leftarrow_{L}y\) or \(x\leftarrow_{R}y\), and let the equivalence classes of \(<_{J}\) be called \(J\)- or _two-sided_-cells. These relations are due to Green [29] for monoids and have been extended to algebras and categories. On \(W\) we define the _\(a\)-function_
\[a:W\to\mathbb{N}\cup\{\infty\} \tag{2.3}\]
to send \(z\in W\) to the smallest integer \(a(z)\in\mathbb{N}\) such that
\[v^{a(z)}h_{x,y,z}\in\mathbb{Z}[v]\text{ for all }x,y\in W, \tag{2.4}\]
or to infinity if no such integer exists. It is conjectured that no case with an infinite value occurs, see [9, Section 14.2 and 15]. We will only consider _bounded_ Coxeter groups, i.e. of finite \(a\)-value. For any \(x,y,z\in W\) let now
\[\gamma_{x,y,z^{-1}}\coloneqq h_{x,y,z}v^{a(z)}(0)\in\mathbb{Z}, \tag{2.5}\]
be the coefficient of the \(v^{-a(z)}\)-term in \(h_{x,y,z}\).
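In the type \(A_{1}\) example above, \(h_{s,s,s}=v+v^{-1}\) gives \(a(s)=1\), and hence

\[\gamma_{s,s,s}=\big(h_{s,s,s}\,v^{a(s)}\big)(0)=\big(v^{2}+1\big)(0)=1,\]

anticipating the relation \(j_{s}j_{s}=j_{s}\) of Example 2.4 below.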
Using the coefficients \(\gamma_{x,y,z}\) one defines a new ring structure on the set \(\langle j_{w}\mid w\in W\rangle_{\mathbb{Z}}\), see [42]. The _asymptotic Hecke algebra_ or _\(J\)-ring_\(J\coloneqq J_{W}\) is the free abelian group generated by \(\{j_{x}\mid x\in W\}\) subject to the relations
\[j_{x}j_{y}=\sum_{z\in W}\gamma_{x,y,z^{-1}}j_{z}, \tag{2.6}\]
for all \(x,y,z\). We can say more on the properties of the ring structure.
**Definition 2.1** ([23, Section 3.1]).: Let \(R\) be a unital ring which is free as a \(\mathbb{Z}\)-module. We call \(R\) a _based ring_ if, for a fixed basis \(B=\{b_{i}\}_{i\in I}\) of \(R\), we have:
1. \(b_{i}b_{j}=\sum_{k\in I}c_{i,j}^{k}b_{k}\), for \(c_{i,j}^{k}\in\mathbb{Z}_{\geq 0}\),
2. The unit \(1\in R\) is a non-negative linear combination in the basis. We denote by \(I_{0}\) all \(b_{i}\) occurring in the decomposition of \(1\). Write \(\tau:R\to\mathbb{Z}\) for the group homomorphism sending \(b_{i}\) to \(1\) if \(i\in I_{0}\) and to \(0\) otherwise.
3. There is an involution \(i\mapsto i^{*}\) on \(I\) such that the induced map \(R\to R,\ kb_{i}\mapsto kb_{i^{*}}\) is an anti-involution on \(R\) and \(\tau(b_{i}b_{j})\) is \(0\) for \(j\neq i^{*}\) and \(1\) if \(j=i^{*}\) (this means that in \(b_{i}b_{i^{*}}\) exactly one basis summand of the unit occurs, exactly once).
If the basis is finite, i.e. \(R\) is of finite rank, we call it a _multifusion ring_. If furthermore \(1\in B\) we call it a _fusion ring_.
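Two standard examples may help to fix the definition: the group ring \(\mathbb{Z}[G]\) of a finite group \(G\), with basis \(G\) and \(g^{*}=g^{-1}\), is a fusion ring, and so is the ring with basis \(\{1,x\}\) determined by

\[x\cdot x=1+x,\qquad x^{*}=x,\qquad\tau(x\cdot x)=\tau(1+x)=1,\]

the Grothendieck ring of the Fibonacci fusion category; closely related rings of Verlinde type appear for the dihedral groups in Section 4.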
_Remark 2.2_.: For finite \(W\) we always have \(\gamma_{x,y,z}\geq 0\). If \(W\) is crystallographic this was shown in [39, Lemma 5.2(d)] and [42, 1.1(e)]. For the non-crystallographic types \(H_{3},H_{4}\) and \(I_{2}(m)\) there have been explicit calculations, see [14]. The \(J\)-ring of a finite Coxeter group is a multifusion ring, the unit element is of the form \(\sum_{t\in D}j_{t}\) for \(D\) the set of _Duflo involutions_ in \(W\). It is furthermore conjectured that the \(J\)-ring is still locally unital in general bounded Coxeter groups, i.e. the formal sum \(\sum_{t\in D}j_{t}\) acts in the same way an identity would. We refer to [47, Section 13.4, Conjecture 14.2 and Section 18.3].
_Remark 2.3_.: By [42, Corollary 1.9] we have that \(\gamma_{x,y,z}\neq 0\) implies that \(x,y,z\) lie in the same two-sided cell \(c\subset W\). The \(J\)-ring therefore decomposes into a direct sum, i.e. if we denote by \(J_{c}\coloneqq\langle j_{x}\mid x\in c\rangle\) the restriction of \(J_{W}\) to a two-sided cell \(c\subseteq W\) we have a decomposition
\[J_{W}\simeq\bigoplus_{c\subset W}J_{c}. \tag{2.7}\]
We will call \(J_{c}\) the _asymptotic Hecke algebra_ associated to \(c\). Any such summand \(J_{c}\) itself is a multifusion ring with the unit being the sum of all Duflo involutions lying in the cell \(c\).
### Construction of the asymptotic Hecke category
Let now \(\mathcal{H}_{W}\) be the category of Soergel bimodules associated to a given Hecke algebra \(H_{W}\), see [19].
In [22] Elias and Williamson showed that the monoidal product of the asymptotic Hecke category described in [48, Section 10] by Lusztig is rigid. This implies that the asymptotic Hecke category for a two-sided cell with finitely many left cells is multifusion, see Remark 3.3. For finite Weyl groups we list in Example 3.10 for which cases a description of the asymptotic Hecke category is known.
We go through their computations and motivate the construction of the asymptotic Hecke category in parallel to that of the asymptotic Hecke algebra. One key observation of Elias and Williamson is that the direct sum decomposition of Soergel bimodules is not canonical; we would therefore run into problems if we naively tried to define an asymptotic monoidal product by just taking the ordinary monoidal product and passing to the lowest graded summand. Following [48, Section 10] one can define a canonical direct sum decomposition using the perverse filtration on Soergel bimodules.
**Example 2.4**.: This is seen in [22, Example 2.1]. For a simple reflection \(s\in S\) and \(B_{s}\in\mathcal{H}_{W}\) the Bott-Samelson bimodule corresponding to \(s\), we have
\(B_{s}\otimes B_{s}\simeq B_{s}(+1)\oplus B_{s}(-1)\). In the Hecke algebra we have accordingly \(b_{s}b_{s}=(v+v^{-1})b_{s}\). The \(a\)-value of \(s\) is \(1\) and in the \(J\)-ring this implies \(j_{s}j_{s}=j_{s}\).
One would like to find morphisms inside \(\mathcal{H}_{W}\) from \(B_{s}(-1)\) to \(B_{s}\otimes B_{s}\) and vice versa to construct a categorification of the \(J\)-ring. However, by Soergel's Hom formula, while the graded rank of the space \(\operatorname{Hom}_{\mathcal{H}_{W}}(B_{s}\otimes B_{s},B_{s}(-1))\) is \(v^{-1}+2v+v^{3}\) and therefore a projection to \(B_{s}(-1)\) is unique up to scalar, the inclusion is not unique. Two different direct sum decompositions can be found in [19, Exercise 8.39 and 8.42].
This means that \(B_{s}(-1)\) is not a canonical subobject and one cannot directly replicate the multiplication of the \(J\)-ring on the category level.
This shows that the lowest graded summand of a Soergel bimodule is not canonical. While the multiplication in the \(J\)-ring can be defined by ignoring all higher gradings, we cannot just define a monoidal product in the same way. The main result of [22] was to show relative hard Lefschetz for Soergel bimodules, as this allows one to talk about certain "canonical" submodules of Soergel bimodules. To be more precise, we call a Soergel bimodule \(B\)_perverse_ if it is isomorphic to a direct sum of Bott-Samelson bimodules without shifts. For an arbitrary Soergel bimodule \(B\) the _perverse filtration_ is of the form
\[\ldots\subset\tau_{\leq i}B\subset\tau_{\leq i+1}B\subset\ldots, \tag{2.8}\]
where \(\tau_{\leq i}B\) lies in the full subcategory of Soergel bimodules only generated by objects \(B_{x}(m)\) for \(m\geq-i\). Similarly, we consider \(B/\tau_{\leq i}B\) which lies in the full subcategory of Soergel bimodules only generated by objects \(B_{x}(m)\) for \(m<-i\). We then write \(H^{i}(B)\coloneqq(\tau_{\leq i}B/\tau_{<i}B)(i)\) for the _perverse cohomology_ of \(B\).
**Theorem 2.5** ([22, Theorem 1.2]).: _Fix a Coxeter system \((W,S)\) and let \(\mathcal{H}_{W}\) be the associated category of Soergel bimodules. If \(\rho\in R\coloneqq B_{id}\in\mathcal{H}_{W}\) is dominant regular (i.e. \(\partial_{s}(\rho)>0\) for all \(s\in S\)) and \(x,y\in W\) are arbitrary, the morphism_
\[\eta:B_{x}\otimes_{R}B_{y}\to B_{x}\otimes_{R}B_{y}(2),\ b\otimes b^{\prime} \mapsto b\rho\otimes b^{\prime}=b\otimes\rho b^{\prime} \tag{2.9}\]
_induces an isomorphism_
\[\eta^{i}:H^{-i}(B_{x}\otimes_{R}B_{y})\xrightarrow{\sim}H^{i}(B_{x}\otimes_{ R}B_{y}) \tag{2.10}\]
_for all \(i\)._
For an arbitrary simple reflection \(s\in S\) this theorem then gives an isomorphism \(B_{s}(-1)\simeq H^{-1}(B_{s}\otimes_{R}B_{s})\simeq H^{1}(B_{s}\otimes_{R}B_{s} )\simeq B_{s}(+1)\). We can therefore use the canonical projection map to the lowest graded summand and the canonical inclusion map from the highest graded summand to define the _asymptotic Hecke category_. In general any object lying over \(j_{z}\) for a summand of \(j_{x}j_{y}\) should not only correspond to the lowest graded part of \(B_{x}B_{y}\), but also to the highest. With relative hard Lefschetz we can identify these by \(\eta^{i}\). The maps corresponding to the tensor product should look like:
(2.11)
This motivates the definition of a monoidal category categorifying \(J_{c}\). The following is a compression of the construction of [22, Section 5].
**Construction 2.6**.: Fix a two-sided Kazhdan-Lusztig cell \(c\) with \(a\)-value \(i\) in a Coxeter system \((W,S)\).
* We define the subcategory \(\mathcal{H}_{W}^{\leq c}\) as the full subcategory of \(\mathcal{H}_{W}\) generated by objects \(B_{x}\) such that \(x<_{J}c\). Let \(\mathcal{I}^{c}\) denote the tensor ideal of morphisms in \(\mathcal{H}_{W}\) factoring over objects of \(\mathcal{H}_{W}^{<c}\). We define the quotient category by \((\mathcal{H}_{W}^{c})^{\prime}\coloneqq\mathcal{H}_{W}/\mathcal{I}^{c}\).
* Inside \((\mathcal{H}_{W}^{c})^{\prime}\) we restrict to the full graded additive subcategory \(\tilde{\mathcal{H}}_{W}^{c}\) generated only by objects \(B_{x}\) for \(x\in c\).
* We now enrich the grading free full subcategory \(\mathcal{H}_{W}^{c}\) of \(\tilde{\mathcal{H}}_{W}^{c}\) (i.e. the subcategory generated only by \(B_{x}\) without shifts) with a new monoidal product by using the \(i\)-th perverse cohomology: (2.12) \[B_{x}\star B_{y}\coloneqq H^{-i}(B_{x}B_{y})\in\mathcal{H}_{W}^{c}.\]
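Concretely, Construction 2.6 resolves the issue of Example 2.4: for the two-sided cell \(\{s\}\) of \(a\)-value \(1\) in type \(A_{1}\), the truncated product extracts the problematic summand canonically,

\[B_{s}\star B_{s}=H^{-1}(B_{s}B_{s})=\big(B_{s}(1)\big)(-1)\simeq B_{s},\]

categorifying \(j_{s}j_{s}=j_{s}\) without ever choosing a splitting of \(B_{s}B_{s}\).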
**Remark 2.7**.: The quotient construction of \((\mathcal{H}_{W}^{c})^{\prime}\) is necessary to account for the fact that in the construction of the \(J\)-ring one discards any summand of \(j_{x}j_{y}\) lying in lower cells. The perverse filtration of \(\mathcal{H}_{W}\) descends to \(\tilde{\mathcal{H}}_{W}^{c}\) and \(\mathcal{H}_{W}^{c}\) and any \(H^{-i}(B_{x}B_{y})\) contains no summands of lower cells.
For the monoidal structure on \(\mathcal{H}_{W}^{c}\) we use inclusions and projections as in (2.11). The associators afforded by the product are in general non-trivial; we will see a concrete example in type \(I_{2}(n)\) in Example 4.4.
## 3. The center of the asymptotic Hecke category
We follow the categorical notation of [23]. We recall the definition of a multifusion category and list some properties on the Drinfeld center of multifusion categories. This section will motivate that one can reduce the study of the asymptotic Hecke category of a \(J\)-cell \(c\) to that of a so called \(H\)-cell \(h\subset c\), which is considerably smaller.
### Multifusion categories
Let \(\Bbbk\) be an algebraically closed field. Outside this section we assume that all categories we consider are \(\Bbbk=\mathbb{C}\)-linear.
**Definition 3.1**.: _[_23_, Section 4.1]_ A category \(\mathcal{C}\) is _multifusion_ if it is a locally finite \(\Bbbk\)-linear abelian rigid monoidal and semisimple category, such that the bifunctor \(\otimes:\mathcal{C}\times\mathcal{C}\to\mathcal{C}\) is bilinear on morphisms and we have only a finite number of simple objects. If furthermore \(\operatorname{End}_{\mathcal{C}}(\mathds{1})\simeq\Bbbk\) for \(\mathds{1}\) the monoidal unit, we call \(\mathcal{C}\) a _fusion category_.
**Example 3.2**.: Examples for fusion categories are the category of \(G\)-graded finite dimensional \(\Bbbk\)-vector spaces \(\operatorname{Vec}(G)\coloneqq\operatorname{Vec}_{\Bbbk}(G)\) or \(\operatorname{Rep}(G)\coloneqq\operatorname{Rep}_{\Bbbk}(G)\), the category of representations of \(G\) over \(\Bbbk\), for a finite group \(G\) whose order is coprime to the characteristic of \(\Bbbk\).
_Remark 3.3_.: Let \(K(\mathcal{C})\) denote the Grothendieck ring of a multifusion category \(\mathcal{C}\). By definition, it is a multifusion ring with the equivalence classes of the simple objects as basis elements.
By [22, Section 5.2] the asymptotic Hecke category is rigid and pivotal. We have seen in Remark 2.2 that the asymptotic Hecke algebra \(J_{c}\) is multifusion if \(c\) is finite. Therefore, \(\mathcal{H}^{c}_{W}\) is a multifusion category. The sum \(\bigoplus_{d\in D}B_{d}\) for \(D\) the set of Duflo involutions is then the unit of \(\mathcal{H}^{c}_{W}\).
By [23, Theorem 4.3.1] in a multifusion category \(\mathcal{C}\) the space \(\operatorname{End}_{\mathcal{C}}(\mathds{1})\) is always a semisimple algebra, i.e. isomorphic to a direct sum of finitely many copies of \(\Bbbk\). We can therefore write \(\mathds{1}=\bigoplus_{i\in I}\mathds{1}_{i}\), for \(\mathds{1}_{i}\) non-isomorphic indecomposable objects.
**Definition 3.4**.: Let \(\mathcal{C}\) be a multifusion category and let \(\mathds{1}=\bigoplus_{i\in I}\mathds{1}_{i}\) be a decomposition of the unit into indecomposable objects. For \(i,j\in I\) we define the _component subcategory_\(\mathcal{C}_{ij}\coloneqq\mathds{1}_{i}\otimes\mathcal{C}\otimes\mathds{1}_{j}\) to be the full subcategory of \(\mathcal{C}\) generated by all objects of the form \(\mathds{1}_{i}\otimes X\otimes\mathds{1}_{j}\).
As abelian categories this gives a decomposition
\[\mathcal{C}\simeq\bigoplus_{i,j\in I}\mathcal{C}_{i,j}, \tag{3.1}\]
the monoidal product maps \(\mathcal{C}_{ij}\times\mathcal{C}_{jk}\) into \(\mathcal{C}_{ik}\) and the duals of \(\mathcal{C}_{ij}\) lie in \(\mathcal{C}_{ji}\), see [23, Remark 4.3.4]. Any \(\mathcal{C}_{ii}\) is then also a fusion category with \(\mathds{1}_{i}\) as the monoidal unit. We will see in the next section that the Drinfeld center of a multifusion category is equivalent to that of the fusion subcategories. This will be applied in Section 3.3 to the asymptotic Hecke category.
### The Drinfeld center of multifusion categories
We recall the definition of the Drinfeld center, see [23, Definition 7.13.1].
**Definition 3.5**.: Let \(\mathcal{C}\) be a monoidal category. The _center_\(\mathcal{Z}(\mathcal{C})\) is a category with objects \((Z,\gamma)\) where \(Z\in\mathcal{C}\) and \(\gamma\) is a family of natural morphisms \(\gamma_{X}:X\otimes Z\to Z\otimes X\) for all \(X\in\mathcal{C}\) satisfying the hexagon axiom.
Most properties of \(\mathcal{C}\) transfer to \(\mathcal{Z}(\mathcal{C})\). For example the center is always monoidal, and it is also fusion if \(\mathcal{C}\) is, see [23, Theorem 9.3.2]. The Drinfeld center \(\mathcal{Z}(\mathcal{C})\) is a special case of a _braided monoidal category_. This is a monoidal category \(\mathcal{D}\) where every object \(X\in\mathcal{D}\) affords a braiding \(c_{X,-}:X\otimes-\to-\otimes X\) satisfying the hexagon axiom. The following definition works analogously for braided categories.
**Definition 3.6**.: Let \(\mathcal{C}\) be a fusion category and \(\mathcal{Z}(\mathcal{C})\) its Drinfeld center. For \((Z_{i},\gamma^{i})_{i}\) a complete list of all simple objects of \(\mathcal{Z}(\mathcal{C})\) we define the _\(S\)-matrix_
of \(\mathcal{Z}(\mathcal{C})\) to be
\[S\coloneqq(\operatorname{tr}(\gamma_{X_{j}}^{i}\circ\gamma_{X_{i}}^{j}))_{i,j}, \tag{3.2}\]
where \(\operatorname{tr}\) denotes the trace of an endomorphism \(f:X\to X\), i.e. the element in \(\Bbbk\) corresponding to \(f\) after applying the evaluation and coevaluation, see [23, Section 8.13].
**Example 3.7**.: Let \(G\) be a finite group. The Drinfeld center of \(\operatorname{Vec}(G)\) is completely described, see [23, Example 4.15.4]. It is \(\mathcal{Z}(\operatorname{Vec}(G))\simeq(\operatorname{Vec}(G))^{G}\), the category of \(G\)-equivariant \(G\)-graded vector spaces where \(G\) acts on \(\operatorname{Vec}(G)\) by conjugation. The simple objects are in correspondence to the set of pairs \((C,V)\), where \(C\) is a conjugacy class of \(G\) and \(V\) is a simple representation up to conjugacy of the stabilizer subgroup of \(C\).
For \(G=S_{3}\) this gives for example 8 simple objects in the center, 3 lying over the trivial conjugacy class, 3 over the conjugacy class of the 3-cycle and 2 over the conjugacy class of the 2-cycle. By [23, Example 8.13.6] the \(S\)-matrix is
\[S_{(C,V),(C^{\prime},V^{\prime})}=\frac{|G|}{|C_{G}(a)||C_{G}(a^{\prime})|} \sum_{g\in G(a,a^{\prime})}\operatorname{tr}_{V}(ga^{\prime}g^{-1}) \operatorname{tr}_{V^{\prime}}(g^{-1}ag), \tag{3.3}\]
where \(a\in C,a^{\prime}\in C^{\prime}\) and \(G(a,a^{\prime})=\{g\in G\mid aga^{\prime}g^{-1}=ga^{\prime}g^{-1}a\}\).
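As a quick sanity check on these counts, the number of simple objects of \(\mathcal{Z}(\operatorname{Vec}(G))\) equals the number of orbits of commuting pairs \((a,b)\in G\times G\) under simultaneous conjugation, since such orbits are in bijection with pairs of a conjugacy class and a conjugacy class (equivalently an irreducible representation) of its centralizer. The following small script is purely illustrative (the function names and the encoding of permutations are ad hoc, not taken from any cited source); it recovers the \(8\) simple objects for \(S_{3}\) mentioned above, as well as the counts \(21\) and \(39\) for \(S_{4}\) and \(S_{5}\) appearing in Corollary 3.12.

```python
# Illustrative sketch: simple objects of Z(Vec(G)) correspond to pairs
# (conjugacy class of g, irreducible representation of C_G(g)), i.e. to orbits
# of commuting pairs (a, b) in G under simultaneous conjugation.
from itertools import permutations

def compose(p, q):                      # (p . q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def count_simples_of_center(G):
    """Number of orbits of {(a, b) : ab = ba} under g.(a, b) = (gag^-1, gbg^-1)."""
    commuting = [(a, b) for a in G for b in G if compose(a, b) == compose(b, a)]
    seen, orbits = set(), 0
    for pair in commuting:
        if pair in seen:
            continue
        orbits += 1
        for g in G:
            gi = inverse(g)
            seen.add((compose(compose(g, pair[0]), gi),
                      compose(compose(g, pair[1]), gi)))
    return orbits

for n in (3, 4, 5):
    G = list(permutations(range(n)))    # the symmetric group S_n
    print(f"S_{n}: {count_simples_of_center(G)} simple objects in Z(Vec(S_{n}))")
# expected output: 8, 21, 39
```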
For a multifusion category \(\mathcal{C}\) we can even reduce the center to the center of the fusion subcategories \(\mathcal{C}_{ii}\) for \(i\in I\). We call \(\mathcal{C}\)_indecomposable_ if we cannot partition the set \(I\) into non-empty subsets \(I=J\coprod K\) such that for all \(j\in J\) and \(k\in K\) we have \(\mathcal{C}_{j,k}=0\). This means that one cannot write \(\mathcal{C}\) as a direct sum of multifusion categories. The center of a decomposable multifusion category is the direct sum of the centers of the summands; if, on the other hand, \(\mathcal{C}\) is indecomposable, we can express the center in terms of any fusion subcategory \(\mathcal{C}_{ii}\).
**Proposition 3.8** ([35]).: _For an indecomposable multifusion category \(\mathcal{C}\) with component fusion subcategories \(\mathcal{C}_{ii}\) for \(1\leq i\leq n\) we have_
\[\mathcal{Z}(\mathcal{C})\simeq\mathcal{Z}(\mathcal{C}_{ii}). \tag{3.4}\]
_Therefore, the center of an indecomposable multifusion category is fusion._
Idea of proof.: This is Theorem 2.5.1 in [35]. One can define the notion of a module category \(\mathcal{M}\) over a multifusion category. On the Grothendieck level this categorifies the notion of a module over a ring. Since inside \(\mathcal{C}\) the component subcategory \(\mathcal{C}_{ij}\) maps \(\mathcal{C}_{jk}\) into \(\mathcal{C}_{ik}\), one can regard \(\mathcal{C}_{ij}\) as a \((\mathcal{C}_{ii},\mathcal{C}_{jj})\)-bimodule category. Then the action of the component subcategories on each other extends to the following equation (here we use the relative Deligne tensor product, see [23, Section 1.11]):
\[\mathcal{C}_{ij}\boxtimes_{\mathcal{C}_{jj}}\mathcal{C}_{jl}\simeq\mathcal{C} _{il} \tag{3.5}\]
as \((\mathcal{C}_{ii},\mathcal{C}_{ll})\)-bimodules. Now define the \((\mathcal{C}_{ii},\mathcal{C})\)-bimodule and \((\mathcal{C},\mathcal{C}_{ii})\)-bimodule categories \(\mathcal{M}_{i}\coloneqq\bigoplus_{j}\mathcal{C}_{ij}\) and \(\mathcal{N}_{i}\coloneqq\bigoplus_{j}\mathcal{C}_{ji}\). They are what is called _invertible_ in [35], i.e.
\[\mathcal{M}_{i}\boxtimes_{\mathcal{C}}\mathcal{N}_{i}\simeq\mathcal{C}_{ii} \tag{3.6}\]
and
\[\mathcal{N}_{i}\boxtimes_{\mathcal{C}_{ii}}\mathcal{M}_{i}\simeq\mathcal{C}. \tag{3.7}\]
Now following [35, Proposition 2.4.4] these equations show
\[\mathcal{Z}(\mathcal{C}_{ii})\simeq\mathcal{Z}(\mathcal{C}).\qed \tag{3.8}\]
We apply this result in the next section to the asymptotic Hecke category.
### \(H\)-cell reduction
We reduce the computation of the center of the asymptotic Hecke category associated to a \(J\)-cell to that of an \(H\)-cell. This process, called _\(H\)-reduction_ or Clifford-Munn-Ponizovskii theory, has been applied to monoids, algebras and categories, see for example [50, Theorem 15].
Let \(W\) be a Coxeter group and \(c\subset W\) a \(J\)-cell. The decomposition of the asymptotic Hecke category \(\mathcal{H}^{c}_{W}\) into component subcategories comes from the decomposition of \(c\) into left and right cells. By Remarks 2.2 and 2.3 the monoidal unit is the direct sum of all objects lying over Duflo involutions in the \(J\)-ring: \(\mathds{1}_{\mathcal{H}^{c}_{W}}=\bigoplus_{1\leq i\leq n}B_{d_{i}}\), where \(\{d_{i}\}\) is the set of all Duflo involutions in \(c\). Let now \(c^{L}_{i}\) and \(c^{R}_{i}\) for \(1\leq i\leq n\) be a list of the left and right cells, such that \(d_{i}\in c^{L}_{i}\cap c^{R}_{i}\).
**Definition 3.9**.: We call a non-empty intersection of a left and a right cell an _\(H\)-cell_. If an \(H\)-cell contains a Duflo involution we call it _diagonal_.
Any diagonal \(H\)-cell with \(d_{i}\in h_{i}=c^{L}_{i}\cap c^{R}_{i}\subset W\) gives a component subcategory \(\mathcal{H}^{h}_{W}\coloneqq(\mathcal{H}^{c}_{W})_{ii}=B_{d_{i}}\otimes \mathcal{H}^{c}_{W}\otimes B_{d_{i}}\) of \(\mathcal{H}^{c}_{W}\). This is a fusion category and we have
\[\mathcal{Z}(\mathcal{H}^{h}_{W})\simeq\mathcal{Z}(\mathcal{H}^{c}_{W}) \tag{3.9}\]
by Proposition 3.8.
Hence, the computation of the Drinfeld center of the asymptotic Hecke category of a \(J\)-cell reduces to that of an \(H\)-cell.
### The centers of the asymptotic Hecke category for finite Weyl groups
For finite Weyl groups the asymptotic Hecke categories are known via classical geometric results. We give an overview of the classification and describe their centers and \(S\)-matrices using \(H\)-reduction.
By [38, Chapter 4] we have an assignment of a two-sided Kazhdan-Lusztig cell \(c\) in a Weyl group to a finite group \(G_{c}\) and an embedding \(c\to M(G_{c})\), where \(M(G_{c})\) consists of tuples \((g,V)\) for \(g\in G_{c}\) unique up to conjugacy and \(V\) a simple representation of the centralizer of \(g\).
For any left cell \(c^{L}\subseteq c\) there is further an association to a subgroup \(H_{c^{L}}\leq G_{c}\) in [43] such that the asymptotic Hecke algebra \(J_{h}\) associated to the
\(H\)-cell \(h\coloneqq c^{L}\cap(c^{L})^{-1}\) is, as a based (or multifusion) ring, conjectured to be isomorphic to \(K_{G_{c}}(G_{c}/H_{c^{L}}\times G_{c}/H_{c^{L}})\), which is short for the Grothendieck ring of \(\operatorname{Coh}_{G_{c}}(G_{c}/H_{c^{L}}\times G_{c}/H_{c^{L}})\), the category of \(G_{c}\)-equivariant coherent sheaves on the set \((G_{c}/H_{c^{L}})^{2}\). Furthermore, the conjecture [43, Conjecture 3.15], is extended to the claim that the disjoint union \(X\coloneqq\coprod_{c^{L}\subset c}G_{c}/H_{c^{L}}\) gives a multifusion ring isomorphic to \(K_{G_{c}}(X\times X)\simeq J_{c}\). This was proven by Lusztig himself in the case that \(G_{c}\) is abelian. A complete proof was achieved by Bezrukavnikov, Finkelberg and Ostrik in [6, Theorem 4]. For all but three exceptions in type \(E_{7}\) and \(E_{8}\), they even showed that \(J_{c}\) is categorified by \(\operatorname{Coh}_{G_{c}}(X\times X)\) for the same \(G_{c}\)-set \(X\). We call the \(3\) exceptions the _exceptional_ cells.
The results presented above are summarized in the following example:
**Example 3.10**.: The categories \(\mathcal{H}_{W}^{h}\) for a diagonal \(H\)-cell \(h=c^{L}\cap(c^{L})^{-1}\) in a non-exceptional two-sided Kazhdan-Lusztig cell \(c\) of a finite Weyl group \(W\) are given by \(\operatorname{Coh}_{G_{c}}(G_{c}/H_{c^{L}}\times G_{c}/H_{c^{L}})\), i.e. categories of equivariant coherent sheaves on a finite set, for the following possibilities of \(G_{c}\) and \(H_{c^{L}}\):
* In type \(A_{n}\) any \(H\)-cell has size \(1\); we always have \(G_{c}=\{\star\}=H_{c^{L}}\).
* In type \(B_{n}\) the size of an \(H\)-cell is \(2^{k}\) for some \(k^{2}+k\leq n\). The groups \(G_{c}\) and \(H_{c^{L}}\) are some elementary abelian \(2\)-groups.
* In type \(D_{n}\) we have the same result as in \(B_{n}\) except that \(k^{2}\leq n\).
* In type \(E_{6}\) to \(E_{8}\) the group \(G_{c}\) is a symmetric group on at most \(5\) letters \(S_{1},\ldots,S_{5}\).
* For \(G_{c}=S_{3}\) we can have \(H_{c^{L}}\in\{S_{1},S_{2},S_{3}\}\)
* For \(G_{c}=S_{4}\) we can have \(H_{c^{L}}\in\{S_{2},S_{2}\times S_{2},S_{3},D_{4},S_{4}\}\)
* For \(G_{c}=S_{5}\) we can have \(H_{c^{L}}\in\{S_{2},S_{2}\times S_{2},S_{3},D_{4},S_{2}\times S_{3},S_{4},S_{5}\}\)
* In type \(F_{4}\) we get \(G_{c}\leq S_{4}\) with the same possible subgroups for \(H_{c^{L}}\) as before
* In type \(G_{2}\) we get \(G_{c}\in\{S_{1},S_{3}\}\), where for \(G_{c}=S_{3}\) only \(H_{c^{L}}=S_{2}\) occurs.
_Remark 3.11_.: We want to motivate the connection of the set \(M(G_{c})\) to the center of \(\operatorname{Coh}_{G_{c}}(X\times X)\). The categories \(\mathcal{C}\coloneqq\operatorname{Coh}_{G_{c}}(X\times X)\) are multifusion. If \(X=\cup X_{i}\) is a disjoint union into transitive \(G_{c}\)-sets \(X_{i}\), the categories \(\mathcal{C}_{ij}\coloneqq\operatorname{Coh}_{G_{c}}(X_{i}\times X_{j})\) are component subcategories. By Proposition 3.8 the centers of \(\mathcal{C}_{ii}\) and \(\mathcal{C}\) are equivalent.
If one chooses \(X=G_{c}\) we have \(\operatorname{Coh}_{G_{c}}(X\times X)\simeq\operatorname{Vec}(G_{c})\). Therefore, the center \(\mathcal{Z}(\mathcal{C})\) is equivalent to the center of the category of \(G_{c}\)-graded vector spaces. Indeed, the set \(M(G_{c})\) has exactly the same description as the simple objects of the center \(\mathcal{Z}(\operatorname{Vec}(G_{c}))\simeq(\operatorname{Vec}(G_{c}))^{G_{c}}\) as seen in Example 3.7.
Furthermore, the \(S\)-matrix computed for \(\mathcal{Z}(\operatorname{Vec}(G))\) coincides with the pairing on \(M(G_{c})\) defined in [38, Equation 4.14.3], up to a constant factor.
The pairing is
\[\{(x,\sigma),(y,\tau)\}\coloneqq\sum_{g\in G_{c},xgyg^{-1}=gyg^{-1}x}\frac{ \operatorname{tr}(g^{-1}x^{-1}g,\tau)\operatorname{tr}(gyg^{-1},\sigma)}{|C_{G_ {c}}(x)||C_{G_{c}}(y)|}, \tag{3.10}\]
which is exactly the \(S\)-matrix of the Drinfeld center divided by \(|G|\). The factor \(|G|\) is equal to the square root of the categorical dimension of \(\mathcal{Z}(\operatorname{Vec}(G_{c}))\), hence the difference in formulas comes only from a convention on normalization. We will refer to the \(S\)-matrix divided by the square root of the categorical dimension as _normalized_, see [23, Section 8.14].
As the center of a monoidal category is itself monoidal we have a multiplication on \(\mathcal{Z}(\operatorname{Vec}(G_{c}))\), while we have no direct way to define a multiplication on \(M(G_{c})\). In [27, Example 7.2] Geck and Malle worked out a possible multiplication table for \(M(G_{c})\) in type \(G_{2}\), in which case we have \(G_{c}=S_{3}\). The monoidal product on \(\mathcal{Z}(\operatorname{Vec}(S_{3}))\) coincides with the table given by Geck and Malle.
**Corollary 3.12**.: _Let \(c\) be a non-exceptional two-sided Kazhdan-Lusztig cell in a finite Weyl group \(W\). The asymptotic Hecke category associated to \(c\) as well as the \(S\)-matrix of its center is one of the following cases:_
* _For any_ \(c\) _where a diagonal_ \(H\)_-cell has size_ \(1\) _we have_ \(\mathcal{H}^{c}_{W}=\operatorname{Coh}(X\times X)\) _where_ \(X\) _has the same cardinality as the number of left and right cells in_ \(c\)_. We have_ \(\mathcal{H}^{h}_{W}\simeq\operatorname{Coh}(\star)\simeq\operatorname{Vec}\) _for any diagonal_ \(H\)_-cell. The center_ \(\mathcal{Z}(\mathcal{H}^{c}_{W})\simeq\mathcal{Z}(\mathcal{H}^{h}_{W})\simeq \operatorname{Vec}\) _has size_ \(1\) _and the_ \(S\)_-matrix is_ (3.11) \[S_{c}=\left(1\right).\] _This happens for any cell in type_ \(A_{n}\) _and also for all cells containing only the trivial element. More examples of cells can be found in_ _[_49_, Section 8]___
* _If the asymptotic Hecke category of_ \(c\) _is isomorphic to_ \(\operatorname{Coh}_{G}(X\times X)\) _for an elementary abelian_ \(2\)_-group, i.e._ \(G_{c}\simeq(\mathbb{Z}/2\mathbb{Z})^{k}\)_, we have_ \(\mathcal{Z}(\mathcal{H}^{c}_{W})\simeq\mathcal{Z}(\operatorname{Vec}(G_{c})) \simeq\bigoplus_{1\leq i\leq k}\mathcal{Z}(\operatorname{Vec}(\mathbb{Z}/2 \mathbb{Z}))\)_. The center then contains_ \(4^{k}\) _simple objects and the_ \(S\)_-matrix is the_ \(k\)_-fold Kronecker product of the_ \(S\)_-matrix of_ \(\mathcal{Z}(\operatorname{Vec}(\mathbb{Z}/2\mathbb{Z}))\)_, which is_ (3.12) \[S(\mathcal{Z}(\operatorname{Vec}(\mathbb{Z}/2\mathbb{Z})))=\begin{pmatrix}1&1& 1&1\\ 1&1&-1&-1\\ 1&-1&1&-1\\ 1&-1&-1&1\end{pmatrix}.\] _Since the dimension of_ \(\operatorname{Vec}(\mathbb{Z}/2\mathbb{Z})\) _is_ \(2\) _the normalization agrees with the table in_ _[_38_, Section 4.15]___._
* _If the asymptotic Hecke category of_ \(c\) _is isomorphic to_ \(\operatorname{Coh}_{G}(X\times X)\) _with_ \(G=S_{3}\) _the center of the asymptotic Hecke category is_ \(\mathcal{Z}(\operatorname{Vec}(S_{3}))\)
_which has_ \(8\) _simple objects and the_ \(S\)_-matrix is_ (3.13) \[S(\mathcal{Z}(\operatorname{Vec}(S_{3})))=\begin{pmatrix}4&2&2&0&0&-2&-2&2\\ 2&1&1&-3&-3&2&2&2\\ 2&1&1&3&3&2&2&2\\ 0&-3&3&3&-3&0&0&0\\ 0&-3&3&-3&3&0&0&0\\ -2&2&2&0&0&4&-2&-2\\ -2&2&2&0&0&-2&-2&4\\ -2&2&2&0&0&-2&4&-2\end{pmatrix}.\] _Normalization by the dimension of_ \(\dim(\operatorname{Vec}(S_{3}))=6\) _gives the table of_ _[_38_, Section 4.15]__. Note that some rows have been left out in that source, they are permutations of some rows given._
* _If the asymptotic Hecke category of_ \(c\) _is isomorphic to_ \(\operatorname{Coh}_{G}(X\times X)\) _with_ \(G=S_{4}\) _the center of the asymptotic Hecke category is_ \(\mathcal{Z}(\operatorname{Vec}(S_{4}))\) _which has_ \(21\) _simple objects. To count this we have to compute all centralizer subgroups and count their irreducible representations. The matrix can also be found in_ _[_38_, Section 4.15]__._
* _If the asymptotic Hecke category of_ \(c\) _is isomorphic to_ \(\operatorname{Coh}_{G}(X\times X)\) _with_ \(G=S_{5}\) _the center of the asymptotic Hecke category is_ \(\mathcal{Z}(\operatorname{Vec}(S_{5}))\) _which has_ \(39\) _simple objects, again see_ _[_38_, Section 4.15]__._
### The exceptional cells in Weyl groups
In the three exceptional cases in type \(E_{7}\) and \(E_{8}\) we have a categorification of \(\mathcal{H}_{W}^{c}\) by [55, Theorem 1.1]:
**Theorem 3.13**.: _For an exceptional cell \(c\) in type \(E_{7}\) or \(E_{8}\), there is a tensor equivalence \(\mathcal{H}_{W}^{c}\simeq\operatorname{Vec}^{\omega}(\mathbb{Z}/2\mathbb{Z}) \boxtimes\operatorname{Coh}(Y^{\prime}\times Y^{\prime})\)._
Note that the category \(\mathcal{H}_{W}^{c}\) is denoted by \(\mathcal{P}_{c}\) in [55]. The set \(Y^{\prime}\) has cardinality \(512\) for the exceptional cell in type \(E_{7}\) and \(4096\) for the two exceptional cells in type \(E_{8}\). The cardinality of the set \(Y^{\prime}\) gives the number of left or right cells in \(c\); the \(H\)-cells have size \(2\) and are therefore categorified by \(\operatorname{Vec}^{\omega}(\mathbb{Z}/2\mathbb{Z})\), where \(\omega\) denotes the non-trivial twist.
**Corollary 3.14**.: _Let \(c\subset W\) be an exceptional cell in type \(E_{7}\) or \(E_{8}\). The center of the asymptotic Hecke category associated to \(c\) is \(\mathcal{Z}(\mathcal{H}^{c})\simeq\mathcal{Z}(\operatorname{Vec}^{\omega}( \mathbb{Z}/2\mathbb{Z}))\) for \(\omega\) a non-trivial twist. We have \(4\) simples in \(\mathcal{Z}(\mathcal{H}_{W}^{c})\) and the \(S\)-matrix is_
\[S(\mathcal{Z}(\operatorname{Vec}^{\omega}(\mathbb{Z}/2\mathbb{Z})))=\begin{pmatrix} 1&1&1&1\\ 1&1&-1&-1\\ 1&-1&-1&1\\ 1&-1&1&-1\end{pmatrix}. \tag{3.14}\]
## 4. The dihedral case
We give a complete description of the asymptotic Hecke category associated to a dihedral group. We will see that the category is known in the literature
as the even or adjoint part of the Verlinde category. The Drinfeld center of the Verlinde category and its adjoint are also known, we give the complete fusion data. The computations done in this section were supported by parallel works presented in [52, 53].
Let \(W=\langle s,t\rangle\) be the Coxeter group of type \(I_{2}(n)\), i.e. \(s^{2}=t^{2}=(st)^{n}=1\).
### The asymptotic Hecke algebra for dihedral groups
All data on \(h_{x,y,z}\) and the asymptotic Hecke algebra are known, see for example [14, Section 4]. We have always three two-sided cells for \(n\geq 3\).
The neutral element always forms its own two-sided cell \(c_{0}=\{1\}\) as \(x\leq_{K}1\) for all \(x\in W\) and \(K\in\{L,R,J\}\) since \(b_{x}=b_{1}b_{x}b_{1}\). The \(a\)-value is \(0\). Similarly, the longest word \(c_{n}=\{w_{0}\}\) for \(w_{0}=\underbrace{sts\ldots}_{n\text{ times}}\) forms its own two-sided cell of \(a\)-value \(n\) as \(b_{x}b_{w_{0}}=pb_{w_{0}}\) for some polynomial \(p\in\mathbb{Z}[v^{\pm 1}]\). Furthermore, any non-trivial word that has a unique reduced expression lies in the two-sided cell of \(a\)-value \(1\). These are all remaining elements \(c_{1}=\{s,st,sts,\ldots,t,ts,tst,\ldots\}\). The left and right cells are characterized by the right and left descending sets. We can visualize the cell structure in a box diagram, where the big boxes correspond to \(J\)-cells, columns to \(R\)-cells, rows to \(L\)-cells and small boxes to \(H\)-cells.
\[\begin{array}{|c|}\hline 1\\ \hline\end{array}\qquad\begin{array}{|c|c|}\hline s,sts,ststs,\ldots&ts,tsts,\ldots\\ \hline st,stst,\ldots&t,tst,\ldots\\ \hline\end{array}\qquad\begin{array}{|c|}\hline w_{0}\\ \hline\end{array} \tag{4.1}\]
The multiplication table of the \(J\)-ring can also be found in [14, Section 4]. The coefficients \(\gamma_{x,y,z}\) are either \(0\) or \(1\). Denote by \(s_{k}\) the unique word of length \(k\) starting in \(s\) and by \(t_{l}\) the unique word of length \(l\) starting with \(t\) for \(k,l<n\). The multiplication in the \(J\)-ring is then:
\[j_{s_{k}}j_{a_{l}}=\begin{cases}0&\text{if $k$ even and $a=s$ or $k$ odd and $a=t$}\\ \sum_{u=\max\{0,k+l-n\}}^{\min\{k,l\}-1}j_{s_{k+l-1-2u}}&\text{otherwise.}\end{cases} \tag{4.2}\]
We can read off directly that the neutral element is \(j_{s}+j_{t}\). In type \(I_{2}(5)\) this gives for example:
\[\begin{array}{c|cccc}\cdot&j_{s}&j_{st}&j_{sts}&j_{stst}\\ \hline j_{s}&j_{s}&j_{st}&j_{sts}&j_{stst}\\ j_{ts}&j_{ts}&j_{t}+j_{tst}&j_{ts}+j_{tsts}&j_{tst}\\ j_{sts}&j_{sts}&j_{st}+j_{stst}&j_{s}+j_{sts}&j_{st}\\ \hline\end{array} \tag{4.3}\]
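The table can be reproduced mechanically from the rule (4.2). The following script is an illustrative sketch only (the encoding of the basis elements \(j_{s_{k}},j_{t_{k}}\) as pairs of a starting letter and a length is ad hoc, not from any cited source); it prints the products of all basis elements ending in \(s\) with all basis elements starting in \(s\) for \(I_{2}(5)\).

```python
# Illustrative sketch: the J-ring multiplication (4.2) for the middle cell of I_2(n).
# A basis element j_{s_k} or j_{t_k} is encoded as (first_letter, length).

def other(a):
    return 't' if a == 's' else 's'

def j_mult(x, y, n):
    """Return the list of basis elements (with multiplicity) in j_x * j_y."""
    (a, k), (b, l) = x, y
    last_of_x = a if k % 2 == 1 else other(a)
    if last_of_x != b:                       # the product is zero otherwise
        return []
    return [(a, k + l - 1 - 2 * u) for u in range(max(0, k + l - n), min(k, l))]

def name(e):
    a, k = e
    return "j_" + "".join(a if i % 2 == 0 else other(a) for i in range(k))

if __name__ == "__main__":
    n = 5
    rows = [('s', 1), ('t', 2), ('s', 3), ('t', 4)]      # j_s, j_ts, j_sts, j_tsts
    cols = [('s', 1), ('s', 2), ('s', 3), ('s', 4)]      # j_s, j_st, j_sts, j_stst
    for r in rows:
        entries = [" + ".join(name(z) for z in j_mult(r, c, n)) or "0" for c in cols]
        print(name(r).ljust(8), "|", "  ".join(e.ljust(16) for e in entries))
```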
_Remark 4.1_ (Verlinde fusion rings).: The structure of the multiplication is similar to the Clebsch-Gordan rule for the monoidal products of \(\operatorname{U}(\mathfrak{sl}_{2})\) representations. We see this explicitly for an \(H\)-cell.
Denote the left and right cells by \(c_{s}^{L}\coloneqq\{s,ts,sts,\ldots\}\) and \(c_{t}^{L}\coloneqq\{t,st,tst,\ldots\}\) as well as \(c_{s}^{R}\coloneqq\{s,st,sts,\ldots\}\) and \(c_{t}^{R}\coloneqq\{t,ts,tst,\ldots\}\). Then the diagonal \(H\)-cells are \(h_{s}\coloneqq c_{s}^{L}\cap c_{s}^{R}=\{s,sts,ststs,\ldots\}\) and \(h_{t}\coloneqq c_{t}^{L}\cap c_{t}^{R}=\{t,tst,tstst,\ldots\}\).
Inside \(h_{s}\), for odd \(1\leq i,j<n\) with \(i+j\leq n\) we have by (4.2)
\[j_{s_{i}}j_{s_{j}}=j_{s_{|i-j|+1}}+j_{s_{|i-j|+3}}+\ldots+j_{s_{i+j-1}}, \tag{4.4}\]
while whenever \(i+j>n\) some of the bigger terms are truncated. We will explore these fusion rings in the next section, as their categorifications give the asymptotic Hecke category.
### Type \(A_{n}\)-fusion categories
**Definition 4.2**.: We say that a fusion category \(\mathcal{C}_{k}\) has fusion rules \(A_{k}\) if it has \(k\) simple objects, which we may label by \(X_{0},\ldots,X_{k-1}\), such that the fusion graph showing the monoidal product by \(X_{1}\) is the Dynkin diagram of type \(A_{k}\):
\[X_{0}\;\text{---}\;X_{1}\;\text{---}\;X_{2}\;\text{---}\;\cdots\;\text{---}\;X_{k-2}\;\text{---}\;X_{k-1} \tag{4.5}\]
This means that \(X_{1}\otimes X_{0}\simeq X_{1}\simeq X_{0}\otimes X_{1}\), \(X_{1}\otimes X_{k-1}\simeq X_{k-2}\simeq X_{k-1}\otimes X_{1}\) and \(X_{1}\otimes X_{i}\simeq X_{i-1}\oplus X_{i+1}\) for all \(1\leq i\leq k-2\).
_Remark 4.3_.: These categories are mentioned in [45, Section 2.2] under the name _Verlinde-Wess-Zumino-Witten_. An explanation can also be found in [23, Example 8.18.5].
We note that one can inductively show that the monoidal product is a truncated version of the monoidal product of \(\mathfrak{sl}_{2}\) representations. If \(V_{i}\) are the simple representations with \(V_{i}\otimes V_{j}\simeq V_{|i-j|}\oplus V_{|i-j|+2}\oplus\ldots\oplus V_{i+j}\) for any \(i,j\in\mathbb{N}\), then a specialization of Lusztig's quantum group \(\operatorname{U}_{q}(\mathfrak{sl}_{2})\) at a root of unity nullifies or truncates certain summands. This happens exactly when the quantum number corresponding to the root of unity is zero.
For example in a fusion category of type \(A_{3}\) we have \(X_{1}\otimes X_{2}\simeq X_{1}\), while for \(\mathfrak{sl}_{2}\)-representations we would have \(V_{1}\otimes V_{2}\simeq V_{1}\oplus V_{3}\). However, in the specialization seen in the next example one would have \([3+1]=0\) and hence \(V_{3}\) does not occur.
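The truncation can be checked numerically: with \(q=e^{\frac{l\pi i}{k+1}}\) the quantum numbers are \([m]=\frac{q^{m}-q^{-m}}{q-q^{-1}}\), and \([k+1]=0\). The following is a minimal illustrative sketch (not part of the construction; the helper name is ad hoc).

```python
# Illustrative sketch: quantum numbers [m] = (q^m - q^{-m})/(q - q^{-1}) at the
# root of unity q = exp(l*pi*i/(k+1)); the truncation happens because [k+1] = 0.
import cmath

def quantum(m, k, l=1):
    q = cmath.exp(1j * cmath.pi * l / (k + 1))
    return ((q**m - q**(-m)) / (q - q**(-1))).real   # the value is always real

k = 3                                                # the A_3 case of Remark 4.3
for m in range(1, k + 2):
    print(f"[{m}] = {quantum(m, k):+.4f}")
# prints [1]=1, [2]=sqrt(2), [3]=1 and [4]=0, so the summand V_3 is truncated away
```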
An overview of the categorical data of fusion categories with type \(A_{k}\) fusion rules can be found in [16]. All associators have been classified by [26].
**Example 4.4**.: For any natural number \(k\) the monoidal categories with fusion rules \(A_{k}\) are classified by \(l\in\mathbb{Z}/(k+1)\mathbb{Z}\) coprime to \(k+1\). We denote them by \(\mathcal{C}_{k}^{l}\).
The associator of \(\mathcal{C}_{k}^{l}\) is defined as follows. Let \(q\coloneqq e^{\frac{l\pi i}{k+1}}\) be a \(2(k+1)\)-th root of unity, then we set the quantum numbers to be \([0]^{l}\coloneqq 0,[1]^{l}\coloneqq 1\),
\([2]^{l}\coloneqq q+q^{-1}\), and inductively \([n]^{l}\coloneqq[2]^{l}[n-1]^{l}-[n-2]^{l}\). We define the _quantum factorial_ via \([m]^{l}!\coloneqq[1]^{l}[2]^{l}\cdot\ldots\cdot[m]^{l}\).
We say that a triple of natural numbers \((a,b,c)\) all smaller than \(k\) is \(k\)_-admissible_ if
\[m\coloneqq\frac{a+b-c}{2},\ n\coloneqq\frac{a+c-b}{2},\ p\coloneqq\frac{b+c-a }{2} \tag{4.6}\]
are also natural numbers and \(a+b+c\leq 2k-2\). This is equivalent to saying that \(X_{c}\) occurs as a summand of \(X_{a}\otimes X_{b}\).
The \(6j\)-symbols have been computed by Kauffman and Lins in [33]. For fixed \((a,b,c)\) we consider all numbers \(d,e,f\) such that \(X_{d}\) is a summand of \(X_{a}\otimes X_{b}\otimes X_{c}\) and \((a,b,f),(c,d,f),(a,d,e)\) and \((b,c,e)\) are \(k\)-admissible. Then the \(6j\)-symbol of \((X_{a}\otimes X_{b})\otimes X_{c}\to X_{a}\otimes(X_{b}\otimes X_{c})\) for the summand \(X_{f}\) of \(X_{a}\otimes X_{b}\) and \(X_{e}\) of \(X_{b}\otimes X_{c}\) has the form
\[\begin{cases}a&b&e\\ c&d&f\end{cases}=\frac{\mathcal{I}!(-1)^{e+1}[e+1]}{\mathcal{E}!\theta(a,d,e) \theta(b,c,e)}\sum_{n\leq s\leq N}\frac{(-1)^{s}[s+1]!}{\prod_{i}{[s-a_{i}]! \prod_{j}{[b_{j}-s]!}}}, \tag{4.7}\]
where
\[\theta(a,b,c)\coloneqq\frac{(-1)^{m+n+p}[m+n+p+1]![m]![n]![p]!}{[m+n]![m+p]![n+ p]!}, \tag{4.8}\]
and
\[\mathcal{I}!\coloneqq\prod_{i,j}{[b_{j}-a_{i}]!},\ \mathcal{E}!\coloneqq[a]![b]![c ]![d]![e]![f]!, \tag{4.9}\]
where
\[a_{1}\coloneqq\frac{a+d+e}{2},\ a_{2}\coloneqq\frac{b+c+e}{2},\ a_{3} \coloneqq\frac{a+b+f}{2},\ a_{4}\coloneqq\frac{c+d+f}{2}, \tag{4.10}\]
and
\[b_{1}\coloneqq\frac{b+d+e+f}{2},\ b_{2}\coloneqq\frac{a+c+e+f}{2},\ b_{3} \coloneqq\frac{a+b+c+d}{2} \tag{4.11}\]
and \(n\) is the maximum value of \(a_{i}\) and \(N\) the minimum of \(b_{j}\). If the exact choice of root of unity is not relevant we only write \(\mathcal{C}_{k}\).
_Remark 4.5_.: The computations of the \(6j\)-symbols in [33] have been done using the Temperley-Lieb algebra. In the definition of the Temperley-Lieb algebra one needs to choose a value for the evaluation of the loop. Exactly when one chooses the quantum number \([2]=q+q^{-1}\) coming from a \(2(k+1)\)-th root of unity \(q=e^{\frac{m\pi i}{k+1}}\) we land in the case of type \(A_{k}\) fusion categories. For the later calculations the exact choice of \(l\) is not relevant; everything is given in terms of quantum numbers.
### The adjoint part of type \(A_{k}\) fusion categories
In [15] the subcategory of \(\mathcal{C}_{n}\) generated by the even elements is called the _adjoint subcategory_. An explanation for this term can be found in [23, Section 3.6 and 4.14].
_Remark 4.6_.: For a based ring \(A\) with basis \(B=\{b_{i}\}\) we call the smallest subring \(A_{ad}\subset A\), such that all \(b_{i}b_{i}^{*}\) lie in \(A_{ad}\) the _adjoint subring_. For a fusion category \(\mathcal{C}\) we write \(\operatorname{Ad}(\mathcal{C})\) for the full fusion subcategory such that \(K(\operatorname{Ad}(\mathcal{C}))=K(\mathcal{C})_{ad}\) and call it the _adjoint subcategory_.
In \(\mathcal{C}_{n}\) all objects are self-dual, and any monoidal product \(X_{i}\otimes X_{i}\) decomposes into a sum of even summands \(X_{2j}\). This comes from the fact that \(\mathcal{C}_{n}\) is universally \(\mathbb{Z}/2\mathbb{Z}\)-graded in the sense of [23, Section 4.14] and the adjoint part is the trivial component of the universal grading on \(\mathcal{C}\).
While we have seen in Example 4.4 that the categories \(\mathcal{C}_{n}^{l}\) are the only categorifications of Verlinde type fusion rings, it is not clear yet that the adjoint subcategories are the only possibilities for categorifications of the adjoint fusion rings. Recent work by Etingof and Ostrik [25] shows that this is indeed the case. As a shorthand notation we write \(K_{i}\) for the Grothendieck ring of the adjoint part of \(\mathcal{C}_{2i+2}\) and \(K_{i}^{\prime}\) for the Grothendieck ring of the adjoint part of \(\mathcal{C}_{2i+1}\).
**Lemma 4.7**.: _Let \(\mathcal{C}\) be a pivotal fusion category categorifying the fusion ring \(K_{l}\) or \(K_{l^{\prime}}^{\prime}\) for \(l>2\) or \(l^{\prime}\geq 1\). Then there is a tensor equivalence \(\mathcal{C}\simeq\operatorname{Rep}(\mathfrak{so}(3)_{q})\) for \(q\) a primitive \(4(l+1)\)-th root of unity._
Proof.: This is [25, Theorem A.3 and Remark A.4(ii)]. Here \(\operatorname{Rep}(\mathfrak{so}(3)_{q})\) is the fusion category of tilting modules over the quantum enveloping algebra of \(\mathfrak{so}(3)\) specialized at the root of unity \(q\). In our notation this is the category \(\operatorname{Ad}(\mathcal{C}_{n})\).
_Remark 4.8_.: There are two exceptions to this categorification result. For \(K_{1}\) the Grothendieck ring is of the form \(K(\operatorname{Vec}(\mathbb{Z}/2\mathbb{Z}))\), which has two categorifications; for \(K_{2}\) the ring has more categorifications, see [24]. However, none of these rings appear in any of the cases we consider in this work, except for the dihedral group, for which we can use Remark 4.9.
### The asymptotic Hecke category for dihedral groups
The two \(J\)-cells of size \(1\), \(c_{0}=\{1\}\) and \(c_{n}=\{w_{0}\}\), have only one possible fusion categorification, as there is only one fusion category with a single simple object, the finite dimensional vector spaces \(\operatorname{Vec}\). The asymptotic Hecke category is therefore this trivial category; we can label its simple object by \(B_{1}\) or \(B_{w_{0}}\) depending on which cell we focus on.
For the middle cell one can do diagrammatic calculations to see that the associators coincide exactly with the ones from type \(A_{k}\) fusion categories. A small example can be seen in Example 4.11. The close connection of the diagrammatic Hecke category for dihedral groups and the Temperley-Lieb category is due to Ben Elias.
_Remark 4.9_.: By [17] the (two-colored) Temperley-Lieb category embeds as the degree \(0\) morphisms into the category of Soergel bimodules of a dihedral group. By [18, Theorem 2.15] we even have a degree-zero equivalence. This shows that the morphism spaces in the asymptotic Hecke category are exactly described by the structure constants of recoupling theory, see Example 4.4.
We can combine this information to a description of the asymptotic Hecke category for a dihedral group.
**Proposition 4.10**.: _Let \(n\geq 3\) and consider the Coxeter group \(W\) of type \(I_{2}(n)\). Let \(c\) be the two-sided cell of \(a\)-value \(1\). The asymptotic Hecke category \(\mathcal{H}_{W}^{c}\) associated to \(c\) has the following fusion data._
* _The objects are labeled by elements of_ \(c\)_:_ \(B_{w}\) _for_ \(w\in c\)_._
* _The monoidal product is as in Equation_ 4.2_, where_ \(j_{x}\) _denotes the equivalence class of_ \(B_{x}\) _in the Grothendieck ring._
* _The associators are given by Equation_ 4.7 _where for an object_ \(B_{x}\) _we plug in the length of_ \(x\) _minus one into the_ \(6j\)_-symbol._
**Example 4.11** (Non-trivial associators).: We give one small example of a calculation showing non-trivial associators.
Let \(n=3\). For \(c=\{s,t,st,ts\}\) the tensor ideal \(\mathcal{I}_{<c}\) consists of morphisms factoring over the longest word. If one follows the construction of [19, Section 11.2.5], a morphism in \(\mathcal{I}_{<c}\) needs to factor over the Jones-Wenzl projector corresponding to \(B_{sts}\). We will denote this idempotent by \(e_{sts}\in\operatorname{Hom}_{\mathcal{H}_{W}}(B_{s}B_{t}B_{s},B_{s}B_{t}B_{s})\). The morphism spaces \(\operatorname{Hom}_{\mathcal{H}_{W}^{c}}(B_{x},B_{y})\) are therefore quotients by the ideal generated by \(e_{sts}\).
In the \(J\)-ring of \(c\) we have
\[j_{st}j_{ts}=j_{s},\ j_{ts}j_{st}=j_{t}, \tag{4.12}\]
and \(j_{s}+j_{t}\) being the unit of \(J_{c}\). Now, inside \(\mathcal{H}_{W}\) we have
\[\operatorname{rk}(\operatorname{Hom}_{\mathcal{H}_{W}}(B_{st}B_{ts}B_{st},B_{ st}))=2v^{-2}+9+\dots, \tag{4.13}\]
therefore the projection is not unique. One can either use the unique projection (of the asymptotic category) for the first two terms and then the third, or first for the last two terms and then the first. However, we compute
\[\operatorname{rk}(\operatorname{Hom}_{\mathcal{H}_{W}}(B_{st}B_{ts}B_{st},B_ {sts}))=v^{-3}+6v^{-1}+\dots \tag{4.14}\]
and
\[\operatorname{rk}(\operatorname{Hom}_{\mathcal{H}_{W}}(B_{sts},B_{st}))=v+2v ^{3}+\dots, \tag{4.15}\]
hence, by the quotient construction, we get a map in \(\operatorname{Hom}_{\mathcal{H}_{W}^{c}}(B_{st}B_{ts}B_{st},B_{st})\) that is unique up to scalar. The scalar itself can be computed out of \(e_{sts}\). This idempotent is of the form \(\operatorname{id}_{B_{s}B_{t}B_{s}}=e_{sts}+f\), where \(f\) is an idempotent corresponding to the summand \(B_{s}\stackrel{{\oplus}}{{\subset}}B_{s}B_{t}B_{s}\), and one will therefore get the term \(-1\) for the associator. This is the exact value one gets following Example 4.4.
### The center of type \(A_{n}\)-fusion categories
We can now investigate the center of the asymptotic Hecke category of the dihedral group by considering the categories \(\operatorname{Ad}(\mathcal{C}_{n})\).
First, we describe the Drinfeld center of \(\mathcal{C}_{n}\). The main idea is to find a braiding on the category, since a braiding with invertible \(S\)-matrix identifies the center with \(\mathcal{C}_{n}\boxtimes\mathcal{C}_{n}^{rev}\).
**Lemma 4.12**.: _Let \(\mathcal{C}\) be a braided fusion category with invertible \(S\)-matrix. The center of \(\mathcal{C}\) has the form_
\[\mathcal{Z}(\mathcal{C})\simeq\mathcal{C}\boxtimes\mathcal{C}^{rev}, \tag{4.16}\]
_where \((-)^{rev}\) denotes the category \(\mathcal{C}\) with reverse braiding, i.e. \(c^{\prime}_{X,Y}=c_{Y,X}^{-1}\) for any braided object \((X,c_{X,-})\) in \(\mathcal{C}\)._
Proof.: This result is originally due to Müger, [54], see also [23, Propositions 8.6.1 and 8.20.12]. They show that the functors \(\mathcal{C}\to\mathcal{Z}(\mathcal{C}),\ X\mapsto(X,c_{-,X})\) and \(\mathcal{C}^{rev}\to\mathcal{Z}(\mathcal{C}),\ X\mapsto(X,c_{X,-}^{-1})\) combine into a braided tensor equivalence
\[\mathcal{C}\boxtimes\mathcal{C}^{rev}\to\mathcal{Z}(\mathcal{C}). \tag{4.17}\]
Note that the center does not depend on the braiding chosen on \(\mathcal{C}\) as long as the associated \(S\)-matrix is invertible. Hence, we can freely choose the braiding for computing the modular data of the center.
**Example 4.13**.: The categories \(\mathcal{C}_{n}\) can be endowed with a braiding. All braidings were computed by [26], see also [16] for an overview. They are classified by an integer \(l\in\mathbb{Z}/4(n+1)\mathbb{Z}\) with \((l,n+1)=1\). Remember that we defined \(\mathcal{C}_{n}\) in terms of quantum numbers \([k]\), where \([2]=q+q^{-1}\) for \(q\) a \(2(n+1)\)-th root of unity. To define a braiding we see that we need even higher roots of unity.
We choose the value \(l=1\) and set \(z\coloneqq z_{4(n+1)}\coloneqq e^{\frac{\pi i}{2(n+1)}}\) to be a \(4(n+1)\)-th root of unity, i.e. \(z_{4(n+1)}^{2}=q\). Then the braiding on \(X_{1}\otimes X_{1}\) has the form
\[\text{(diagrammatic expression for the braiding on }X_{1}\otimes X_{1}\text{ in terms of powers of }z\text{)} \tag{4.18}\]
Since \(X_{1}\) generates the category \(\mathcal{C}_{n}\) this equation defines all braidings on \(\mathcal{C}_{n}\) uniquely.
Lemma 4.12 tells us directly that \(\mathcal{Z}(\mathcal{C}_{n})\) has \(n^{2}\) simple objects. The object \(X_{i}\boxtimes X_{j}\) maps to a simple object \(X_{i}\otimes X_{j}\) in \(\mathcal{Z}(\mathcal{C}_{n})\) with a certain braiding coming from \(X_{i}\in\mathcal{C}_{n}\) and \(X_{j}\in\mathcal{C}_{n}^{rev}\).
The \(S\)-matrix of \(\mathcal{C}_{n}\) has been computed in [33]. The entry corresponding to the tuple \((X_{i},X_{j})\) is
\[S_{i,j}=(-1)^{i+j}[(i+1)(j+1)] \tag{4.19}\]
We denote the corresponding matrix by \(S_{n}=(S_{i,j})_{i,j}\). In the Deligne tensor product we get the \(S\)-matrix to be \(S_{n}\boxtimes S_{n}\), i.e. the Kronecker product of the matrix with itself.
**Example 4.14**.: We choose \(n=3\). In \(\mathcal{C}_{3}\) the braidings \(c_{X_{1},-}\) on the object \(X_{1}\) then have the form:
\[c_{X_{1},X_{0}}:X_{1}\to X_{1},\ 1 \tag{4.20}\] \[c_{X_{1},X_{1}}:X_{0}\oplus X_{2}\to X_{0}\oplus X_{2},\ (z_{16}^{5},z_{16}^{1}) \tag{4.21}\] \[c_{X_{1},X_{2}}:X_{1}\to X_{1},\ z_{16}^{4} \tag{4.22}\]
The braiding of \(X_{1}\) in \(\mathcal{C}_{3}^{rev}\) is just the inverse of the morphisms before, i.e.
\[c^{\prime}_{X_{1},X_{0}}:X_{1}\to X_{1},\ 1 \tag{4.23}\] \[c^{\prime}_{X_{1},X_{1}}:X_{0}\oplus X_{2}\to X_{0}\oplus X_{2},\ (z_{16}^{11},z_{16}^{15}) \tag{4.24}\] \[c^{\prime}_{X_{1},X_{2}}:X_{1}\to X_{1},\ z_{16}^{12} \tag{4.25}\]
We can visualize the \(3^{2}=9\) simple objects of \(\mathcal{Z}(\mathcal{C}_{3})\) by arranging them into a grid:
\[\begin{array}{llllll}X_{0}\boxtimes X_{0}&X_{0}\boxtimes X_{1}&X_{0}\boxtimes X_{2}&X_{0}&X_{1}&X_{2}\\ X_{1}\boxtimes X_{0}&X_{1}\boxtimes X_{1}&X_{1}\boxtimes X_{2}&X_{1}&X_{0}\oplus X_{2}&X_{1}\\ X_{2}\boxtimes X_{0}&X_{2}\boxtimes X_{1}&X_{2}\boxtimes X_{2}&X_{2}&X_{1}&X_{0}\end{array} \tag{4.26}\]
The left side shows objects in \(\mathcal{C}_{3}\boxtimes\mathcal{C}_{3}^{rev}\); the right side depicts the corresponding underlying object in \(\mathcal{Z}(\mathcal{C}_{3})\). We see that \(X_{1}\) occurs with \(4\) different braidings in \(\mathcal{Z}(\mathcal{C}_{3})\), while \(X_{0}\) and \(X_{2}\) occur only with two. Furthermore, there is a simple object \(X_{0}\oplus X_{2}\) in \(\mathcal{Z}(\mathcal{C}_{3})\), which is obviously not simple in \(\mathcal{C}_{3}\). Note also that all objects in \(\mathcal{Z}(\mathcal{C}_{3})\) are self-dual as they are self-dual in \(\mathcal{C}_{3}\) and hence also in the Deligne tensor product.
The \(S\)-matrix of \(\mathcal{C}_{3}\) is of the form
\[S_{3}=\begin{pmatrix}[1]&-[2]&[3]\\ -[2]&[4]&-[6]\\ [3]&-[6]&[9]\end{pmatrix}=\begin{pmatrix}1&-\sqrt{2}&1\\ -\sqrt{2}&0&\sqrt{2}\\ 1&\sqrt{2}&1\end{pmatrix} \tag{4.27}\]
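Both \(S_{3}\) and the \(S\)-matrix of the center can be generated directly from (4.19). The following numerical sketch is purely illustrative (it fixes the root of unity \(q=e^{\pi i/(n+1)}\), and the function names are ad hoc); it reproduces the matrix (4.27) and builds the \(9\times 9\) \(S\)-matrix of \(\mathcal{Z}(\mathcal{C}_{3})\) as the Kronecker square.

```python
# Illustrative sketch: the S-matrix of C_n from (4.19), S_{i,j} = (-1)^{i+j} [(i+1)(j+1)],
# and the S-matrix of Z(C_n) as its Kronecker square.
import numpy as np

def quantum(m, n):                        # [m] at q = exp(pi*i/(n+1))
    return np.sin(m * np.pi / (n + 1)) / np.sin(np.pi / (n + 1))

def S_matrix(n):
    return np.array([[(-1) ** (i + j) * quantum((i + 1) * (j + 1), n)
                      for j in range(n)] for i in range(n)])

S3 = S_matrix(3)
print(np.round(S3, 4))                    # reproduces (4.27)
S_center = np.kron(S3, S3)                # the S-matrix of Z(C_3)
print(S_center.shape)                     # (9, 9)
```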
_Remark 4.15_ (Lusztig's \(S\)-matrix).: The dihedral fusion datum by Lusztig, [45, Section 3.10] is of the following form: For \(p\geq 3\) we consider the pairs \((i,j)\) with \(0<i<j<i+j<p\) or \(0=i<j<\frac{p}{2}\), as well as two special tuples \((0,\frac{p}{2})\) and \((0,\frac{p}{2})^{\prime}\) if \(p\) is even. We then define a pairing via
\[\langle(i,j),(k,l)\rangle\coloneqq\frac{\xi^{il+jk}+\xi^{-il-jk}-\xi^{ik+jl}- \xi^{-ik-jl}}{p} \tag{4.28}\]
on non-special tuples. Here \(\xi\) is a \(p\)-th root of unity. This expression looks similar to an expression in quantum numbers; the connection has been described in [36, Section 3.4].
We set \(n\coloneqq p-1\), then the tuples \((i,j)\) correspond to the object \(X_{j-i-1}\boxtimes X_{j+i-1}\) in \(\mathcal{C}_{n}\boxtimes\mathcal{C}_{n}^{rev}\). Both special elements will correspond to two different subobjects of \(X_{\frac{n-1}{2}}\boxtimes X_{\frac{n-1}{2}}\), see Example 4.20.
For any tuple of pairs \(((i,j),(k,l))\) the \(S\)-matrix value of the corresponding entry of \((X_{j-i-1}\boxtimes X_{j+i-1},X_{k-l-1}\boxtimes X_{k+l-1})\) is then
\[(-1)^{j+k-i-l-2}[(j-i)(k-l)][(j+i)(k+l)]. \tag{4.29}\]
The quantum part of this expression then gives
\[\frac{q^{(j-i)(k-l)}-q^{-(j-i)(k-l)}}{q-q^{-1}}\frac{q^{(j+i)(k+l)}-q ^{-(j+i)(k+l)}}{q-q^{-1}} \tag{4.31}\] \[=\frac{(q^{kj-ik-lj+il}-q^{ik-kj+lj-il})(q^{jk+jl+ik+il}-q^{-jk-jl- ik-il})}{(q-q^{-1})^{2}}\] (4.32) \[=\frac{q^{2kj+2il}-q^{-2ik-2jl}-q^{2ik+2lj}+q^{-2il-2jk}}{(q-q^{-1} )^{2}}, \tag{4.30}\]
where \(q\) is a \(2(n+1)\)-th root of unity, i.e. \(q^{2}=\xi\). Indeed, this gives the result of the pairing by Lusztig modulo a factor of the form \(\frac{(q-q^{-1})^{2}}{p}\), which is exactly the normalization by the square root of the categorical dimension as in Remark 4.18.
### The center of \(\operatorname{Ad}(\mathcal{C}_{n})\)
Here we describe the Drinfeld center of \(\operatorname{Ad}(\mathcal{C}_{n})\) as calculated by [15]. We put it together with results of [33] to compute its \(S\)-matrix and see that the normalized \(S\)-matrix is the same matrix Lusztig computed in [45, Section 3], up to an involution, i.e. a permutation of the columns. There is a case distinction depending on the parity of \(n\).
#### 4.6.1. The case of \(n\) even
It was noted in [15, Lemma 3.1] that the braiding of \(\mathcal{C}_{n}\) restricted to the adjoint part \(\operatorname{Ad}(\mathcal{C}_{n})\) is still modular, i.e. the corresponding \(S\)-matrix is still invertible. In this case we can use Lemma 4.12 again.
**Lemma 4.16** ([15, Lemma 3.1]).: _We have_
\[\mathcal{Z}(\operatorname{Ad}(\mathcal{C}_{2n}))\simeq\operatorname{Ad}( \mathcal{C}_{2n})\boxtimes\operatorname{Ad}(\mathcal{C}_{2n}^{rev}). \tag{4.33}\]
**Example 4.17**.: For \(n=4\) the even part of \(\mathcal{C}_{n}\) is the Fibonacci category \(F\). We have two simple objects \((X_{0},X_{2})\) with monoidal product \(X_{2}\otimes X_{2}\simeq X_{0}\oplus X_{2}\) and trivial associators except for the map
\[X_{2}^{2}\to X_{2}^{2},\ \begin{pmatrix}\frac{[1]}{[3]}&-\frac{[2]^{2}}{[4]}\\ -\frac{[4]}{[2]^{2}[3]}&\frac{[6]}{[3][4]}\end{pmatrix}=\begin{pmatrix}\varphi ^{-1}&-1-\varphi\\ -\varphi^{-3}&-\varphi^{-1}\end{pmatrix}, \tag{4.34}\]
where \(\varphi=\frac{1+\sqrt{5}}{2}\) and \([n]\) are the quantum numbers with \([2]=\varphi\).
Furthermore, the \(S\)-matrix is the restriction of the \(S\)-matrix of \(\mathcal{C}_{4}\), \(S_{4}\), to the odd rows and columns:
\[S_{F}=\begin{pmatrix}[1]&[3]\\ [3]&[9]\end{pmatrix}=\begin{pmatrix}1&\varphi\\ \varphi&-1\end{pmatrix}. \tag{4.35}\]
Note that \([9]=-[1]=-1\) and \([2]=[3]=\varphi\). This is invertible, as expected by Lemma 4.16.
The center \(\mathcal{Z}(\operatorname{Ad}(\mathcal{C}_{4}))=\mathcal{Z}(F)\) can be visualized inside the grid of underlying objects \(X_{i}\otimes X_{j}\) as in Equation 4.26: its four simple objects sit at the positions where both indices are even,
\[\begin{array}{cccc}X_{0}&\cdot&X_{2}&\cdot\\ \cdot&\raisebox{-1.5pt}{$X_{0}\oplus X_{2}$}&\cdot&\raisebox{-1.5pt}{$X_{2}$}\\ X_{2}&\cdot&X_{0}\oplus X_{2}&\cdot\\ \cdot&\raisebox{-1.5pt}{$X_{2}$}&\cdot&\raisebox{-1.5pt}{$X_{0}$}\end{array} \tag{4.36}\]
This gives the \(S\)-matrix of \(\mathcal{Z}(\mathcal{F})\) to be
\[S_{F}\boxtimes S_{F}=\varphi\begin{pmatrix}\varphi^{-1}&1&1&\varphi\\ 1&-\varphi^{-1}&\varphi&-1\\ 1&\varphi&-\varphi^{-1}&-1\\ \varphi&-1&-1&\varphi^{-1}\end{pmatrix}. \tag{4.37}\]
Here the ordering of objects is following the columns of Equation 4.36, i.e. \(X_{0},X_{2},X_{2}\), and then \(X_{0}\oplus X_{2}\).
This matrix corresponds to Lusztig's result in [45, Section 3.10] under reordering and normalizing by the square root of the dimension of \(\mathcal{Z}(\mathcal{C})\) and applying an involution as seen in [36, Remark before Proposition 3.1].
To be more precise we have \(\dim(\mathcal{Z}(\mathcal{C}))=\dim(\mathcal{C})^{2}\), hence the normalization divides by the dimension of \(\mathcal{C}\). This is \(\dim(\mathcal{C})=\dim(X_{0})^{2}+\dim(X_{2})^{2}=1^{2}+\varphi^{2}=\frac{5+ \sqrt{5}}{2}=\sqrt{5}\varphi\). Under the ordering \((X_{0},X_{0}\oplus X_{2},X_{2},X_{2})\) we then get
\[\frac{1}{\sqrt{5}}\begin{pmatrix}\varphi^{-1}&\varphi&1&1\\ \varphi&\varphi^{-1}&-1&-1\\ 1&-1&-\varphi^{-1}&\varphi\\ 1&-1&\varphi&-\varphi^{-1}\end{pmatrix}. \tag{4.38}\]
The final twist comes from the involution \((-)^{\flat}\), which sends \((i,j)\mapsto(i,p-j)\) if \(i>0\) and is trivial otherwise, see [45, Section 3.1]. This interchanges both copies of \(X_{2}\) (the ones coming from the pairs \((1,2)\) and \((1,3)\)) and leaves the other two elements invariant. Under the involution we therefore get exactly the matrix of [45, Section 3.10].
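The passage from (4.35) to (4.38) can also be checked numerically: take the Kronecker square of \(S_{F}\), divide by \(\dim(\mathcal{C})=\sqrt{5}\varphi\), and reorder the simple objects to \((X_{0},X_{0}\oplus X_{2},X_{2},X_{2})\). The following sketch is purely illustrative; the permutation used is the one implied by the orderings of (4.37) and (4.38).

```python
# Illustrative sketch: the normalized S-matrix (4.38) from S_F of (4.35).
import numpy as np

phi = (1 + np.sqrt(5)) / 2
S_F = np.array([[1, phi], [phi, -1]])

S_Z = np.kron(S_F, S_F)                   # S-matrix of Z(F), ordered as in (4.37)
S_norm = S_Z / (np.sqrt(5) * phi)         # divide by dim(C) = 1 + phi^2 = sqrt(5)*phi

perm = [0, 3, 1, 2]                       # reorder to (X_0, X_0 + X_2, X_2, X_2)
S_reordered = S_norm[np.ix_(perm, perm)]
print(np.round(S_reordered * np.sqrt(5), 4))   # the matrix of (4.38), times sqrt(5)
```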
_Remark 4.18_.: This calculation works generally for any \(n=2m\) even, see the calculations in [36, Section 3.4]. We have \(n^{2}\) objects in \(\mathcal{Z}(\mathcal{C}_{n})\) and hence \(m^{2}\) in \(\mathcal{Z}(\operatorname{Ad}(\mathcal{C}_{n}))\). The values of the normalized \(S\)-matrix coincide with the calculations done in [36].
As one example we can look at the entry corresponding to the unit pair \((X_{0},X_{0})\). In the \(S\)-matrix it is \(1\), while it is \(\frac{1}{\dim(\operatorname{Ad}(\mathcal{C}_{n}))}\) in the normalized \(S\)-matrix. The value of the pairing \(\langle(0,1),(0,1)\rangle\) is \(-\frac{(q-q^{-1})^{2}}{p}\), and these two values agree.
#### 4.6.2. The case of odd \(n\)
Now we consider the category \(\operatorname{Ad}(\mathcal{C}_{2n+1})\). Here the restriction of the \(S\)-matrix is not invertible anymore; for example in Equation 4.27 the odd rows and columns give \(\begin{pmatrix}1&1\\ 1&1\end{pmatrix}.\)
Therefore, one cannot use Lemma 4.12 directly. There is an alternative way described in [15, Section 3].
_Construction 4.19_.: Let \(\mathcal{C}\) be a braided fusion category with braiding \(c\). For any fusion subcategory \(\mathcal{D}\subseteq\mathcal{C}\) we write \(\mathcal{D}^{\prime}\) for the _centralizer_, i.e. the full fusion subcategory consisting of all objects \(X\in\mathcal{C}\) such that \(c_{X,Y}\circ c_{Y,X}=\operatorname{id}_{Y\otimes X}\) for all \(Y\in\mathcal{D}\).
In this scenario \(\mathcal{C}\) is a \(\mathcal{D}\)-bimodule category. We define the relative center \(\mathcal{Z}_{\mathcal{D}}(\mathcal{C})\) as in [28, Section 2.2].
If \(\mathcal{C}\) is a \(G\)-graded fusion category the trivial component \(\mathcal{D}\coloneqq\mathcal{C}_{0}\subseteq\mathcal{C}\) is a fusion subcategory. By [28, Theorem 3.5] we have an isomorphism
\[\mathcal{Z}_{\mathcal{D}}(\mathcal{C})^{G}\simeq\mathcal{Z}(\mathcal{C}). \tag{4.39}\]
With this we can recover \(\mathcal{Z}(\mathcal{D})\) out of \(\mathcal{Z}(\mathcal{C})\). The simple objects in \(\mathcal{Z}(\mathcal{C})\) restricting to direct sums of the monoidal unit in \(\mathcal{C}\) under the forgetful functor \(\mathcal{Z}(\mathcal{C})\to\mathcal{C}\) form a subcategory \(\mathcal{E}\simeq\operatorname{Rep}(G)\subseteq\mathcal{Z}(\mathcal{C})\). We get an isomorphism
\[(\mathcal{E}^{\prime})_{G}\simeq\mathcal{Z}(\mathcal{D}), \tag{4.40}\]
where \((-)_{G}\) stands for the de-equivariantization.
Proof.: This is [15, Construction 3.1] using [28, Sections 2 and 3 and Corollary 3.7].
**Example 4.20**.: We continue with Example 4.14. Here the categories \(\mathcal{C}_{n}\) are \(G\coloneqq\mathbb{Z}/2\mathbb{Z}\)-graded, and \(\mathcal{D}\coloneqq\operatorname{Ad}(\mathcal{C}_{n})\) is the even or adjoint part of the category.
We have seen that the subcategory \(\mathcal{E}\simeq\operatorname{Rep}(\mathbb{Z}/2\mathbb{Z})\) is generated by two copies of \(X_{0}\). The first, the monoidal unit, has trivial braidings. The second copy's braidings are trivial on \(X_{0}\) and \(X_{2}\), but we have
\[c_{X_{0},X_{1}}:X_{1}\to X_{1},\ (-1), \tag{4.41}\]
for the braiding on \(X_{1}\). We write \((X_{0},\tilde{c})\) for this copy to distinguish it from the unit. From this we can compute the centralizer \(\mathcal{E}^{\prime}\). Note that since the braiding of \((X_{0},\tilde{c})\) on \(X_{1}\) is non-trivial, no copy of \(X_{1}\) can lie in \(\mathcal{E}^{\prime}\), as their braidings on \(X_{0}\) are trivial. All other objects however lie in the centralizer, i.e. all objects except the copies of \(X_{1}\) in
\[\begin{array}{ccc}X_{0}&\raisebox{-1.72pt}{\scalebox{0.7}{$X_{1}$}}&X_{2} \\ \raisebox{-1.72pt}{\scalebox{0.7}{$X_{1}$}}&X_{0}\oplus X_{2}&\raisebox{-1.72pt}{ \scalebox{0.7}{$X_{1}$}}.\\ X_{2}&\raisebox{-1.72pt}{\scalebox{0.7}{$X_{1}$}}&X_{0}\end{array} \tag{4.42}\]
Under the de-equivariantization both copies of \(X_{0}\) and of \(X_{2}\) in the corners become isomorphic, while the object \(X_{0}\oplus X_{2}\) decomposes into two simple objects \(X_{0}\) and \(X_{2}\) not isomorphic to the others. In total we then get 4 simple objects in the center \(\mathcal{Z}(\operatorname{Ad}(\mathcal{C}_{3}))\).
The restriction of the \(S\)-matrix of \(\mathcal{Z}(\mathcal{C}_{3})\) to the objects \(X_{0},X_{2}\) and \(X_{0}\oplus X_{2}\) has the form
\[\begin{pmatrix}1&1&2\\ 1&1&-2\\ 2&-2&0\end{pmatrix}, \tag{4.43}\]
the \(S\)-matrix of \(\mathcal{Z}(\operatorname{Ad}(\mathcal{C}_{3}))\) is of the form
\[\begin{pmatrix}1&1&1&1\\ 1&1&-1&-1\\ 1&-1&1&-1\\ 1&-1&-1&1\end{pmatrix} \tag{4.44}\]
and we see that the sum of the third and fourth rows and columns is the same as before.
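This last observation can be made explicit: collapsing the two simples into which \(X_{0}\oplus X_{2}\) splits, i.e. summing the corresponding rows and columns of (4.44), recovers the restricted matrix (4.43). A minimal illustrative sketch:

```python
# Illustrative sketch: collapsing the two simples coming from X_0 + X_2 in (4.44)
# recovers the restricted S-matrix (4.43).
import numpy as np

S_ad = np.array([[1, 1, 1, 1],
                 [1, 1, -1, -1],
                 [1, -1, 1, -1],
                 [1, -1, -1, 1]])          # the S-matrix (4.44)

collapse = np.array([[1, 0, 0],
                     [0, 1, 0],
                     [0, 0, 1],
                     [0, 0, 1]])           # identify the last two basis vectors

print(collapse.T @ S_ad @ collapse)        # equals (4.43): [[1,1,2],[1,1,-2],[2,-2,0]]
```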
_Remark 4.21_.: The calculations regarding the centralizer of the \(\operatorname{Rep}(\mathbb{Z}/2\mathbb{Z})\) subcategory have been done by Lusztig in [45, Section 3.8] and, in more detail including the \(S\)-matrix, in [36, Section 3.4]. We have \(n^{2}\) objects in the center of \(\mathcal{C}_{n}\). For \(n\) odd, i.e. \(n=2m+1\), we have \(2m^{2}+2m+1\) objects in the centralizer \(\mathcal{E}^{\prime}\). Under the de-equivariantization we get isomorphisms from the objects \(X_{i}\boxtimes X_{j}\) to \(X_{2m-i}\boxtimes X_{2m-j}\). The object \(X_{m}\boxtimes X_{m}\) decomposes into a direct sum of two simples in \(\mathcal{Z}(\operatorname{Ad}(\mathcal{C}_{n}))\), hence we are left with \(m^{2}+m+2\) simple objects, as conjectured.
**Theorem 4.22**.: _Conjecture 1.1 holds for type \(I_{2}(n)\)._
Proof.: This is the result of the observations in this section: Lemma 4.16 gives a description of the center of the asymptotic Hecke category associated to the two-sided cell in type \(I_{2}(2n+1)\), and Remark 4.21 shows that the \(S\)-matrix of the center of \(\operatorname{Ad}(\mathcal{C}_{2n+1})\simeq\mathcal{H}^{h}\) is indeed the Fourier matrix of Lusztig.
## 5. The types \(H_{3}\) and \(H_{4}\)
We give an overview of the possible \(S\)-matrices occurring for Drinfeld centers of asymptotic Hecke categories corresponding to \(J\)-cells in non-crystallographic finite Coxeter groups. The two remaining types \(H_{3}\) and \(H_{4}\) are discussed and complemented by the results of the previous sections.
### Type \(H_{3}\) and \(H_{4}\)
All cells, their \(a\)-values and asymptotic Hecke algebras of the Coxeter groups \(H_{3}\) and \(H_{4}\) have already been computed; see for example [1] for data on \(H_{4}\). It turns out that the diagonal \(H\)-cells occurring are nearly always rather small, having only one or two elements. In these cases we have mostly only one possible categorification, hence the corresponding \(S\)-matrices are easy to list. In a couple of cases the associator is not known and more calculations are needed. However, combinatorial results by Broué and Malle, see [12, Section 7], tell us which categorification should be the right one assuming Conjecture 1.1 is true.
These observations have been made in [49, Section 8] by Mackaay, Mazorchuk, Miemietz, Tubbenhauer and Zhang. The asymptotic Hecke category associated to an \(H\)-cell is denoted there by \(\mathcal{A}_{\mathcal{H}}\) and called the _asymptotic bicategory_. The construction can be found in [49, Section 3.2]. We reuse their results on asymptotic Hecke categories in types \(H_{3}\) and \(H_{4}\) and augment their observations by possible \(S\)-matrices.
Only in type \(H_{4}\) is there one cell with considerably bigger \(H\)-cells. The \(J\)-cell of \(a\)-value \(6\) contains diagonal \(H\)-cells of sizes \(14\), \(18\) and \(24\). A description of the asymptotic Hecke category associated to it is unknown; there is, however, a combinatorial result [51] about the exact \(S\)-matrix of its center, which contains \(74\) simple objects, assuming Conjecture 1.1 is true in this case. Note that this means that the centers of the three different categorifications \(\mathcal{H}_{W}^{h}\) of sizes \(14\), \(18\) and \(24\) all need to be equivalent to a category with \(74\) simples.
In type \(H_{3}\) we have \(7\) \(J\)-cells with data

(5.1) [Table of the seven two-sided cells in type \(H_{3}\) with their \(a\)-values, sizes and diagonal \(H\)-cells; together with the corresponding data for \(H_{4}\), these are grouped into the cases A, B, C and D referred to in Theorem 5.1.]
**Theorem 5.1**.: _Let \(c\) be a two-sided cell in a Coxeter group of type \(I_{2}(n),H_{3},H_{4}\). The \(S\)-matrix of the center of the asymptotic Hecke category is of the following form:_
* _If_ \(c\) _contains a diagonal_ \(H\)_-cell of size_ \(1\)_, such as the cells_ \(\{1\}\)_,_ \(\{w_{0}\}\) _in type_ \(I_{2}(n)\) _and the ones of case item A in type_ \(H\)_, the asymptotic Hecke category is_ \(\operatorname{Vec}\) _and the_ \(S\)_-matrix is_ (5.3) \[S=\left(1\right).\]
* _For the cases of item B in types_ \(H_{3}\) _and_ \(H_{4}\) _we have the asymptotic Hecke category to be_ \(\mathcal{H}_{W}^{h}\simeq\operatorname{Ad}(\mathcal{C}_{4})\)_. The normalized_ \(S\)_-matrix of its center is_ (5.4) _where_ \(\varphi=\frac{1+\sqrt{5}}{2}\)_, see Example_ 4.17_._
* _If_ \(c\) _is the middle_ \(J\)_-cell in type_ \(W=I_{2}(n+1)\) _for_ \(n\geq 2\)_, the asymptotic Hecke category satisfies_ \(\mathcal{H}_{W}^{c}\simeq\operatorname{Ad}(\mathcal{C}_{n})\)_. For_ \(n\) _even the_ \(S\)_-matrix is_ (5.5) \[S(\mathcal{Z}(\operatorname{Ad}(\mathcal{C}_{n})))=([(2i-1)(2j-1)])_{1\leq i,j\leq n}^{\otimes 2}\,,\] _where_ \([2]=q+q^{-1}\) _and_ \(q\) _is a_ \(2(n+1)\)_-th root of unity. The normalization factor of the_ \(S\)_-matrix is_ \(\frac{(q-q^{-1})^{2}}{n+1}\)_. For_ \(n=4\) _this is exactly the result of the previous item. For_ \(n\) _odd we have seen that in_ \(\mathcal{Z}(\mathcal{H}_{I_{2}(n+1)}^{h})\simeq\mathcal{Z}(\operatorname{Ad}(\mathcal{C}_{n}))\) _one simple object of_ \(\operatorname{Ad}(\mathcal{C}_{n})\boxtimes\operatorname{Ad}(\mathcal{C}_{n})\) _splits in the center, and the_ \(S\)_-matrix therefore includes the matrix (_5.5_) as well as two new rows and columns, whose entries can be computed by the pairing of Lusztig, see_ (4.44) _in Example_ 4.20 _and (_4.28_). For_ \(I_{2}(4)\) _this gives for example_ \(S(\mathcal{Z}(\operatorname{Vec}(\mathbb{Z}/2\mathbb{Z})))\) _and for_ \(I_{2}(6)\) _we get_ \(S(\mathcal{Z}(\operatorname{Vec}(S_{3})))\)_, see Corollary_ 3.12_._
* _In the case item D in type_ \(H\) _we had two possible categorifications. The_ \(S\)_-matrix in the second option is nearly_ \(S_{B}\)_; we only need to replace_ \([2]=\varphi\) _by_ \(\varphi^{-1}=\frac{1-\sqrt{5}}{2}\)_. We call this modified_ \(S\)_-matrix_ \(S_{F^{\prime}}\)_. The Fourier matrix in all of these cases, as seen in_ _[_12_]__, however is_ \(S_{B}\)_. If Conjecture_ 1.1 _is true we therefore expect to never see the second option._
* _In the case item C in type_ \(H\) _we again had two possible categorifications, namely the category of_ \(\mathbb{Z}/2\mathbb{Z}\)_-graded vector spaces, either with trivial or non-trivial twist. The possible normalized_ \(S\)_-matrices are_ (5.6) \[S_{C}\in\Biggl{\{}\frac{1}{2}\begin{pmatrix}1&1&1&1\\ 1&1&-1&-1\\ 1&-1&1&-1\\ 1&-1&-1&1\end{pmatrix},\frac{1}{2}\begin{pmatrix}1&1&1&1\\ 1&1&-1&-1\\ 1&-1&-1&1\\ 1&-1&1&-1\end{pmatrix}\Biggr{\}}.\]
_However, if Conjecture_ 1.1 _holds the Fourier matrix as computed by_ _[_12_]_ _is the second option, i.e. the same as in the exceptional cases of type_ \(E_{7}\) _and_ \(E_{8}\)_, see Corollary_ 3.14_._
* _And finally we expect the_ \(S\)_-matrix for the cell of_ \(a\)_-value_ \(6\) _in type_ \(H_{4}\) _to be the Fourier matrix computed by_ _[_51_]__._
## 6. Examples in infinite Coxeter groups
So far we have only considered finite Coxeter groups. In these cases it is clear that all two-sided cells themselves are also finite. However, one might also investigate finite two-sided cells lying in infinite Coxeter groups.
There are conjectured results on the structure of cells in infinite Coxeter groups, see [4, 3]; however, as far as the authors are aware, there are no classification results on finite two-sided or \(H\)-cells.
In this subsection we focus on two known classifications and extend the description of \(S\)-matrices of asymptotic Hecke algebras to all finite two-sided cells of \(a\)-value at most \(2\).
The cell of \(a\)-value \(0\) is always finite as it only contains the neutral element. The asymptotic Hecke category in this case is the category of finite-dimensional vector spaces \(\operatorname{Vec}\), the \(S\)-matrix of its center is (1).
### The case of \(a(1)\)-finite Coxeter groups
Let \(W\) always denote a Coxeter group with generating set \(S\). We write \(W_{i}\coloneqq\{x\in W\mid a(x)=i\}\) for the subsets of elements of a given \(a\)-value. We say that \(W\) is _\(a(i)\)-finite_ if \(W_{i}\) is finite. Hart gave a characterization of \(a(1)\)-finite Coxeter groups in [32]. The set \(W_{1}\) has always an easy description:
_Remark 6.1_.: In any irreducible Coxeter group \(W\) the unique two-sided cell of \(a\)-value \(1\), \(W_{1}\), is characterized by consisting of elements of \(W\) which have a unique reduced expression. This can be seen for example in [9, Chapter 12]. Furthermore, the left and right cells inside it are partitioned by the right and left descending set of the element, i.e. \(x\sim_{L}y\) for \(a(x)=a(y)=1\) if and only if the unique reduced expression of both elements ends in the same reflection.
**Lemma 6.2** ([32, Theorem 2.1]).: _Let \(W\) be an irreducible Coxeter group with generating set \(S\). The set \(W_{1}\) of elements of \(a\)-value \(1\) is finite if and only if all of the following conditions hold:_
1. _The set_ \(S\) _is finite_
2. _The Dynkin diagram of_ \(W\) _is a tree_
3. _There is no relation_ \(m_{s,t}=\infty\) _and at most one relation_ \(m_{s,t}>3\) _for_ \(s,t\in S\)_._
Proof.: Remember that the set \(W_{1}\) is characterized by the set of words of \(W\) which have a unique reduced expression, see Remark 6.1.
The idea is the following: In a unique expression \((s_{1},s_{2},\ldots,s_{n})\in W\) there cannot be an \(i\) such that \(s_{i}\) and \(s_{i+1}\) commute, hence we can interpret any word as a path inside the Dynkin diagram of \(W\). The question is therefore to
decide when a path represents a reduced expression and when there are only finitely many of them.
A path represents a reduced expression if and only if there is no subsequence \((s,t,s\ldots,s)\) of length \(m_{s,t}\). Hence, if \(S\) is infinite we can construct infinitely many reduced expressions, similarly if there is a cycle in the Dynkin diagram or if we have \(m_{s,t}=\infty\) for some \(s,t\).
Now assume that there are two tuples \((s,t)\) and \((u,v)\) with \(m_{s,t},m_{u,v}>3\) and let \(p\) be a path connecting both tuples. Without loss of generality, we have \(p=(t=r_{0},r_{1},\ldots,r_{n}=u)\). Let \(p^{-1}\) denote the reverse path. Then the composition \((p,v,p^{-1},s)\) represents a reduced expression, and so does any power of it, hence \(W_{1}\) is also infinite.
If now all our assumptions are satisfied we show that there is a finite number of paths giving a unique reduced expression. Let \(m_{s,t}\) be the biggest relation occurring. Any reduced expression of a path starting in an \(r\in S\) of length more than \(|S|\) includes one element \(u\in S\) at least twice. As the Dynkin diagram contains no cycles, we can therefore find a subsequence of the form \((\ldots,u,v,u,\ldots)\). This implies that \((u,v)\) is the edge \((s,t)\) with \(m_{s,t}>3\). Any path corresponding to a reduced expression can now repeat \(u,v\) at most \(m_{s,t}-1\) times. Once the path leaves this edge it cannot come back, hence the length of a reduced expression is bounded and the size of \(W_{1}\) is therefore bounded as well.
**Lemma 6.3**.: _Let \(W\) be a Coxeter group with generating set \(M\) and let \(K,L\subset M\) be disjoint subsets of \(M\). We denote the Coxeter groups generated by \(K\) and \(L\) by \(U\) and \(V\), then \(U\times V\subset W\) is a subgroup of \(W\). For two-sided cells \(c_{1}\subset U\) and \(c_{2}\subset V\) of \(a\)-value \(i\) and \(j\) the Cartesian product \(c\coloneqq c_{1}\times c_{2}\subset W\) is a two-sided cell of \(a\)-value \(i+j\) inside \(W\). The asymptotic Hecke algebra \(J_{c}\) is also isomorphic to \(J_{c_{1}}\times J_{c_{2}}\)._
Proof.: This follows quickly from the observation that for \(x\in c_{1}\) and \(y\in c_{2}\) the Kazhdan-Lusztig basis elements commute, i.e. we have \(b_{x}b_{y}=b_{y}b_{x}\) and therefore \(b_{(x,y)}=b_{x}b_{y}\). The cell and \(a\)-value computations now work independently in both summands.
**Corollary 6.4**.: _The conclusion of Lemma 6.2 still holds when \(W\) is not to be assumed irreducible. The assumptions (2) and (3) then need to hold for any connected component of the Dynkin diagram._
Proof.: We have seen in Lemma 6.3 that the \(a\)-value of a cell \(c\times d\) for \(c\) and \(d\) lying in different Coxeter groups is the sum of their \(a\)-values. Let now \(S=\coprod S_{i}\) be a disjoint union where each \(S_{i}\) represents a connected component of the Dynkin diagram. Then any cell \(c\subset W(S)\) of \(a\)-value \(1\) has the form \(\{1\}\times\{1\}\times\ldots\times c_{i}\times\ldots\times\{1\}\), for \(c_{i}\) being a cell of \(a\)-value \(1\) lying in \(W(S_{i})\). We can now apply Lemma 6.2.
**Corollary 6.5**.: _Let \(W\) be an \(a(1)\)-finite irreducible Coxeter group and let \(c\subset W\) be a two-sided cell. Let \(m\) be the value of the biggest relation occurring in the Dynkin diagram. For any tuple \((r,s)\) of reflections in \(W\) there is a
unique \(H\)-cell \(h_{r,s}\) where all words start in \(r\) and end in \(s\). The size is \(\lfloor\frac{m}{2}\rfloor\) if the shortest path connecting \(r\) and \(s\) includes the edge \(m\) and \(\lfloor\frac{m-1}{2}\rfloor\) if not._
Proof.: This follows from the proof of Lemma 6.2 by counting the number of paths corresponding to reduced expression. We need the characterization of left and right cells inside \(W_{1}\) by starting and ending letter as seen in Remark 6.1. An enumeration of \(W_{1}\) can also be found in [32, Theorem 2.5].
**Theorem 6.6**.: _Let \(W\) be an \(a(1)\)-finite Coxeter group and let \(c\) be a two-sided cell of \(W\). Then one can choose an \(H\)-cell \(h\subset c\) such that the asymptotic Hecke category \(\mathcal{H}_{W}^{h}\) is equivalent to \(\operatorname{Ad}(\mathcal{C}_{n})\) for some \(n\), i.e. the center and the \(S\)-matrix are the same as in the dihedral case as seen in Theorem 5.1._
Proof.: Following Corollary 6.4 we can assume that \(W\) is irreducible. Let \(s,t\) be generators of \(W\) such that \(m_{s,t}\) is maximal (i.e. we take the unique tuple \((s,t)\) such that \(m_{s,t}\) is greater than \(3\) if it exist). We now choose the \(H\)-cell starting and ending in \(s\), \(h:=h_{s,s}\). This cell is then the same as the \(H\)-cell of the subgroup generated only by \(s\) and \(t\), a dihedral group of order \(2m_{s,t}\). All computations of \(\mathcal{H}_{W}^{h}\) therefore reduce to the finite dihedral case.
**Example 6.7**.: One such Coxeter group has appeared in [4, Figure 1]. The Coxeter group is of type \(W_{237}\) with generators \(\langle r,s,t\mid r^{2}=s^{2}=t^{2}=(rs)^{3}=(st)^{7}=(rt)^{2}=1\rangle\). Following Corollary 6.5 we can enumerate all elements of \(a\)-value \(1\) by looking for paths corresponding to reduced expressions. We order these elements by starting and ending letter, i.e. we partition them into left and right cells: On the diagonal \(H\)-cell coming from the dihedral
\[\begin{array}{c|c|c}\{r,rstsr,rststsr\}&\{sr,stsr,ststsr\}&\{tsr,tstsr,tststsr\}\\ \hline\{rs,rsts,rststs\}&\{s,sts,ststs\}&\{ts,tsts,tststs\}\\ \hline\{rst,rstst,rststst\}&\{st,stst,ststst\}&\{t,tst,tstst\}\end{array}\]
subgroup of type \(I_{2}(7)\) the multiplication on the asymptotic Hecke algebra can be read off directly. We have for example \(j_{sts}^{2}=j_{s}+j_{sts}+j_{ststs}\). Similarly, one can work out the complete multifusion ring structure and get for example \(j_{sr}j_{rststsr}=j_{ststsr}\). The center of the asymptotic Hecke category has \(14\) simple objects.
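The path description in the proof of Lemma 6.2 is easy to automate. A minimal Python sketch of this brute-force enumeration (added here for illustration; the letter names follow the presentation above):

```python
from itertools import product

# Coxeter diagram of W_237: m(r,s) = 3, m(s,t) = 7, m(r,t) = 2 (r and t commute).
M = {('r', 's'): 3, ('s', 'r'): 3, ('s', 't'): 7, ('t', 's'): 7, ('r', 't'): 2, ('t', 'r'): 2}

def unique_reduced(word):
    # A second reduced expression exists if two neighbouring letters are equal or commute ...
    for x, y in zip(word, word[1:]):
        if x == y or M[(x, y)] == 2:
            return False
    # ... or if a braid move applies, i.e. an alternating subword x y x ... reaches length m(x, y).
    i = 0
    while i + 1 < len(word):
        a, b = word[i], word[i + 1]
        j = i + 1
        while j + 1 < len(word) and word[j + 1] == (a if (j - i) % 2 == 1 else b):
            j += 1
        if j - i + 1 >= M[(a, b)]:
            return False
        i = j
    return True

W1 = [''.join(w) for L in range(1, 9) for w in product('rst', repeat=L) if unique_reduced(w)]
print(len(W1))                                    # 27 words; no word of length 8 survives
cells = {}
for w in W1:
    cells.setdefault((w[0], w[-1]), []).append(w)
for key in sorted(cells):
    print(key, cells[key])                        # the nine H-cells of size 3 from the table above
```

Grouping the output by first and last letter reproduces the \(3\times 3\) table of \(H\)-cells displayed above.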
### The case of \(a(2)\)-finite Coxeter groups
Recent results by Green and Xu classified all irreducible Coxeter groups which are \(a(2)\)-finite. Coxeter groups where the Dynkin diagram contains a cycle have either none or infinitely many elements of \(a\)-value \(2\). For all other cases they further always described one \(H\)-cell lying in \(W_{2}\). We list their results and show that the \(S\)-matrix of the asymptotic Hecke category is the same as in the dihedral case of Theorem 5.1, by choosing an appropriate \(H\)-cell in which the asymptotic Hecke algebra is isomorphic to the Grothendieck ring of \(\operatorname{Ad}(\mathcal{C}_{n})\).
**Proposition 6.8**.: _[_31_, Theorem 3.31 and Proposition 4.15]_ _and [30, Proposition 4.1] An irreducible Coxeter group \(W\) with elements of \(a\)-value \(2\) is \(a(2)\)-finite if and only if it is of one of the following types:_
\[A_{n},\ B_{n},\ \tilde{C}_{n},\ E_{q,r},\ F_{n},\ H_{n},\ I_{n}, \tag{6.1}\]
_where_
(6.2) [Dynkin diagram of \(\tilde{C}_{n}\), a path of simple generators whose two end edges are labelled \(4\); the original figure could not be recovered from the extraction]
Proof.: Of the classification in Proposition 6.8 we are only concerned with the infinite cases \(\tilde{C}_{n},E_{q,r},F_{m},H_{l}\) for \(q\geq 2\) and \(r+q\geq 7,\)\(m>4,l>5.\) If the \(H\)-cell given has size \(1\) then the only possible categorification of the asymptotic Hecke algebra is Vec.
Therefore, we only need to check the other \(3\) remaining cases. In all of them we find that the \(H\)-cell lies in a finite parabolic subgroup in which it is also an \(H\)-cell of \(a\)-value \(2.\) In this parabolic subgroup \(h\) can be written as \(h_{1}\times h_{2},\) where both are of \(a\)-value \(1\) inside the respective Coxeter group, see Lemma 6.3. One can therefore deduce the structure of the asymptotic Hecke algebra from finite cases.
* Case \(F_{m}\) for \(m>4\): The \(H\)-cell \(h=\{24,243524\}\) lies in the parabolic subgroup \(B_{4}\) where we identify the generators \(i\) of \(B_{4}\) by \(i+1\) inside \(F_{4}.\) The asymptotic Hecke algebra in this case is the Fibonacci ring, where \(j_{24}\) is the identity and \(j_{243524}^{2}=j_{24}+j_{243524}.\) Therefore, Lemma 4.7 holds and we categorify over a type \(A_{n}\) category as in the dihedral case.
* Case \(H_{n}\) for \(n>3\): The \(H\)-cell \(h=\{24,2124\}\) lies in the finite parabolic subgroup \(I_{2}(5)\times A_{1},\) where we identify the generators \(1\) and \(2\) with those of \(I_{2}(5)\) and \(4\) with the one of \(A_{1}.\) This case therefore also reduces to the observations of Theorem 5.1.
* Case \(\tilde{C}_{n}\): The \(H\)-cell \(h=\{24,2124,2z,212z\}\) lies in the parabolic subgroup \(I_{2}(4)\times B_{n-3}.\) It is the product of their unique cells of \(a\)-value \(1,\) namely \(\{2,212\}\) in \(I_{2}(4)\) and \(\{4,z\}\) in \(B_{4}.\) Both asymptotic Hecke categories associated to them are again the Fibonacci category. The asymptotic Hecke category associated to \(h\) is therefore the product \(K(F)\times K(F).\) It could be that there are different categories categorifying this ring, however by a classification result on small rank fusion categories [37, Theorem 1.1] only the Deligne tensor product of the Fibonacci category with itself lies over the given fusion ring. The asymptotic Hecke category \(\mathcal{H}_{W}^{h}\) is therefore equivalent to \(F\boxtimes F,\) the center is \(\mathcal{Z}(F\boxtimes F)\simeq\mathcal{Z}(F)\boxtimes\mathcal{Z}(F)\) and the \(S\)-matrix is given by the Kronecker product as stated.
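The Kronecker-product description in the last case can be checked numerically. A minimal sketch, assuming the standard normalised \(S\)-matrix of the Fibonacci category \(F\):

```python
import numpy as np

phi = (1 + 5 ** 0.5) / 2                                    # golden ratio, [2] = phi
S_F = np.array([[1, phi], [phi, -1]]) / (2 + phi) ** 0.5    # normalised Fibonacci S-matrix

S_ZF = np.kron(S_F, S_F)       # S-matrix of the center Z(F), a 4 x 4 matrix
S_big = np.kron(S_ZF, S_ZF)    # Z(F x F) = Z(F) x Z(F) (Deligne product): a 16 x 16 S-matrix

print(np.allclose(S_F @ S_F, np.eye(2)), S_big.shape)       # True (16, 16)
```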
| In 2015, Lusztig [Bull. Inst. Math. Acad. Sin. (N.S.) 10 (2015), no. 1, 1-72] showed that, for a connected reductive group over the algebraic closure of a finite field, the associated (geometric) Hecke category admits a truncation within each two-sided Kazhdan--Lusztig cell. This yields a categorification of the asymptotic algebra (the J-ring), and its categorical center was shown to be equivalent to the category of unipotent character sheaves supported on the cell of this "asymptotic Hecke category". Later, Lusztig showed that an asymptotic Hecke category can be constructed for any finite Coxeter group. Lusztig conjectured that the centers of these categories are modular tensor
categories, which was proved by Elias and Williamson. Moreover |
2303.15385 | Recognizing Rigid Patterns of Unlabeled Point Clouds by Complete and
Continuous Isometry Invariants with no False Negatives and no False Positives | Rigid structures such as cars or any other solid objects are often
represented by finite clouds of unlabeled points. The most natural equivalence
on these point clouds is rigid motion or isometry maintaining all inter-point
distances. Rigid patterns of point clouds can be reliably compared only by
complete isometry invariants that can also be called equivariant descriptors
without false negatives (isometric clouds having different descriptions) and
without false positives (non-isometric clouds with the same description). Noise
and motion in data motivate a search for invariants that are continuous under
perturbations of points in a suitable metric. We propose the first continuous
and complete invariant of unlabeled clouds in any Euclidean space. For a fixed
dimension, the new metric for this invariant is computable in a polynomial time
in the number of points. | Daniel Widdowson, Vitaliy Kurlin | 2023-03-27T16:58:39 | http://arxiv.org/abs/2303.15385v1 | Recognizing Rigid Patterns of Unlabeled Point Clouds by Complete and Continuous Isometry Invariants with no False Negatives and no False Positives
###### Abstract
Rigid structures such as cars or any other solid objects are often represented by finite clouds of unlabeled points. The most natural equivalence on these point clouds is rigid motion or isometry maintaining all inter-point distances.
Rigid patterns of point clouds can be reliably compared only by complete isometry invariants that can also be called equivariant descriptors without false negatives (isometric clouds having different descriptions) and without false positives (non-isometric clouds with the same description).
Noise and motion in data motivate a search for invariants that are continuous under perturbations of points in a suitable metric. We propose the first continuous and complete invariant of unlabeled clouds in any Euclidean space. For a fixed dimension, the new metric for this invariant is computable in a polynomial time in the number of points.
## 1 Strong motivations for complete invariants
In Computer Vision, real objects such as cars and solid obstacles are considered rigid and often represented by a finite set \(C\subset\mathbb{R}^{n}\) (called a _cloud_) of \(m\)_unlabeled_ (or unordered) points, usually in low dimensions \(n=2,3,4\).
The rigidity of many real objects motivates the most fundamental equivalence of _rigid motion_[68], a composition of translations and rotations in \(\mathbb{R}^{n}\). In a general metric space \(M\), the most relevant equivalence is _isometry_: any map \(M\to M\) maintaining all inter-point distances in \(M\).
Any isometry in \(\mathbb{R}^{n}\) is a composition of a mirror reflection with some rigid motion. Any orientation-preserving isometry can be realized as a continuous rigid motion.
There is no sense in distinguishing rigid objects that are related by isometry or having the same shape. Formally, the _shape_ of a cloud \(C\) is its isometry class [49] defined as a collection of all infinitely many clouds isometric to \(C\).
The only reliable tool for distinguishing clouds up to isometry is an _invariant_ defined as a function or property preserved by any isometry. Since any isometry is bijective, the number of points is an isometry invariant, but the coordinates of points are not invariants even under translation. This simple invariant is _incomplete_ (non-injective) because non-isometric clouds can have the same number of points.
Any invariant \(I\) maps all isometric clouds to the same value. There are no isometric clouds \(C\cong C^{\prime}\) with \(I(C)\neq I(C^{\prime})\), meaning that \(I\) has _no false negatives_. Isometry invariants are also called _equivariant descriptors_[56].
A _complete_ invariant \(I\) should distinguish all non-isometric clouds, so if \(C\not\cong C^{\prime}\) then \(I(C)\neq I(C^{\prime})\). Equivalently, if \(I(C)=I(C^{\prime})\) then \(C\cong C^{\prime}\), so \(I\) has _no false positives_. Then \(I\) can be considered as a DNA-style code or genome that identifies any cloud uniquely up to isometry.
Since real data is always noisy and motions of rigid objects are important to track, a useful complete invariant must be also continuous under the movement of points.
A complete and continuous invariant for \(m=3\) points consists of three pairwise distances (sides of a triangle) and is known in school as the SSS theorem [69]. But all pairwise distances are incomplete for \(m\geq 4\)[9], see Fig. 1.
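For a quick check of this counter-example, the sketch below builds two 4-point clouds shaped like \(T\) and \(K\); the coordinates are an assumption chosen to reproduce the six distances of Fig. 1, which is not shown here.

```python
import numpy as np
from scipy.spatial.distance import pdist

T = np.array([(-2, 0), (-1, 1), (1, 1), (2, 0)], float)   # trapezium-like cloud (assumed coordinates)
K = np.array([(0, 1), (-1, 0), (1, 0), (0, -3)], float)   # kite-like cloud (assumed coordinates)

print(np.round(np.sort(pdist(T)), 4))   # [1.4142 1.4142 2. 3.1623 3.1623 4.]
print(np.round(np.sort(pdist(K)), 4))   # identical sorted distances, yet T and K are not isometric
```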
**Problem 1.1** (complete isometry invariants with computable continuous metrics).: _For any cloud of \(m\) unlabeled points in \(\mathbb{R}^{n}\), find an invariant \(I\) satisfying the properties_
_(a)_ completeness _:_ \(C,C^{\prime}\) _are isometric_ \(\Leftrightarrow I(C)=I(C^{\prime})\)_;_
_(b)_ Lipschitz continuity _: if any point of_ \(C\) _is perturbed within its_ \(\varepsilon\)_-neighborhood then_ \(I(C)\) _changes by at most_ \(\lambda\varepsilon\) _for a constant_ \(\lambda\) _and a metric_ \(d\) _satisfying these axioms:_
_1)_ \(d(I(C),I(C^{\prime}))=0\) _if and only if_ \(C\cong C^{\prime}\) _are isometric,_
_2)_ symmetry _:_ \(d(I(C),I(C^{\prime}))=d(I(C^{\prime}),I(C))\)_,_
_3)_ \(d(I(C),I(C^{\prime}))+d(I(C^{\prime}),I(C^{\prime\prime}))\geq d(I(C),I(C^{ \prime\prime}))\)_;_
_(c)_ computability _:_ \(I\) _and_ \(d\) _are computed in a polynomial time in the number_ \(m\) _of points for a fixed dimension_ \(n\)
Condition (1.1b) asking for a continuous metric is stronger than the completeness in (1.1a). Detecting an isometry \(C\cong C^{\prime}\) gives a discontinuous metric, say \(d=1\) for all non-isometric clouds \(C\ncong C^{\prime}\) even if \(C,C^{\prime}\) are nearly identical. Any metric \(d\) satisfying the first axiom in (1.1b) detects an isometry \(C\cong C^{\prime}\) by checking if \(d=0\).
Theorem 4.7 will solve Problem 1.1 for any \(m\) in \(\mathbb{R}^{n}\). Continuous invariants in Theorem 3.10 are conjectured to be complete (no known counter-examples) in any metric space. The first author implemented all algorithms, the second author wrote all theory, proofs, examples in [39, 40].
## 2 Past work on cloud recognition/classification
**Labeled clouds**\(C\subset\mathbb{R}^{n}\) are easy for isometry classification because the matrix of distances \(d_{ij}\) between indexed points \(p_{i},p_{j}\) allows us to reconstruct \(C\) by using the known distances to the previously constructed points [29, Theorem 9]. For any clouds of the same number \(m\) of labeled points, the difference between \(m\times m\) matrices of distances (or Gram matrices of \(p_{i}\cdot p_{j}\)) can be converted into a continuous metric by taking a matrix norm. If the given points are unlabeled, comparing \(m\times m\) matrices requires \(m!\) permutations, which makes this approach impractical.
**Multidimensional scaling** (MDS). For a given \(m\times m\) distance matrix of any \(m\)-point cloud \(A\), MDS [58] finds an embedding \(A\subset\mathbb{R}^{k}\) (if it exists) preserving all distances of \(M\) for a dimension \(k\leq m\). A final embedding \(A\subset\mathbb{R}^{k}\) uses eigenvectors whose ambiguity up to signs gives an exponential comparison time that can be close to \(O(2^{m})\).
**Isometry detection** refers to a simpler version of Problem 1.1 to algorithmically detect a potential isometry between given clouds of \(m\) points in \(\mathbb{R}^{n}\). The best algorithm by Brass and Knauer [10] takes \(O(m^{\lceil n/3\rceil}\log m)\) time, so \(O(m\log m)\) in \(\mathbb{R}^{3}\)[11]. These algorithms output a binary answer (yes/no) without quantifying similarity between non-isometric clouds by a continuous metric.
**The Hausdorff distance**[31] can be defined for any subsets \(A,B\) in an ambient metric space as \(d_{H}(A,B)=\max\{\vec{d}_{H}(A,B),\vec{d}_{H}(B,A)\}\), where the directed Hausdorff distance is \(\vec{d}_{H}(A,B)=\sup\limits_{p\in A}\inf\limits_{q\in B}|p-q|\). To take into account isometries, one can minimize the Hausdorff distance over all isometries [16, 18, 33]. For \(n=2\), the Hausdorff distance minimized over isometries in \(\mathbb{R}^{2}\) for sets of at most \(m\) points needs \(O(m^{5}\log m)\) time [17]. For a given \(\varepsilon>0\) and \(n>2\), the related problem to decide if \(d_{H}\leq\varepsilon\) up to translations has the time complexity \(O(m^{\lceil(n+1)/2\rceil})\)[70, Chapter 4, Corollary 6]. For general isometry, only approximate algorithms tackled minimizations for infinitely many rotations initially in \(\mathbb{R}^{3}\)[27] and in \(\mathbb{R}^{n}\)[4, Lemma 5.5].
**The Gromov-Wasserstein distances** can be defined for metric-measure spaces, not necessarily sitting in a common ambient space. The simplest Gromov-Hausdorff (GH) distance cannot be approximated with any factor less than 3 in polynomial time unless P = NP [57, Corollary 3.8]. Polynomial-time algorithms for GH were designed for ultrametric spaces [46]. However, the GH distance remains challenging to compute even for point sets in \(\mathbb{R}\), see [42] and [74].
**The Heat Kernel Signature** (\(\mathrm{HKS}\)) is a complete isometry invariant of a manifold \(M\) whose Laplace-Beltrami operator has distinct eigenvalues by [65, Theorem 1]. If \(M\) is sampled by points, \(\mathrm{HKS}\) can be discretized and remains continuous [65, section 4], but its completeness is unclear.
**Equivariant descriptors** can be experimentally optimized [48, 60] on big datasets of clouds that are split into predefined clusters. Using more hidden parameters can improve accuracy on any finite dataset at a higher cost but will require more work for any new data. Point cloud registration filters outliers [59], samples rotations for Scale Invariant Feature Transform or uses a basis [64, 66, 53, 67], which can be unstable under perturbations of a cloud. The PCA-based complete invariant of unlabelled clouds [36] can discontinuously change when a basis degenerates to a lower dimensional subspace but inspired Complete Neural Networks [32] though without the Lipschitz continuity.
**Geometric Deep Learning** produces descriptors that are equivariant by design [14] and go beyond Euclidean space \(\mathbb{R}^{n}\)[15], hence aiming to experimentally solve Problem 1.1. Motivated by obstacles in [1, 19, 20, 41, 30], Problem 1.1 needs a justified solution without relying on finite data.
**Geometric Data Science** solves analogs of Problem 1.1 for any real data objects considered up to practical equivalences instead of rigid motion on clouds [24, 25, 62]: 1-periodic discrete series [5, 36, 6], 2D lattices [13, 38], 3D lattices [12, 37, 35, 47], periodic point sets in \(\mathbb{R}^{3}\)[21, 63] and in higher dimensions [2, 3, 4]. The applications to crystalline materials [7, 54, 67, 75] led to the _Crystal Isometry Principle_[71, 72, 73] extending Mendeleev's table of elements to the _Crystal Isometry Space_ of all periodic crystals parametrised by complete invariants like a geographic map of a planet.
**Local distributions of distances** in Memoli's seminal work [44, 45] for metric-measure spaces, or shape distributions [8, 28, 43, 50], are first-order versions of the new \(\mathrm{SDD}\) below.
## 3 Simplexwise Distance Distribution (SDD)
We will refine the Sorted Distance Vector in any metric space to get a complete invariant in \(\mathbb{R}^{n}\) as shown in Fig. 2. All proofs from sections 3 and 4 are in [39, 40], respectively.
The _lexicographic_ order \(u<v\) on vectors \(u=(u_{1},\ldots,u_{h})\) and \(v=(v_{1},\ldots,v_{h})\) in \(\mathbb{R}^{h}\) means that if the first \(i\) (possibly, \(i=0\)) coordinates of \(u,v\) coincide then \(u_{i+1}<v_{i+1}\). Let \(S_{h}\) denote the permutation group on indices \(1,\ldots,h\).
**Definition 3.1** (\(\mathrm{RDD}(C;A)\)).: _Let \(C\) be a cloud of \(m\) unlabeled points in a space with a metric \(d\). Let \(A=(p_{1},\ldots,p_{h})\subset C\) be an ordered subset of \(1\leq h<m\) points. Let \(D(A)\) be the triangular distance matrix whose entry \(D(A)_{i,j-1}\) is \(d(p_{i},p_{j})\) for \(1\leq i<j\leq h\), all other entries are filled by zeros. Any permutation \(\xi\in S_{h}\) acts on \(D(A)\) by mapping \(D(A)_{ij}\) to \(D(A)_{kl}\), where \(k\leq l\) is the pair of indices \(\xi(i),\xi(j)-1\) written in increasing order._
_For any other point \(q\in C-A\), write distances from \(q\) to \(p_{1},\ldots,p_{h}\) as a column. The \(h\times(m-h)\)-matrix \(R(C;A)\) is formed by these \(m-h\) lexicographically ordered columns. The action of \(\xi\) on \(R(C;A)\) maps any \(i\)-th row to the \(\xi(i)\)-th row, after which all columns can be written again in the lexicographic order. The Relative Distance Distribution \(\mathrm{RDD}(C;A)\) is the equivalence class of the pair \([D(A),R(C;A)]\) of matrices up to permutations \(\xi\in S_{h}\)._
For a 1-point subset \(A=\{p_{1}\}\) with \(h=1\), the matrix \(D(A)\) is empty and \(R(C;A)\) is a single row of distances (in the increasing order) from \(p_{1}\) to all other points \(q\in C\). For a 2-point subset \(A=(p_{1},p_{2})\) with \(h=2\), the matrix \(D(A)\) is the single number \(d(p_{1},p_{2})\) and \(R(C;A)\) consists of two rows of distances from \(p_{1},p_{2}\) to all other points \(q\in C\).
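A direct computation of \(D(A)\) and \(R(C;A)\) is short; the sketch below (Euclidean metric, columns sorted lexicographically, the quotient by permutations of \(A\) omitted) evaluates a 2-point subset of a 4-point cloud shaped like \(T\) from Fig. 1 with assumed coordinates.

```python
import numpy as np

def rdd(C, A):
    """D(A) and the column-sorted R(C;A) of Definition 3.1 for an ordered subset A of a cloud C."""
    C = [np.asarray(p, float) for p in C]
    A = [np.asarray(p, float) for p in A]
    h = len(A)
    D = np.zeros((h - 1, h - 1))
    for i in range(h):
        for j in range(i + 1, h):
            D[i, j - 1] = np.linalg.norm(A[i] - A[j])
    rest = [q for q in C if not any(np.allclose(q, p) for p in A)]
    R = np.array([[np.linalg.norm(q - p) for q in rest] for p in A])
    R = R[:, np.lexsort(R[::-1])]        # write the m - h columns in lexicographic order
    return D, R

T = [(-2, 0), (-1, 1), (1, 1), (2, 0)]   # assumed coordinates for the cloud T of Fig. 1
D, R = rdd(T, [(-1, 1), (1, 1)])
print(D)   # [[2.]]
print(R)   # [[1.4142 3.1623], [3.1623 1.4142]]
```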
**Example 3.2** (\(\mathrm{RDD}\) for a 3-point cloud \(C\)).: _Let \(C\subset\mathbb{R}^{2}\) consist of \(p_{1},p_{2},p_{3}\) with inter-point distances \(a\leq b\leq c\) ordered counter-clockwise as in Fig. 3 (left). Then_
\[\mathrm{RDD}(C;p_{1})=[\emptyset;(b,c)],\mathrm{RDD}(C;\left(\begin{array}{c}p_{2}\\ p_{3}\end{array}\right))=[a;\left(\begin{array}{c}c\\ b\end{array}\right)],\]
\[\mathrm{RDD}(C;p_{2})=[\emptyset;(a,c)],\mathrm{RDD}(C;\left(\begin{array}{c}p_{3}\\ p_{1}\end{array}\right))=[b;\left(\begin{array}{c}a\\ c\end{array}\right)],\]
\[\mathrm{RDD}(C;p_{3})=[\emptyset;(a,b)],\mathrm{RDD}(C;\left(\begin{array}{c}p_{1}\\ p_{2}\end{array}\right))=[c;\left(\begin{array}{c}b\\ a\end{array}\right)].\]
_We will always represent \(\mathrm{RDD}\) for a specified order \(A=(p_{i},p_{j})\) of points that are written as a column. Swapping the points \(p_{1}\leftrightarrow p_{2}\) makes the last \(\mathrm{RDD}\) above equivalent to another form: \(\mathrm{RDD}(C;\left(\begin{array}{c}p_{2}\\ p_{1}\end{array}\right))=[c;\left(\begin{array}{c}a\\ b\end{array}\right)]\)._
Though \(\mathrm{RDD}(C;A)\) is defined up to a permutation \(\xi\) of \(h\) points in \(A\subset C\), we later use only \(h=n\), which makes comparisons of \(\mathrm{RDD}\)s practical in dimensions \(n=2,3\). Metrics on isometry classes of \(C\) will be independent of \(\xi\).
**Definition 3.3** (Simplexwise Distance Distribution \(\mathrm{SDD}(C;h)\)).: _Let \(C\) be a cloud of \(m\) unlabeled points in a metric space. For an integer \(1\leq h<m\), the Simplexwise Distance Distribution \(\mathrm{SDD}(C;h)\) is the unordered set of \(\mathrm{RDD}(C;A)\) for all unordered \(h\)-point subsets \(A\subset C\)._
For \(h=1\) and any \(m\)-point cloud \(C\), the distribution \(\mathrm{SDD}(C;1)\) can be considered as a matrix of \(m\) rows of ordered distances from every point \(p\in C\) to all other \(m-1\) points. If we lexicographically order these \(m\) rows and collapse any \(l>1\) identical rows into a single one with the weight \(l/m\), then we get the Pointwise Distance Distribution \(\mathrm{PDD}(C;m-1)\) introduced in [72, Definition 3.1].
The PDD was simplified to the easier-to-compare vector of Average Minimum Distances [73]: \(\mathrm{AMD}_{k}(C)=\dfrac{1}{m}\sum\limits_{i=1}^{m}d_{ik}\), where \(d_{ik}\) is the distance from a point \(p_{i}\in C\) to its \(k\)-th nearest neighbor in \(C\). These neighbor-based invariants can be computed in a near-linear time in \(m\)[23] and were pairwise compared for all 660K+ periodic crystals in the world's largest database of real materials [72]. Definition 3.4 similarly maps \(\mathrm{SDD}\) to a smaller invariant.
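A small sketch of these neighbour-based invariants (the example cloud is an assumption for illustration only):

```python
import numpy as np
from scipy.spatial.distance import cdist

def amd(C, k):
    """(AMD_1, ..., AMD_k): the average distance to the 1st, ..., k-th nearest neighbour in C."""
    C = np.asarray(C, float)
    d = np.sort(cdist(C, C), axis=1)[:, 1:k + 1]   # drop each point's zero distance to itself
    return d.mean(axis=0)

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(amd(square, 3))   # [1. 1. 1.4142...] for the unit square
```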
Recall that the 1st moment of a set of numbers \(a_{1},\ldots,a_{k}\) is the _average_\(\mu=\dfrac{1}{k}\sum\limits_{i=1}^{k}a_{i}\). The 2nd moment is the _standard deviation_\(\sigma=\sqrt{\dfrac{1}{k}\sum\limits_{i=1}^{k}(a_{i}-\mu)^{2}}\). For \(l\geq 3\), the \(l\)-th _standardized moment_[34, section 2.7] is \(\dfrac{1}{k}\sum\limits_{i=1}^{k}\left(\dfrac{a_{i}-\mu}{\sigma}\right)^{l}\).
Figure 2: Hierarchy of new invariants on top of the classical \(\mathrm{SDV}\).
**Definition 3.4** (Simplexwise Distance Moments \(\mathrm{SDM}\)).: _For any \(m\)-point cloud \(C\) in a metric space, let \(A\subset C\) be a subset of \(h\) unordered points. The Sorted Distance Vector \(\mathrm{SDV}(A)\) is the list of all \(\frac{h(h-1)}{2}\) pairwise distances between points of \(A\) written in increasing order. The vector \(\vec{R}(C;A)\in\mathbb{R}^{m-h}\) is obtained from the \(h\times(m-h)\) matrix \(R(C;A)\) in Definition 3.1 by writing the vector of \(m-h\) column averages in increasing order._
_The pair \([\mathrm{SDV}(A);\vec{R}(C;A)]\) is the Average Distance Distribution \(\mathrm{ADD}(C;A)\) considered as a vector of length \(\frac{h(h-3)}{2}+m\). The unordered collection of \(\mathrm{ADD}(C;A)\) for all \(\binom{m}{h}\) unordered subsets \(A\subset C\) is the Average Simplexwise Distribution \(\mathrm{ASD}(C;h)\). The Simplexwise Distance Moment \(\mathrm{SDM}(C;h,l)\) is the \(l\)-th (standardized for \(l\geq 3\)) moment of \(\mathrm{ASD}(C;h)\) considered as a probability distribution of \(\binom{m}{h}\) vectors, separately for each coordinate._
**Example 3.5** (\(\mathrm{SDD}\) and \(\mathrm{SDM}\) for \(T,K\)).: _Fig. 1 shows the non-isometric 4-point clouds \(T,K\) with the same Sorted Distance Vector \(\mathrm{SDV}=\{\sqrt{2},\sqrt{2},2,\sqrt{10},\sqrt{10},4\}\), see infinitely many examples in [9]. The arrows on the edges of \(T,K\) show orders of points in each pair of vertices for \(\mathrm{RDDs}\). Then \(T,K\) are distinguished up to isometry by \(\mathrm{SDD}(T;2)\neq\mathrm{SDD}(K;2)\) in Table 1. The 1st coordinate of \(\mathrm{SDM}(C;2,1)\in\mathbb{R}^{3}\) is the average of 6 distances from \(\mathrm{SDV}(T)=\mathrm{SDV}(K)\) but the other two coordinates (column averages from \(R(C;A)\) matrices) differ._
Some of the \(\binom{m}{h}\)\(\mathrm{RDDs}\) in \(\mathrm{SDD}(C;h)\) can be identical as in Example 3.5. If we collapse any \(l>1\) identical \(\mathrm{RDDs}\) into a single \(\mathrm{RDD}\) with the _weight_\(l/\binom{m}{h}\), \(\mathrm{SDD}\) can be considered as a weighted probability distribution of \(\mathrm{RDDs}\).
The \(m-h\) permutable columns of the matrix \(R(C;A)\) in \(\mathrm{RDD}\) from Definition 3.1 can be interpreted as \(m-h\) unlabeled points in \(\mathbb{R}^{h}\). Since any isometry is bijective, the simplest metric respecting bijections is the bottleneck distance, which is also called the Wasserstein distance \(W_{\infty}\).
**Definition 3.6** (bottleneck distance \(W_{\infty}\)).: _For any vector \(v=(v_{1},\ldots,v_{n})\in\mathbb{R}^{n}\), the Minkowski norm is \(||v||_{\infty}=\max\limits_{i=1,\ldots,n}|v_{i}|\). For any vectors or matrices \(N,N^{\prime}\) of the same size, the Minkowski distance is \(L_{\infty}(N,N^{\prime})=\max\limits_{i,j}|N_{ij}-N^{\prime}_{ij}|\). For clouds \(C,C^{\prime}\subset\mathbb{R}^{n}\) of \(m\) unlabeled points, the bottleneck distance \(W_{\infty}(C,C^{\prime})=\inf\limits_{g:C\to C^{\prime}}\sup\limits_{p\in C}||p-g(p)||_{\infty}\) is minimized over all bijections \(g:C\to C^{\prime}\)._
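Since the bottleneck distance minimises over all bijections, a brute-force computation is only practical for small clouds, but it makes Definition 3.6 concrete. A minimal sketch with assumed toy clouds:

```python
import numpy as np
from itertools import permutations

def bottleneck(C, D):
    """W_inf(C, D): the smallest over all bijections g of the largest Chebyshev move ||p - g(p)||_inf."""
    C, D = np.asarray(C, float), np.asarray(D, float)
    best = np.inf
    for perm in permutations(range(len(D))):
        cost = max(np.max(np.abs(C[i] - D[j])) for i, j in enumerate(perm))
        best = min(best, cost)
    return best

print(bottleneck([(0, 0), (1, 0), (0, 1)], [(0.1, 0), (1, 0.2), (0, 1)]))   # 0.2
```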
**Lemma 3.7** (the max metric \(M_{\infty}\) on \(\mathrm{RDDs}\)).: _For any \(m\)-point clouds and ordered \(h\)-point subsets \(A\subset C\) and \(A^{\prime}\subset C^{\prime}\), set \(d(\xi)=\max\{L_{\infty}(\xi(D(A)),D(A^{\prime})),W_{\infty}(\xi(R(C;A)),R(C^{ \prime};A^{\prime}))\}\) for a permutation \(\xi\in S_{h}\) on \(h\) points. Then the max metric \(M_{\infty}(\mathrm{RDD}(C;A),\mathrm{RDD}(C^{\prime};A^{\prime}))=\min\limits_ {\xi\in S_{h}}d(\xi)\) satisfies all metric axioms on \(\mathrm{RDDs}\) from Definition 3.1 and can be computed in time \(O(h!(h^{2}+m^{1.5}\log^{h}m))\)._
We will use only \(h=n\) for Euclidean space \(\mathbb{R}^{n}\), so the factor \(h!\) in Lemma 3.7 is practically small for \(n=2,3\).
For \(h=1\) and a 1-point subset \(A\subset C\), the matrix \(D(A)\) is empty, so \(d(\xi)=W_{\infty}(\xi(R(C;A)),R(C^{\prime};A^{\prime}))\). The metric \(M_{\infty}\) on \(\mathrm{RDDs}\) will be used for intermediate costs to get metrics between two unordered collections of \(\mathrm{RDDs}\) by using standard Definitions 3.8 and 3.9 below.
**Definition 3.8** (Linear Assignment Cost LAC [26]).: _For any \(k\times k\) matrix of costs \(c(i,j)\geq 0\), \(i,j\in\{1,\ldots,k\}\), the Linear Assignment Cost \(\mathrm{LAC}=\frac{1}{k}\min\limits_{g}\sum\limits_{i=1}^{k}c(i,g(i))\) is minimized for all bijections \(g\) on the indices \(1,\ldots,k\)._
The normalization factor \(\frac{1}{k}\) in \(\mathrm{LAC}\) makes this metric better comparable with \(\mathrm{EMD}\) whose weights sum up to 1.
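The Linear Assignment Cost is the classical optimal-assignment problem up to the factor \(\frac{1}{k}\), so standard solvers apply directly. A minimal sketch on an assumed cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

costs = np.array([[0.0, 2.0, 1.0],
                  [1.5, 0.5, 2.5],
                  [2.0, 1.0, 0.2]])          # assumed k x k matrix of costs c(i, j)
rows, cols = linear_sum_assignment(costs)    # optimal bijection g on the indices
lac = costs[rows, cols].sum() / len(costs)   # normalisation by k as in Definition 3.8
print(list(zip(rows, cols)), lac)            # [(0, 0), (1, 1), (2, 2)] 0.2333...
```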
\begin{table}
\begin{tabular}{l|l}
\(\mathrm{RDD}(T;A)\) in \(\mathrm{SDD}(T;2)\) & \(\mathrm{RDD}(K;A)\) in \(\mathrm{SDD}(K;2)\) \\
\([\sqrt{2},\left(\begin{array}{cc}2&\sqrt{10}\\ \sqrt{10}&4\end{array}\right)]\times 2\) & \([\sqrt{2},\left(\begin{array}{cc}2&\sqrt{10}\\ \sqrt{2}&4\end{array}\right)]\times 2\) \\
\([2,\left(\begin{array}{cc}\sqrt{2}&\sqrt{10}\\ \sqrt{10}&\sqrt{2}\end{array}\right)]\) & \([2,\left(\begin{array}{cc}\sqrt{2}&\sqrt{10}\\ \sqrt{2}&\sqrt{10}\end{array}\right)]\) \\
\([\sqrt{10},\left(\begin{array}{cc}\sqrt{2}&4\\ 2&\sqrt{2}\end{array}\right)]\times 2\) & \([\sqrt{10},\left(\begin{array}{cc}\sqrt{2}&2\\ 4&\sqrt{10}\end{array}\right)]\times 2\) \\
\([4,\left(\begin{array}{cc}\sqrt{2}&\sqrt{10}\\ \sqrt{10}&\sqrt{2}\end{array}\right)]\) & \([4,\left(\begin{array}{cc}\sqrt{2}&\sqrt{2}\\ \sqrt{10}&\sqrt{10}\end{array}\right)]\) \\
\hline
\(\mathrm{ADD}(T;A)\) in \(\mathrm{ASD}(T;2)\) & \(\mathrm{ADD}(K;A)\) in \(\mathrm{ASD}(K;2)\) \\
\([\sqrt{2},(\frac{2+\sqrt{10}}{2},\frac{4+\sqrt{10}}{2})]\times 2\) & \([\sqrt{2},(\frac{2+\sqrt{2}}{2},\frac{4+\sqrt{10}}{2})]\times 2\) \\
\([2,(\frac{\sqrt{2}+\sqrt{10}}{2},\frac{\sqrt{2}+\sqrt{10}}{2})]\) & \([2,(\sqrt{2},\sqrt{10})]\) \\
\([\sqrt{10},(\frac{2+\sqrt{2}}{2},\frac{4+\sqrt{2}}{2})]\times 2\) & \([\sqrt{10},(\frac{2+\sqrt{10}}{2},\frac{4+\sqrt{2}}{2})]\times 2\) \\
\([4,(\frac{\sqrt{2}+\sqrt{10}}{2},\frac{\sqrt{2}+\sqrt{10}}{2})]\) & \([4,(\frac{\sqrt{2}+\sqrt{10}}{2},\frac{\sqrt{2}+\sqrt{10}}{2})]\) \\
\hline
\(\mathrm{SDM}_{1}=\frac{3+\sqrt{2}+\sqrt{10}}{3}\) & \(\mathrm{SDM}_{1}=\frac{3+\sqrt{2}+\sqrt{10}}{3}\) \\
\(\mathrm{SDM}_{2}=\frac{8+4\sqrt{2}+4\sqrt{10}}{12}\) & \(\mathrm{SDM}_{2}=\frac{8+5\sqrt{2}+3\sqrt{10}}{12}\) \\
\(\mathrm{SDM}_{3}=\frac{16+4\sqrt{2}+4\sqrt{10}}{12}\) & \(\mathrm{SDM}_{3}=\frac{16+3\sqrt{2}+5\sqrt{10}}{12}\)
\end{tabular}
\end{table}
Table 1: The Simplexwise Distance Distributions from Definition 3.3 for the 4-point clouds \(T,K\subset\mathbb{R}^{2}\) in Fig. 1. The symbol \(\times 2\) indicates a doubled \(\mathrm{RDD}\). The three bottom rows show coordinates of \(\mathrm{SDM}(C;2,1)\in\mathbb{R}^{3}\) from Definition 3.4 for \(h=2\), \(l=1\).
**Definition 3.9** (Earth Mover's Distance on distributions).: _Let \(B=\{B_{1},\ldots,B_{k}\}\) be a finite unordered set of objects with weights \(w(B_{i})\), \(i=1,\ldots,k\). Consider another set \(D=\{D_{1},\ldots,D_{l}\}\) with weights \(w(D_{j})\), \(j=1,\ldots,l\). Assume that a distance between \(B_{i},D_{j}\) is measured by a metric \(d(B_{i},D_{j})\). A flow from \(B\) to \(D\) is a \(k\times l\) matrix whose entry \(f_{ij}\in[0,1]\) represents a partial flow from an object \(B_{i}\) to \(D_{j}\). The Earth Mover's Distance [55] is the minimum of \(\operatorname{EMD}(B,D)=\sum\limits_{i=1}^{k}\sum\limits_{j=1}^{l}f_{ij}d(B _{i},D_{j})\) over \(f_{ij}\in[0,1]\) subject to \(\sum\limits_{j=1}^{l}f_{ij}\leq w(B_{i})\) for \(i=1,\ldots,k\), \(\sum\limits_{i=1}^{k}f_{ij}\leq w(D_{j})\) for \(j=1,\ldots,l\), and \(\sum\limits_{i=1}^{k}\sum\limits_{j=1}^{l}f_{ij}=1\)._
The first condition \(\sum\limits_{j=1}^{l}f_{ij}\leq w(B_{i})\) means that not more than the weight \(w(B_{i})\) of the object \(B_{i}\) 'flows' into all \(D_{j}\) via the flows \(f_{ij}\), \(j=1,\ldots,l\). The second condition \(\sum\limits_{i=1}^{k}f_{ij}\leq w(D_{j})\) means that all flows \(f_{ij}\) from \(B_{i}\) for \(i=1,\ldots,k\) 'flow' to \(D_{j}\) up to its weight \(w(D_{j})\). The last condition \(\sum\limits_{i=1}^{k}\sum\limits_{j=1}^{l}f_{ij}=1\) forces all \(B_{i}\) to collectively 'flow' into all \(D_{j}\). \(\operatorname{LAC}\)[26] and \(\operatorname{EMD}\)[55] can be computed in a near cubic time in the sizes of given sets of objects.
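Since all constraints of Definition 3.9 are linear in the flows \(f_{ij}\), the \(\operatorname{EMD}\) can be obtained from any linear-programming solver. A minimal sketch with assumed weights and costs:

```python
import numpy as np
from scipy.optimize import linprog

def emd(wB, wD, dist):
    """Earth Mover's Distance of Definition 3.9 as a linear program in the k*l flows f_ij."""
    k, l = len(wB), len(wD)
    c = np.asarray(dist, float).ravel()                  # objective: sum_ij f_ij d(B_i, D_j)
    A_ub, b_ub = [], []
    for i in range(k):                                   # sum_j f_ij <= w(B_i)
        row = np.zeros(k * l); row[i * l:(i + 1) * l] = 1
        A_ub.append(row); b_ub.append(wB[i])
    for j in range(l):                                   # sum_i f_ij <= w(D_j)
        col = np.zeros(k * l); col[j::l] = 1
        A_ub.append(col); b_ub.append(wD[j])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=[np.ones(k * l)], b_eq=[1.0],     # sum_ij f_ij = 1
                  bounds=(0, 1))
    return res.fun

print(emd([0.5, 0.5], [0.25, 0.75], [[0.0, 1.0], [1.0, 0.0]]))   # 0.25
```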
Theorems 3.10(c) and 4.7 will extend \(O(m^{1.5}\log^{n}m)\) algorithms for fixed clouds of \(m\) unlabeled points in [22, Theorem 6.5] to the harder case of isometry classes but keep the polynomial time in \(m\) for a fixed dimension \(n\). All complexities are for a random-access machine (RAM) model.
**Theorem 3.10** (invariance and continuity of \(\operatorname{SDDs}\)).: _(a) For \(h\geq 1\) and any cloud \(C\) of \(m\) unlabeled points in a metric space, \(\operatorname{SDD}(C;h)\) is an isometry invariant, which can be computed in time \(O(m^{h+1}/(h-1)!)\). For any \(l\geq 1\), the invariant \(\operatorname{SDM}(C;h,l)\in\mathbb{R}^{m+\frac{h(h-3)}{2}}\) has the same time._
_For any \(m\)-point clouds \(C,C^{\prime}\) in their own metric spaces and \(h\geq 1\), let the Simplexwise Distance Distributions \(\operatorname{SDD}(C;h)\) and \(\operatorname{SDD}(C^{\prime};h)\) consist of \(k=\binom{m}{h}\operatorname{RDD}\)s with equal weights \(\frac{1}{k}\) without collapsing identical \(\operatorname{RDD}\)s. (b) Using the \(k\times k\) matrix of costs computed by the metric \(M_{\infty}\) between \(\operatorname{RDD}\)s from \(\operatorname{SDD}(C;h)\) and \(\operatorname{SDD}(C^{\prime};h)\), the Linear Assignment Cost \(\operatorname{LAC}\) from Definition 3.8 satisfies all metric axioms on \(\operatorname{SDD}\)s and can be computed in time \(O(h!(h^{2}+m^{1.5}\log^{h}m)k^{2}+k^{3}\log k)\). (c) Let \(\operatorname{SDD}(C;h)\) and \(\operatorname{SDD}(C^{\prime};h)\) have a maximum size \(l\leq k\) after collapsing identical \(\operatorname{RDD}\)s. Then \(\operatorname{EMD}\) from Definition 3.9 satisfies all metric axioms on \(\operatorname{SDD}\)s and is computed in time \(O(h!(h^{2}+m^{1.5}\log^{h}m)l^{2}+l^{3}\log l)\). (d) Let \(C^{\prime}\) be obtained from \(C\) by perturbing each point within its \(\varepsilon\)-neighborhood. For any \(h\geq 1\), \(\operatorname{SDD}(C;h)\) changes by at most \(2\varepsilon\) in the \(\operatorname{LAC}\) and \(\operatorname{EMD}\) metrics. The lower bound holds: \(\operatorname{EMD}\bigl{(}\operatorname{SDD}(C;h),\operatorname{SDD}(C^{\prime} ;h)\bigr{)}\geq|\operatorname{SDM}(C;h,1)-\operatorname{SDM}(C^{\prime};h,1) |_{\infty}\).
Theorem 3.10(d) substantially generalizes the fact that perturbing two points in their \(\varepsilon\)-neighborhoods changes the Euclidean distance between these points by at most \(2\varepsilon\).
We conjecture that \(\operatorname{SDD}(C;h)\) is a complete isometry invariant of a cloud \(C\subset\mathbb{R}^{n}\) for some \(h\geq n-1\). [39, section 4] shows that \(\operatorname{SDD}(C;2)\) distinguished all infinitely many known pairs [51, Fig. S4] of non-isometric \(m\)-point clouds \(C,C^{\prime}\subset\mathbb{R}^{3}\) with identical \(\operatorname{PDD}(C)=\operatorname{SDD}(C;1)\).
## 4 Simplexwise Centered Distribution (SCD)
While all constructions of section 3 hold in any metric space, this section develops faster continuous metrics for complete isometry invariants of unlabeled clouds in \(\mathbb{R}^{n}\).
The Euclidean structure of \(\mathbb{R}^{n}\) allows us to translate the _center of mass_\(\frac{1}{m}\sum\limits_{p\in C}p\) of a given \(m\)-point cloud \(C\subset\mathbb{R}^{n}\) to the origin \(0\in\mathbb{R}^{n}\). Then Problem 1.1 reduces to only rotations around \(0\) from the orthogonal group \(\operatorname{O}(\mathbb{R}^{n})\).
Though the center of mass is uniquely determined by any cloud \(C\subset\mathbb{R}^{n}\) of unlabeled points, real applications may offer one or several labeled points of \(C\) that substantially speed up metrics on invariants. For example, an atomic neighborhood in a solid material is a cloud \(C\subset\mathbb{R}^{3}\) of atoms around a central atom, which may not be the center of mass of \(C\), but is suitable for all methods below.
This section studies metrics on complete invariants of \(C\subset\mathbb{R}^{n}\) up to rotations around the origin \(0\in\mathbb{R}^{n}\), which may or may not belong to \(C\) or be its center of mass.
For any subset \(A=\{p_{1},\ldots,p_{n-1}\}\subset C\), the distance matrix \(D(A\cup\{0\})\) from Definition 3.1 has size \((n-1)\times(n-1)\) and its last column can be chosen to include the distances from \(n-1\) points of \(A\) to the origin at \(0\in\mathbb{R}^{n}\).
Any \(n\) vectors \(v_{1},\ldots,v_{n}\in\mathbb{R}^{n}\) can be written as columns in the \(n\times n\) matrix whose determinant has a _sign_\(\pm 1\) or \(0\) if \(v_{1},\ldots,v_{n}\) are linearly dependent. Any permutation \(\xi\in S_{n}\) on indices \(1,\ldots,n\) is a composition of some \(t\) transpositions \(i\leftrightarrow j\) and has \(\operatorname{sign}(\xi)=(-1)^{t}\).
**Definition 4.1** (Simplexwise Centered Distribution \(\operatorname{SCD}\)).: _Let \(C\subset\mathbb{R}^{n}\) be any cloud of \(m\) unlabeled points. For any ordered subset \(A\) of points \(p_{1},\ldots,p_{n-1}\in C\), the matrix \(R(C;A)\) from Definition 3.1 has a column of Euclidean distances \(|q-p_{1}|,\ldots,|q-p_{n-1}|\). At the bottom of this column, add the distance \(|q-0|\) to the origin and the sign of the determinant of the \(n\times n\) matrix consisting of the vectors \(q-p_{1},\ldots,q-p_{n-1},q\). The resulting \((n+1)\times(m-n+1)\)-matrix with signs in the bottom \((n+1)\)-st row is the oriented relative distance matrix \(M(C;A\cup\{0\})\)._
_Any permutation \(\xi\in S_{n-1}\) of \(n-1\) points of \(A\) acts on \(D(A)\), permutes the first \(n-1\) rows of \(M(C;A\cup\{0\})\) and multiplies every sign in the \((n+1)\)-st row by \(\operatorname{sign}(\xi)\)._
_The Oriented Centered Distribution \(\operatorname{OCD}(C;A)\) is the equivalence class of pairs \([D(A\cup\{0\}),M(C;A\cup\{0\})]\) considered up to permutations \(\xi\in S_{n-1}\) of points of \(A\)._
_The Simplexwise Centered Distribution \(\operatorname{SCD}(C)\) is the unordered set of the distributions \(\operatorname{OCD}(C;A)\) for all \(\binom{m}{n-1}\) unordered \((n-1)\)-point subsets \(A\subset C\). The mirror image \(\overline{\operatorname{SCD}}(C)\) is obtained from \(\operatorname{SCD}(C)\) by reversing all signs._
Definition 4.1 needs no permutations for any \(C\subset\mathbb{R}^{2}\) as \(n-1=1\). Columns of \(M(C;A\cup\{0\})\) can be lexicographically ordered without affecting the metric in Lemma 4.6. Some of the \(\binom{m}{n-1}\)\(\operatorname{OCDs}\) in \(\operatorname{SCD}(C)\) can be identical as in Example 4.2(b). If we collapse any \(l>1\) identical \(\operatorname{OCDs}\) into a single \(\operatorname{OCD}\) with the _weight_\(l/\binom{m}{n-1}\), \(\operatorname{SCD}\) can be considered as a weighted probability distribution of \(\operatorname{OCDs}\).
**Example 4.2** (\(\operatorname{SCD}\) for clouds in Fig. 3).: _(a) Let \(R\subset\mathbb{R}^{2}\) consist of the vertices \(p_{1}=(0,0)\), \(p_{2}=(4,0)\), \(p_{3}=(0,3)\) of the right-angled triangle in Fig. 3 (middle). Though \(p_{1}=(0,0)\) is included in \(R\) and is not its center of mass, \(\operatorname{SCD}(R)\) still makes sense. In \(\operatorname{OCD}(R;p_{1})=[0,\left(\begin{array}{cc}4&3\\ 4&3\\ 0&0\end{array}\right)]\), the matrix \(D(\{p_{1},0\})\) is \(|p_{1}-0|=0\), the top row has \(|p_{2}-p_{1}|=4\), \(|p_{3}-p_{1}|=3\). In \(\operatorname{OCD}(R;p_{2})=[4,\left(\begin{array}{cc}4&5\\ 0&3\\ 0&-\end{array}\right)]\), the first row has \(|p_{1}-p_{2}|=4\), \(|p_{3}-p_{2}|=5\), the second row has \(|p_{1}-0|=0\), \(|p_{3}-0|=3\), \(\det\left(\begin{array}{cc}-4&0\\ 3&3\end{array}\right)<0\). In \(\operatorname{OCD}(R;p_{3})=[3,\left(\begin{array}{cc}3&5\\ 0&4\\ 0&+\end{array}\right)]\), the first row has \(|p_{1}-p_{3}|=3\), \(|p_{2}-p_{3}|=5\), the second row has \(|p_{1}-0|=0\), \(|p_{2}-0|=4\), \(\det\left(\begin{array}{cc}4&4\\ -3&0\end{array}\right)>0\). So \(\operatorname{SCD}(R)\) consists of the three Oriented Centered Distributions \(\operatorname{OCDs}\) above._
_If we reflect \(R\) with respect to the \(x\)-axis, the new cloud \(\bar{R}\) of the points \(p_{1},p_{2},\bar{p}_{3}=(0,-3)\) has \(\operatorname{SCD}(\bar{R})=\overline{\operatorname{SCD}}(R)\) with \(\operatorname{OCD}(\bar{R};p_{1})=\operatorname{OCD}(R;p_{1})\), \(\operatorname{OCD}(\bar{R};p_{2})=[4,\left(\begin{array}{cc}4&5\\ 0&3\\ 0&+\end{array}\right)]\), \(\operatorname{OCD}(\bar{R};\bar{p}_{3})=[3,\left(\begin{array}{cc}3&5\\ 0&4\\ 0&-\end{array}\right)]\), whose signs changed under reflection, so \(\operatorname{SCD}(R)\neq\operatorname{SCD}(\bar{R})\)._
_(b) Let \(S\subset\mathbb{R}^{2}\) consist of \(m=4\) points \((\pm 1,0),(0,\pm 1)\) that are vertices of the square in Fig. 3 (right). The center of mass is \(0\in\mathbb{R}^{2}\) and has a distance \(1\) to each point of \(S\)._
_For each 1-point subset \(A=\{p\}\subset S\), the distance matrix \(D(A\cup\{0\})\) on two points is the single number \(1\). The matrix \(M(S;A\cup\{0\})\) has \(m-n+1=3\) columns. For \(p_{1}=(1,0)\), we have \(M(S;\left(\begin{array}{cc}p_{1}\\ 0\end{array}\right))=\left(\begin{array}{ccc}\sqrt{2}&\sqrt{2}&2\\ 1&1&1\\ -&+&0\end{array}\right)\), where the columns are ordered according to \(p_{2}=(0,-1)\), \(p_{3}=(0,1)\), \(p_{4}=(-1,0)\) in Fig. 3 (right). The sign in the bottom right corner is 0 because the points \(p_{1},0,p_{4}\) are in a straight line. Due to the rotational symmetry, \(M(S;\{p_{i},0\})\) is independent of \(i=1,2,3,4\). So \(\operatorname{SCD}(S)\) can be considered as one \(\operatorname{OCD}=[1,M(S;\left(\begin{array}{cc}p_{1}\\ 0\end{array}\right))]\) of weight 1._
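In the plane the signs of Definition 4.1 are \(2\times 2\) determinants, so Example 4.2(a) can be reproduced in a few lines. A minimal sketch for \(n=2\) and 1-point subsets \(A=\{p\}\):

```python
import numpy as np

def ocd_2d(C, p):
    """OCD(C; {p}) in the plane: the distance |p - 0| and, for every other q, the column
    (|q - p|, |q - 0|, sign det(q - p, q)) as in Definition 4.1."""
    C = [np.asarray(q, float) for q in C]
    p = np.asarray(p, float)
    cols = []
    for q in C:
        if np.allclose(q, p):
            continue
        s = np.sign(np.linalg.det(np.column_stack([q - p, q])))
        cols.append((np.linalg.norm(q - p), np.linalg.norm(q), s))
    return np.linalg.norm(p), cols

R = [(0, 0), (4, 0), (0, 3)]
for p in R:
    print(p, ocd_2d(R, p))
# p_2 = (4, 0) yields the signed column (5, 3, -1) and p_3 = (0, 3) yields (5, 4, +1),
# matching the signs of Example 4.2(a).
```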
Example 4.2(b) illustrates the key discontinuity challenge: if \(p_{4}=(-1,0)\) is perturbed, the corresponding sign can discontinuously change to \(+1\) or \(-1\). To get a continuous metric on \(\operatorname{OCDs}\), we will multiply each sign by a continuous _strength_ function that vanishes for any zero sign.
**Definition 4.3** (_strength \(\sigma(A)\)_ of a simplex).: _For a set \(A\) of \(n+1\) points \(q=p_{0},p_{1},\ldots,p_{n}\) in \(\mathbb{R}^{n}\), let \(p(A)=\frac{1}{2}\sum\limits_{i\neq j}^{n+1}|p_{i}-p_{j}|\) be half of the sum of all pairwise distances. Let \(V(A)\) denote the volume of the \(n\)-dimensional simplex on the set \(A\). Define the strength \(\sigma(A)=V^{2}(A)/p^{2n-1}(A)\)._
_For \(n=2\) and a triangle \(A\) with sides \(a,b,c\) in \(\mathbb{R}^{2}\), Heron's formula gives \(\sigma(A)=\frac{(p-a)(p-b)(p-c)}{p^{2}}\), \(p=\frac{a+b+c}{2}=p(A)\) is the half-perimeter of \(A\)._
For \(n=1\) and a set \(A=p_{0},p_{1}\subset\mathbb{R}\), the volume is \(V(A)=|p_{0}-p_{1}|=2p(A)\), so \(\sigma(A)=2|p_{0}-p_{1}|\).
The strength \(\sigma(A)\) depends only on the distance matrix \(D(A)\) from Definition 3.1, so the notation \(\sigma(A)\) is used only for brevity. In any \(\mathbb{R}^{n}\), the squared volume \(V^{2}(A)\) is expressed by the Cayley-Menger determinant [61] in pairwise distances between points of \(A\). Importantly, the strength \(\sigma(A)\) vanishes when the simplex on a set \(A\) degenerates.
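Since the strength depends only on pairwise distances, it can be evaluated through the Cayley-Menger determinant. A minimal sketch, checked on the \(3\)-\(4\)-\(5\) triangle where Heron's form gives \(\sigma=1/6\):

```python
import numpy as np
from math import factorial
from itertools import combinations

def strength(points):
    """sigma(A) = V^2(A) / p(A)^(2n-1) of Definition 4.3 for n+1 points in R^n."""
    A = np.asarray(points, float)
    n = A.shape[1]
    p = 0.5 * sum(np.linalg.norm(a - b) for a, b in combinations(A, 2))
    m = len(A)
    B = np.ones((m + 1, m + 1)); B[0, 0] = 0.0            # bordered matrix of squared distances
    B[1:, 1:] = np.sum((A[:, None] - A[None, :]) ** 2, axis=-1)
    V2 = (-1) ** (n + 1) * np.linalg.det(B) / (2 ** n * factorial(n) ** 2)   # Cayley-Menger
    return V2 / p ** (2 * n - 1)

print(strength([(0, 0), (4, 0), (0, 3)]))   # 0.1666... = 1/6
```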
Theorem 4.7 will need the continuity of \(s\sigma(A)\), when a sign \(s\in\{\pm 1\}\) from a bottom row of \(\operatorname{ORD}\) discontinuously changes while passing through a degenerate set \(A\). The proof of the continuity of \(\sigma(A)\) in Theorem 4.4 gives an explicit upper bound for a Lipschitz constant \(c_{n}\) below.
**Theorem 4.4** (Lipschitz continuity of the strength \(\sigma\)).: _Let a cloud \(A^{\prime}\) be obtained from another \((n+1)\)-point cloud \(A\subset\mathbb{R}^{n}\) by perturbing every point within its \(\varepsilon\)-neighborhood. The strength \(\sigma(A)\) from Definition 4.3 is Lipschitz continuous so that \(|\sigma(A^{\prime})-\sigma(A)|\leq 2\varepsilon c_{n}\) for a constant \(c_{n}\)._
**Example 4.5** (strength \(\sigma(A)\) and its upper bounds).: _[_40_, Theorem 4.2]_ _proves upper bounds for the Lipschitz constant of the strength: \(c_{2}=2\sqrt{3}\), \(c_{3}\approx 0.43\), \(c_{4}\approx 0.01\), which quickly tend to 0 due to the 'curse of dimensionality'. The plots in Fig. 4 illustrate that the strength \(\sigma()\) behaves smoothly in the \(x\)-coordinate of a vertex and its derivative \(|\frac{\partial\sigma}{\partial x}|\) is much smaller than the proved bounds \(c_{n}\) above._
The strength \(\sigma(A)\) from Definition 4.3 will take care of extra signs in ORDs and allows us to prove the analogue of Lemma 3.7 for a similar time complexity with \(h=n\).
**Lemma 4.6** (metric on \(\operatorname{OCDs}\)).: _Using the strength \(\sigma\) from Definition 4.3, we consider the bottleneck distance \(W_{\infty}\) on the set of permutable \(m-n+1\) columns of \(M(C;A\cup\{0\})\) as on the set of \(m-n+1\) unlabeled points \(\bigg{(}v,\frac{s}{c_{n}}\sigma(A\cup\{0,q\})\bigg{)}\in\mathbb{R}^{n+1}\). For another \(\operatorname{OCD}^{\prime}=[D(A^{\prime}\cup\{0\});M(C^{\prime};A^{\prime} \cup\{0\})]\) and any permutation \(\xi\in S_{n-1}\) of indices \(1,\ldots,n-1\) acting on \(D(A\cup\{0\})\) and the first \(n-1\) rows of \(M(C;A\cup\{0\})\), set \(d_{o}(\xi)=\max\{L,W\}\),_
\[\text{where }L=L_{\infty}\Big{(}\xi(D(A\cup\{0\})),D(A^{\prime}\cup\{0\}) \Big{)},\]
\[W=W_{\infty}\Big{(}\xi(M(C;A\cup\{0\})),M(C^{\prime};A^{\prime}\cup\{0\}) \Big{)}.\]
_Then \(M_{\infty}(\operatorname{OCD},\operatorname{OCD}^{\prime})=\min_{\xi\in S_{n- 1}}d_{o}(\xi)\) satisfies all metric axioms on Oriented Centered Distributions (\(\operatorname{OCDs}\)) and is computed in time \(O((n-1)!(n^{2}+m^{1.5}\log^{n}m))\)._
The coefficient \(\frac{1}{c_{n}}\) normalizes the Lipschitz constant \(c_{n}\) of \(\sigma\) to \(1\) in line with changes of distances by at most \(2\varepsilon\) when points are perturbed within their \(\varepsilon\)-neighborhoods. An equality \(\operatorname{SCD}(C)=\operatorname{SCD}(C^{\prime})\) is interpreted as a bijection between unordered sets \(\operatorname{SCD}(C)\to\operatorname{SCD}(C^{\prime})\) matching all \(\operatorname{OCDs}\), which is best detected by checking whether the metrics in Theorem 4.7 between these \(\operatorname{SCDs}\) equal 0.
**Theorem 4.7** (completeness and continuity of \(\operatorname{SCD}\)).: _(a) The Simplexwise Centered Distribution \(\operatorname{SCD}(C)\) in Definition 4.1 is a complete isometry invariant of clouds \(C\subset\mathbb{R}^{n}\) of \(m\) unlabeled points with a center of mass at the origin \(0\in\mathbb{R}^{n}\), and can be computed in time \(O(m^{n}/(n-4)!)\)._
_So any clouds \(C,C^{\prime}\subset\mathbb{R}^{n}\) are related by rigid motion (isometry, respectively) if and only if \(\operatorname{SCD}(C)=\operatorname{SCD}(C^{\prime})\) (\(\operatorname{SCD}(C)\) equals \(\operatorname{SCD}(C^{\prime})\) or its mirror image \(\operatorname{\overline{SCD}}(C^{\prime})\), respectively). For any \(m\)-point clouds \(C,C^{\prime}\subset\mathbb{R}^{n}\), let \(\operatorname{SCD}(C)\) and \(\operatorname{SCD}(C^{\prime})\) consist of \(k=\binom{m}{n-1}\)\(\operatorname{OCDs}\)._
_(b) For the \(k\times k\) matrix of costs computed by the metric \(M_{\infty}\) between \(\operatorname{OCDs}\) in \(\operatorname{SCD}(C)\) and \(\operatorname{SCD}(C^{\prime})\), \(\operatorname{LAC}\) from Definition 3.8 satisfies all metric axioms on \(\operatorname{SCDs}\) and needs time \(O((n-1)!(n^{2}+m^{1.5}\log^{n}m)k^{2}+k^{3}\log k)\)._
_(c) Let \(\operatorname{SCDs}\) have a maximum size \(l\leq k\) after collapsing identical \(\operatorname{OCDs}\). Then \(\operatorname{EMD}\) from Definition 3.9 satisfies all metric axioms on \(\operatorname{SCDs}\) and can be computed in time \(O((n-1)!(n^{2}+m^{1.5}\log^{n}m)l^{2}+l^{3}\log l)\)._
_(d) Let \(C^{\prime}\) be obtained from a cloud \(C\subset\mathbb{R}^{n}\) by perturbing each point within its \(\varepsilon\)-neighborhood. Then \(\operatorname{SCD}(C)\) changes by at most \(2\varepsilon\) in the \(\operatorname{LAC}\) and \(\operatorname{EMD}\) metrics._
If we estimate \(l\leq k=\binom{m}{n-1}=m(m-1)\ldots(m-n+2)/(n-1)!\) as \(O(m^{n-1}/(n-1)!)\), Theorem 4.7(b,c) gives time \(O(n(m^{n-1}/(n-1)!)^{3}\log m)\) for metrics on \(\operatorname{SCDs}\), which is \(O(m^{3}\log m)\) for \(n=2\), and \(O(m^{6}\log m)\) for \(n=3\).
Though the above time estimates are very rough upper bounds, the time \(O(m^{3}\log m)\) in \(\mathbb{R}^{2}\) is faster than the only previously known time \(O(m^{5}\log m)\) for comparing \(m\)-point clouds by the Hausdorff distance minimized over isometries [17].
**Definition 4.8** (Centered Distance Moments \(\operatorname{CDM}\)).: _For any \(m\)-point cloud \(C\subset\mathbb{R}^{n}\), let \(A\subset C\) be a subset of \(n-1\) unordered points. The Centered Interpoint Distance list \(\operatorname{CID}(A)\) is the increasing list of all \(\frac{(n-1)(n-2)}{2}\) pairwise distances between points of \(A\), followed by \(n-1\) increasing distances from \(A\) to the origin \(0\). For each column of the \((n+1)\times(m-n+1)\) matrix \(M(C;A\cup\{0\})\) in Definition 4.1, compute the average of the first \(n-1\) distances. Write these averages in increasing order, append the list of increasing distances \(|q-0|\) from the \(n\)-th row of \(M(C;A\cup\{0\})\), and also append the vector of increasing values of \(\frac{s}{c_{n}}\sigma(A\cup\{0,q\})\) taking signs \(s\) from the \((n+1)\)-st row of \(M(C;A\cup\{0\})\). Let \(\vec{M}(C;A)\in\mathbb{R}^{3(m-n+1)}\) be the final vector._
_The pair \([\operatorname{CID}(A);\vec{M}(C;A)]\) is the Average Centered \(\operatorname{Vector}\operatorname{ACV}(C;A)\) considered as a vector of length \(\frac{n(n-1)}{2}+3(m-n+1)\). The unordered set of \(\operatorname{ACV}(C;A)\) for all \(\binom{m}{n-1}\) unordered subsets \(A\subset C\) is the Average Centered Distribution \(\operatorname{ACD}(C)\). The Centered Distance Moment \(\operatorname{CDM}(C;l)\) is the \(l\)-th (standardized for \(l\geq 3\)) moment of \(\operatorname{ACD}(C)\) considered as a probability distribution of \(\binom{m}{n-1}\) vectors, separately for each coordinate._
Figure 4: The strength \(\sigma\) (solid curve) and its derivative \(\frac{\partial\sigma}{\partial x}\) (dashed curve) in the \(x\)-coordinate of a point from \(A\) were averaged over 3000 random triangles (top) and tetrahedra (bottom).
**Example 4.9** (\(\mathrm{CDM}\) for clouds in Fig. 3).: _(a) For \(n=2\) and the cloud \(R\subset\mathbb{R}^{2}\) of \(m=3\) vertices \(p_{1}=(0,0)\), \(p_{2}=(4,0)\), \(p_{3}=(0,3)\) of the right-angled triangle in Fig. 3 (middle), we continue Example 4.2(a) and flatten \(\mathrm{OCD}(R;p_{1})=[0,\left(\begin{array}{cc}4&3\\ 4&3\\ 0&0\end{array}\right)]\) into the vector \(\mathrm{ACV}(R;p_{1})=[0;3,4;3,4;0,0]\) of length \(\frac{n(n-1)}{2}+3(m-n+1)=7\), whose four parts (\(1+2+2+2=7\)) are in increasing order, similarly for \(p_{2},p_{3}\). The Average Centered Distribution can be written as a \(3\times 7\) matrix with unordered rows: \(\mathrm{ACD}(R)=\left(\begin{array}{ccccc}0&3&4&3&4&0&0\\ 4&4&5&0&3&0&-6/c_{2}\\ 3&3&5&0&4&0&6/c_{2}\end{array}\right)\). The area of the triangle on \(R\) equals \(6\) and can be normalized by \(c_{2}=2\sqrt{3}\) to get \(6/c_{2}=\sqrt{3}\), see [40, section 4]. The 1st moment is \(\mathrm{CDM}(R;1)=\frac{1}{3}(7;10,14;3,11;0)\)._
_(b) For \(n=2\) and the cloud \(S\subset\mathbb{R}^{2}\) of \(m=4\) vertices of the square in Fig. 3 (right), Example 4.2(a) computed \(\mathrm{SCD}(S)\) as one \(\mathrm{OCD}=[1,\left(\begin{array}{ccc}\sqrt{2}&\sqrt{2}&2\\ 1&1&1\\ -&+&0\end{array}\right)]\), which flattens to \(\mathrm{ACV}=(1;\sqrt{2},\sqrt{2},2;1,1,1;-\frac{1}{2},\frac{1}{2},0)=\mathrm{ACD}(S)=\mathrm{CDM}(S;1)\in\mathbb{R}^{10}\), where \(\frac{1}{2}\) is the area of the triangle on the vertices \((0,0),(1,0),(0,1)\)._
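As a quick sanity check of the arithmetic quoted in Example 4.9(a), the sketch below recomputes the distance lists and the normalized triangle area. It only reproduces numbers stated in the example; the variable names are ours, and it does not implement Definitions 4.1 or 4.3 in general.

```python
import numpy as np

# Vertices of the right-angled triangle R from Example 4.9(a).
p1, p2, p3 = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 3.0])

# ACV(R; p1) = [0; 3, 4; 3, 4; 0, 0]: distance of p1 to the origin, then sorted distance lists.
print(np.linalg.norm(p1))                                            # 0.0
print(sorted([np.linalg.norm(p2 - p1), np.linalg.norm(p3 - p1)]))    # [3.0, 4.0]
print(sorted([np.linalg.norm(p2), np.linalg.norm(p3)]))              # [3.0, 4.0]

# The triangle on R has area 6; normalized by c_2 = 2*sqrt(3) it gives 6/c_2 = sqrt(3),
# the magnitude of the signed entries -6/c_2 and +6/c_2 quoted in ACD(R).
area = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0]))
c2 = 2 * np.sqrt(3)
print(area, area / c2, np.sqrt(3))                                    # 6.0 1.732... 1.732...
```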
**Corollary 4.10** (time for continuous metrics on \(\mathrm{CDMs}\)).: _For any cloud \(C\subset\mathbb{R}^{n}\) of \(m\) unlabeled points, the Centered Distance Moment \(\mathrm{CDM}(C;l)\) in Definition 4.8 is computed in time \(O(m^{n}/(n-4)!)\). The metric \(L_{\infty}\) on \(\mathrm{CDMs}\) needs \(O(n^{2}+m)\) time and \(\mathrm{EMD}\big{(}\mathrm{SCD}(C),\mathrm{SCD}(C^{\prime})\big{)}\geq| \mathrm{CDM}(C;1)-\mathrm{CDM}(C^{\prime};1)|_{\infty}\) holds._
## 5 Experiments and discussion of future work
This paper advocates a scientific approach to any data exemplified by Problem 1.1, where rigid motion on clouds can be replaced by another equivalence on other data. Scientific principles such as axioms should always be respected. Only the first coincidence axiom in (1.1b) guarantees no duplicate data. If the triangle inequality fails with any additive error, the results of clustering can be pre-determined [52].
The notorious \(m!\) challenge of \(m\) _unlabeled_ points in Problem 1.1 was solved in \(\mathbb{R}^{n}\) by Theorem 4.7, also up to rigid motion, by using the novel _strength_ of a simplex to smooth the signs of determinants thanks to the hard Theorem 4.4.
The results above sufficiently justify re-focusing future efforts from experimental attempts at Problem 1.1 to higher level tasks such as predicting properties of rigid objects, e.g. crystalline materials, using the complete invariants with _no false negatives_ and _no false positives_ for all possible data since _no experiments_ can beat the proved 100% guarantee.
To tackle the limitation of comparing only clouds having a fixed number \(m\) of points, the Earth Mover's Distance (\(\mathrm{EMD}\)) can continuously compare any distributions (\(\mathrm{SDD}\) or \(\mathrm{SCD}\)) of different sizes. Using \(\mathrm{EMD}\) instead of the bottleneck distance \(W_{\infty}\) on the \(m-h\) (or \(m-n+1\)) columns of matrices in Definitions 3.3 and 4.1 increases the time from \(O(m^{1.5}\log m)\) to \(O(m^{3}\log m)\), but the total time remains the same due to a near cubic time in the last step.
The running time in real applications is smaller for several reasons. First, the shape (isometry class) of any rigid body in \(\mathbb{R}^{3}\) is determined by only \(m=4\) labeled points in general position. Even when points are unlabeled, dozens of corners or feature points suffice to represent a rigid shape well enough. Second, the key size \(l\) (number of distinct Oriented Centered Distributions) in Theorem 4.7 is often smaller than \(m\), especially for symmetric objects, see \(l=1<m=4\) in Example 4.2. The \(\mathrm{SCD}\) invariants stand out from the others due to their completeness and continuity.
The past work [72, 73] used the simpler Pointwise Distance Distribution (PDD) to complete 200B+ pairwise comparisons of all 660K+ periodic crystals in the world's largest database of real materials. This experiment took only a couple of days on a modest desktop and established the _Crystal Isometry Principle_ saying that any real periodic crystal is uniquely determined by the geometry of its atomic centers without chemical elements. So the type of any atom is provably reconstructable from distances to atomic neighbors.
The new invariants allow us to go deeper and compare atomic clouds from higher level periodic crystals. Fig. 5 visualizes all 300K+ atomic clouds extracted from all 10K+ crystalline drugs in the Cambridge Structural Database (CSD) by using \(\mathrm{SDV}\) invariants for \(5+1\) atoms including the central one. Future maps will use stronger invariants.
This research was supported by the Royal Academy Engineering fellowship IF2122/186, and EPSRC grants EP/R018472/1, EP/X018474/1. We thank all members of the Data Science Theory and Applications group at Liverpool (UK) and all reviewers for their helpful suggestions.
Figure 5: Two principal directions of \(\mathrm{SDVs}\) for all 300K+ atomic clouds from all 10K+ drugs in the CSD, colored by 25 elements. |
Rigid structures such as cars and other solid objects are often represented as finite clouds of unlabeled points. The most natural equivalence on these point clouds is rigid motion or isometry, which preserves all inter-point distances. Rigid patterns of point clouds can be reliably compared only by complete isometry invariants, also called equivariant descriptors, that avoid false negatives (different descriptions for isometric clouds) and false positives (the same description for non-isometric clouds). Noise and motion in data motivate the search for invariants that are continuous under perturbations of points. We propose the first continuous and complete invariant of unlabeled clouds in Euclidean space. For a fixed dimension, the new metric for this invariant is computable in time polynomial in the number of points.
2307.08145 | Self-Attention Based Generative Adversarial Networks For Unsupervised
Video Summarization | In this paper, we study the problem of producing a comprehensive video
summary following an unsupervised approach that relies on adversarial learning.
We build on a popular method where a Generative Adversarial Network (GAN) is
trained to create representative summaries, indistinguishable from the
originals. The introduction of the attention mechanism into the architecture
for the selection, encoding and decoding of video frames, shows the efficacy of
self-attention and transformer in modeling temporal relationships for video
summarization. We propose the SUM-GAN-AED model that uses a self-attention
mechanism for frame selection, combined with LSTMs for encoding and decoding.
We evaluate the performance of the SUM-GAN-AED model on the SumMe, TVSum and
COGNIMUSE datasets. Experimental results indicate that using a self-attention
mechanism as the frame selection mechanism outperforms the state-of-the-art on
SumMe and leads to comparable to state-of-the-art performance on TVSum and
COGNIMUSE. | Maria Nektaria Minaidi, Charilaos Papaioannou, Alexandros Potamianos | 2023-07-16T19:56:13 | http://arxiv.org/abs/2307.08145v1 | # Self-Attention Based Generative Adversarial Networks For Unsupervised Video Summarization
###### Abstract
In this paper, we study the problem of producing a comprehensive video summary following an unsupervised approach that relies on adversarial learning. We build on a popular method where a Generative Adversarial Network (GAN) is trained to create representative summaries, indistinguishable from the originals. The introduction of the attention mechanism into the architecture for the selection, encoding and decoding of video frames, shows the efficacy of self-attention and transformer in modeling temporal relationships for video summarization. We propose the SUM-GAN-AED model that uses a self-attention mechanism for frame selection, combined with LSTMs for encoding and decoding. We evaluate the performance of the SUM-GAN-AED model on the SumMe, TVSum and COGNIMUSE datasets. Experimental results indicate that using a self-attention mechanism as the frame selection mechanism outperforms the state-of-the-art on SumMe and leads to comparable to state-of-the-art performance on TVSum and COGNIMUSE.
Unsupervised Video Summarization, Generative Adversarial Networks, key-frame extraction, Long Short-Term Memory, Deep Neural Networks
## I Introduction
The amount of video data produced on a daily basis is growing at an exponential rate. Given this growth, users increasingly require assistance for selecting, browsing and consuming such extensive collections of videos. Video summarization aims to provide a short visual summary of an original, full-length video, that encapsulates the flow of the story and the most important segments of the video. The goal is for the produced summary to retain only the significant parts and contain as little unnecessary content as possible [1].
Several methods have been proposed to tackle video summarization using information extracted from the audio, video and text modalities.
Early approaches rely on the statistical processing of low-level video, audio and text features for assessing frame similarity or performing clustering-based key-frame selection [2], while the detection of the salient parts of the video is achieved using motion descriptors, color histograms and eigen-features.
Given the recent growth of neural network architectures, many deep learning based video summarization frameworks have been proposed over the last years. Deep-learning based video summarization algorithms typically represent visual content as feature vector encodings of video frames extracted from Convolutional Neural Networks (CNNs) [1]. One of the challenges in video summarization is learning the complex temporal dependencies among the video frames. Early temporal modeling approaches used Long Short-Term Memory (LSTM) units [3], or in general, sequence-to-sequence models such as Recurrent Neural Networks (RNNs) [4, 5]. The introduction of transformers allowed for parallel computation, as well as, better modeling of the long-range temporal dependencies among the video frames [6]. Generative Adversarial Networks (GANs) [7] have also been used in video summarization algorithms. In [8], an adversarial framework is proposed consisting of a summarizer and a discriminator, both of which were based on LSTMs. GAN-based video summarization algorithms have been shown to produce state-of-the-art results [1]. Recently attention mechanisms have appeared to be effective for identifying the important parts of videos [1, 9, 10, 11].
In this work, we tackle video summarization as a key-segment selection problem. We adopt a GAN-based video summarization approach and build upon the SUM-GAN model proposed in [8].
Motivated by the limited memorization, long-range and temporal modeling capacity of the LSTM, as well as the success of the multi-head attention and transformer architectures in overcoming these drawbacks [6, 12, 13], we extend SUM-GAN by integrating attention mechanisms in several parts of the architecture. We perform an ablation study to identify the importance of better temporal modeling in the frame selection, encoder and decoder of SUM-GAN. We propose the SUM-GAN-AED model that relies on a pure attentional mechanism as the frame selector, while it retains the LSTM module as the encoder and decoder.
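As a purely illustrative sketch (not the authors' SUM-GAN-AED code), the snippet below shows how a self-attention-based frame selector of the kind described here could score frames from precomputed CNN features. The class name, feature dimension, and scoring head are our assumptions.

```python
import torch
import torch.nn as nn

class AttentionFrameSelector(nn.Module):
    """Hypothetical self-attention frame selector: one importance score per frame."""
    def __init__(self, feat_dim=1024, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.score = nn.Sequential(nn.LayerNorm(feat_dim), nn.Linear(feat_dim, 1), nn.Sigmoid())

    def forward(self, frame_feats):              # frame_feats: (batch, n_frames, feat_dim)
        ctx, _ = self.attn(frame_feats, frame_feats, frame_feats)
        return self.score(ctx).squeeze(-1)       # (batch, n_frames) scores in [0, 1]

feats = torch.randn(2, 120, 1024)                # stand-in for per-frame CNN feature vectors
scores = AttentionFrameSelector()(feats)
print(scores.shape)                              # torch.Size([2, 120])
```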
Our contributions include: 1) investigation of the efficacy of the integration of transformers into different parts of the SUM-GAN architecture, namely using a transformer for the frame selector (SUM-GAN-SAT), the encoder (SUM-GAN-STD, SUM-GAN-STSED) and the encoder-decoder (SUM-GAN-SAT, SUM-GAN-ST), 2) based on the above results, we propose the use of a self-attention mechanism for the frame selection, while retaining the LSTM architecture for the encoder and the decoder (SUM-GAN-AED), 3) the evaluation of the proposed models on the SumMe, TVSum and COGNIMUSE datasets, achieving state-of-the-art methods performance on | In this paper, we study the problem of producing a comprehensive video summary following an unsupervised approach. We train a Generative Adversarial Network (GAN) to create representative summaries that are indistinguishable from the originals. Introducing the attention mechanism into the architecture for the selection, encoding and decoding of video frames shows the effectiveness of self-attention and the transformer in modeling temporal relationships for video summarization. We propose the SUM-GAN-AED model, which combines a self-attention mechanism for frame selection with LSTMs for encoding and decoding. We evaluated the performance of the SUM-GAN-AED model on the SumMe, TVSum and COGNIMUSE datasets. Experimental results show that using a self-attention mechanism for frame selection outperforms the state of the art on SumMe and yields performance comparable to the state of the art on TVSum and COGNIMUSE.
2304.11144 | Multifractal Properties of Tribonacci Chains | We introduce two 1D tight-binding models based on the Tribonacci
substitution, the hopping and on-site Tribonacci chains, which generalize the
Fibonacci chain. For both hopping and on-site models, a perturbative real-space
renormalization procedure is developed. We show that the two models are
equivalent at the fixed point of the renormalization group flow, and that the
renormalization procedure naturally gives the Local Resonator Modes.
Additionally, the Rauzy fractal, inherent to the Tribonacci substitution, is
shown to serve as the analog of conumbering for the Tribonacci chain. The
renormalization procedure is used to repeatedly subdivide the Rauzy fractal
into copies of itself, which can be used to describe the eigenstates in terms
of Local Resonator Modes. Finally, the multifractal dimensions of the energy
spectrum and eigenstates of the hopping Tribonacci chain are computed, from
which it can be concluded that the Tribonacci chains are critical. | Julius Krebbekx, Anouar Moustaj, Karma Dajani, Cristiane Morais Smith | 2023-04-21T17:40:46 | http://arxiv.org/abs/2304.11144v2 | # Multifractal Properties of Tribonacci Chains
###### Abstract
We introduce two 1D tight-binding models based on the Tribonacci substitution, the hopping and on-site Tribonacci chains, which generalize the Fibonacci chain. For both hopping and on-site models, a perturbative real-space renormalization procedure is developed. We show that the two models are equivalent at the fixed point of the renormalization group flow, and that the renormalization procedure naturally gives the Local Resonator Modes. Additionally, the Rauzy fractal, inherent to the Tribonacci substitution, is shown to serve as the analog of conumbering for the Tribonacci chain. The renormalization procedure is used to repeatedly subdivide the Rauzy fractal into copies of itself, which can be used to describe the eigenstates in terms of Local Resonator Modes. Finally, the multifractal dimensions of the energy spectrum and eigenstates of the hopping Tribonacci chain are computed, from which it can be concluded that the Tribonacci chains are critical.
Keywords: Aperiodic; Quasicrystal; Multifractal Spectrum; Tribonacci; Rauzy Fractal. PACS numbers: 03.65.-b, 03.65.Jb
## I Introduction
The description of electrons in solids using Bloch's theorem has allowed for a profound understanding of the electronic band structure of regular crystalline materials [1]. The discovery of quasicrystals [2], aperiodic structures that break translational symmetry, has pushed the field forward. The Penrose tilings [3] or the aperiodic mono-tile discovered recently by Smith et al. [4] are some of the typical examples that have fascinated physicists and mathematicians for years. Quasi-crystalline lattices have been also experimentally realized using different quantum-simulator platforms, such as ultracold atoms [5] or photonics [6].
The advent of topological insulators has reiterated the importance of periodicity in solids because translation invariance is at the core of the topological classification of these materials [7; 8; 9]. It remains an open question how the notion of topology translates to aperiodic structures such as quasicrystals, where translation invariance is often replaced by scale invariance [10]. The topological aspects of quasicrystals have been recently investigated [11; 12; 13; 14; 10], but methods are often tailored to each model, and a general framework to study topology in these systems is lacking.
Arguably the most investigated quasicrystal is the Fibonacci chain [15], a one-dimensional model based on the Su-Schrieffer-Heeger (SSH) model [16]. The latter is a tight-binding model in which alternating weak and strong hopping parameters lead to a topological or trivial phase, depending on whether the last bond in the chain corresponds to a weak or strong hopping, respectively. The Fibonacci chain is a natural extension of the SSH model to the aperiodic domain [17], in which the weak and strong hopping parameters are distributed according to a Fibonacci word. This 1D tight-binding chain hosts many interesting properties, such as a multifractal energy spectrum and eigenstates [18; 19; 20]. In addition, it was shown to be equivalent to the Harper model [21], from which it inherits its topological properties. In particular, a description of the system in terms of conumbers [22] has revealed hidden symmetries in Hilbert space and allowed for a systematic prediction of the influence of random disorder based on a renormalization group (RG) scheme [17]. The interpretation of the system in terms of local symmetries has also led to a more profound understanding of its physical properties [23].
In this paper, we go beyond the realm of dimerized models, such as the SSH and Fibonacci chain, and introduce a quantum chain based on the Tribonacci substitution. Two tight-binding chains, the hopping Tribonacci Chain (HTC) and the on-site Tribonacci Chain (OTC), are defined analogously to the Fibonacci chain. These chains are closely linked to the Rauzy fractal, a well-known compact domain with fractal boundary [24]. An RG scheme for the HTC and OTC is developed along the lines proposed by Niu and Nori [17]. This allows for the same interpretation of the spectrum as a multifractal set as for the Fibonacci chain [18]. The RG scheme is also used to render the HTC and OTC equivalent at the RG fixed point. We show how the Rauzy fractal orders the lattice points according to their local environment, in analogy with the conumbering scheme. Furthermore, the RG procedure provides a natural way to enumerate all structures in the Local Resonator Mode (LRM) framework [23]. Finally, we compute the multifractal dimensions of the energy spectrum and eigenstates of the HTC, and compare them with the Fibonacci chain. From these results, it can be concluded that the Tribonacci chains are critical in terms of Anderson localization.
The paper is structured as follows. In Section II we introduce the HTC, the OTC, and all elements that are needed to define the model, such as the Tribonacci word and the Rauzy fractal. Section III is devoted to the RG scheme for the HTC and OTC, and how the two models can be considered equivalent in the infinite RG limit. In Section IV, the Rauzy fractal is proposed as the analog of conumbering for the HTC and OTC. Multifractal properties of the spectrum and wavefunction of the HTC are computed in Section V, and compared to the Fibonacci chain. Finally, the conclusion and outlook are presented in Section VI.
## II The model
In this section, we introduce the elements needed to define the Tribonacci chain. The main element is the Tribonacci word, which determines the quasiperiodic modulation in the tight-binding chains.
### The Tribonacci Word
#### ii.1.1 The Tribonacci Sequence
Analogous to the Fibonacci sequence, one can define the Tribonacci sequence recursively as
\[T_{N+1}=T_{N}+T_{N-1}+T_{N-2}, \tag{1}\]
with initial values \(T_{-2}=0,T_{-1}=T_{0}=1\). The Tribonacci constant \(\beta\), the analog of the golden ratio, is obtained as the limit
\[\beta=\lim_{N\to\infty}\frac{T_{N+1}}{T_{N}}\approx 1.8392\ldots, \tag{2}\]
which is also the unique real root of the polynomial
\[P(x)=x^{3}-x^{2}-x-1. \tag{3}\]
The other two roots \(\omega,\bar{\omega}\) are complex and satisfy \(|\omega|<1\).
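A minimal numerical illustration of Eqs. (1)-(3); the helper name is ours:

```python
import numpy as np

def tribonacci_numbers(count):
    """Return [T_{-2}, T_{-1}, T_0, T_1, ...] using Eq. (1) with T_{-2}=0, T_{-1}=T_0=1."""
    t = [0, 1, 1]
    while len(t) < count:
        t.append(t[-1] + t[-2] + t[-3])
    return t

T = tribonacci_numbers(30)
print(T[-1] / T[-2])                     # ratios converge to beta ~ 1.8392867...
roots = np.roots([1, -1, -1, -1])        # roots of P(x) = x^3 - x^2 - x - 1
print(roots.real.max())                  # the unique real root beta
print(np.sort(np.abs(roots))[:2])        # the complex pair omega, bar(omega) has modulus < 1
```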
#### ii.1.2 The Tribonacci Substitution
The Tribonacci substitution is the substitution \(\rho\) on the alphabet \(\mathcal{A}=\{0,1,2\}\) that reads
\[\rho:\begin{cases}0\mapsto 01,\\ 1\mapsto 02,\\ 2\mapsto 0.\end{cases} \tag{4}\]
The Tribonacci word is obtained by repeatedly applying \(\rho\) to the seed \(W_{0}=0\). The resulting word after \(N\) applications \(W_{N}:=\rho^{N}(W_{0})\) is called the \(N\)th Tribonacci approximant. The Tribonacci word is the limit \(W:=\lim_{N\to\infty}W_{N}\). The first few approximants read
\[W_{0} =0,\] \[W_{1} =01,\] \[W_{2} =0102,\] \[W_{3} =0102010,\] \[W_{4} =0102010010201.\]
An alternative way to generate the Tribonacci word is by concatenating the previous three approximants
\[W_{N+1}=W_{N}W_{N-1}W_{N-2}, \tag{5}\]
which is reminiscent of Eq. (1). Therefore, the Tribonacci constant is equivalently obtained by the limit
\[\beta=\lim_{N\to\infty}\frac{|W_{N+1}|}{|W_{N}|},\]
where \(|\cdot|\) denotes the length of the word.
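The substitution and the concatenation rule of Eq. (5) are easy to check numerically; the following sketch (hypothetical helper names) generates the approximants used throughout this section.

```python
def tribonacci_substitution(word):
    """Apply rho of Eq. (4): 0 -> 01, 1 -> 02, 2 -> 0, to a word over the alphabet {0, 1, 2}."""
    rules = {'0': '01', '1': '02', '2': '0'}
    return ''.join(rules[c] for c in word)

def tribonacci_approximant(N):
    """Return the N-th approximant W_N = rho^N(0) as a string."""
    w = '0'
    for _ in range(N):
        w = tribonacci_substitution(w)
    return w

W = [tribonacci_approximant(N) for N in range(8)]
print(W[4])                                # 0102010010201
assert W[5] == W[4] + W[3] + W[2]          # concatenation rule, Eq. (5)
print([len(w) for w in W])                 # Tribonacci numbers 1, 2, 4, 7, 13, 24, 44, 81
```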
Another important tool when dealing with any substitution \(\rho\) is the incidence matrix \(\mathbf{M}=[m_{ij}]\), where \(m_{ij}=|\rho(j)|_{i}\) and \(|w|_{k}\) denotes the number of occurrences of the letter \(k\) in the word \(w\). The incidence matrix is used in the relation
\[\mathbf{N}^{(N+1)}=\mathbf{M}\cdot\mathbf{N}^{(N)},\]
where \(\mathbf{N}^{(N)}:=(|W_{N}|_{0},|W_{N}|_{1},|W_{N}|_{2})^{T}\) is the vector that counts how often each letter occurs in the approximant \(W_{N}\). If \(\mathbf{M}\) has precisely one eigenvalue \(\lambda\) with \(|\lambda|>1\) and all other eigenvalues have modulus strictly less than 1, the substitution is called Pisot. The incidence matrix for the Tribonacci substitution and its characteristic polynomial read
\[\mathbf{M}=\begin{pmatrix}1&1&1\\ 1&0&0\\ 0&1&0\end{pmatrix},\qquad\det(\lambda\mathbf{I}-\mathbf{M})=\lambda^{3}- \lambda^{2}-\lambda-1, \tag{6}\]
which is identical to the Tribonacci polynomial Eq. (3). Hence, it is immediate that the Tribonacci substitution is Pisot. The eigenvalues are \(\lambda=\beta>1\) and \(\lambda=\omega,\bar{\omega}\) where \(|\omega|<1\).
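A short check of Eq. (6), reusing the hypothetical tribonacci_approximant helper sketched above:

```python
import numpy as np

M = np.array([[1, 1, 1],
              [1, 0, 0],
              [0, 1, 0]])                     # incidence matrix, Eq. (6)
print(np.sort(np.abs(np.linalg.eigvals(M))))  # [|omega|, |omega|, beta]: Pisot, only beta > 1

counts = lambda w: np.array([w.count('0'), w.count('1'), w.count('2')])
W4, W5 = tribonacci_approximant(4), tribonacci_approximant(5)
assert np.array_equal(counts(W5), M @ counts(W4))   # letter counts satisfy N^(N+1) = M . N^(N)
```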
One can also define the bi-infinite Tribonacci word \(W|W\) in a consistent way (see Ch. 4 of Ref. [25]). Take the seed \(\rho^{-1}(0)|0=2|0\) and apply \(\sigma=\rho^{3}\) infinitely often to the seed. This results in the approximants \(W_{3N-1}|W_{3N}\) and the limit
\[W|W:=\lim_{N\to\infty}W_{3N-1}|W_{3N}=\cdots w_{-2}w_{-1}|w_{0}w_{1}\cdots. \tag{7}\]
#### ii.1.3 The Rauzy Fractal
In 1982, Gerard Rauzy used the Tribonacci substitution to define a 2D compact domain with fractal boundary, called the Rauzy fractal [24] (see Fig. 1). The Rauzy fractal is obtained as a subset of \(\mathbb{C}\) via the valuation map. Let \([W]_{m}\) denote the first \(m\) letters of the Tribonacci word and take the left eigenvector \(\mathbf{v}=(v_{0},v_{1},v_{2})\) of \(\mathbf{M}\) in Eq. (6), corresponding to the eigenvalue \(\omega\). Then, the \(m\)th point in the Rauzy fractal is given by
\[z_{m}=E([W]_{m})=\sum_{i\in\{0,1,2\}}|[W]_{m}|_{i}v_{i}\in\mathbb{C}, \tag{8}\]
where \(E\) is the valuation map and \(m\geq 0\). Enumerating the letters of \(W=w_{0}w_{1}w_{2}\cdots\), each point can be assigned a color defined by the \(w_{m}\in\{0,1,2\}\), the \((m+1)\)th letter [26]. The Rauzy fractal is the compact set \(\mathfrak{R}=\overline{\{z_{m}\mid m\geq 0\}}\).
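A sketch of the valuation-map construction of Eq. (8), reusing the helpers above; the function name is ours and the plot only aims at the qualitative picture of Fig. 1.

```python
import numpy as np
import matplotlib.pyplot as plt

M_INC = np.array([[1, 1, 1], [1, 0, 0], [0, 1, 0]])

def rauzy_points(word):
    """z_m of Eq. (8): valuation of the length-m prefixes of `word` with the left
    eigenvector of the incidence matrix for the contracting eigenvalue omega."""
    vals, vecs = np.linalg.eig(M_INC.T)                       # left eigenvectors of M_INC
    v = vecs[:, np.argmax((np.abs(vals) < 1) & (vals.imag > 0))]
    z, counts = np.empty(len(word), dtype=complex), np.zeros(3)
    for m, letter in enumerate(word):
        z[m] = counts @ v                                     # depends only on the prefix [W]_m
        counts[int(letter)] += 1
    return z

W14 = tribonacci_approximant(14)                              # T_14 = 5768 letters, as in Fig. 1
z = rauzy_points(W14)
plt.scatter(z.real, z.imag, s=2, c=[['red', 'green', 'blue'][int(c)] for c in W14])
plt.gca().set_aspect('equal')
plt.show()
```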
Another way to obtain Fig. 1 is by starting at the origin in \(\mathbb{R}^{3}\), and for each letter in \(W\), taking a unit step in the \(x,y\) or \(z\)-direction if the letter is \(0\), \(1\) or \(2\), respectively [27]. This will create a staircase in \(\mathbb{R}^{3}\) in the direction of \(\mathbf{v}_{\beta}=(\beta^{2},\beta,1)^{T}\), which spans the expanding eigenspace \(L\) of Eq. (6). Denote \(\pi_{\text{int}}\) the projection along \(\mathbf{v}_{\beta}\) onto the 2D contracting eigenspace. Then, the \(m\)th point for \(m\geq 0\) is given by
\[\mathbf{x}_{m}=\pi_{\text{int}}\sum_{i=0}^{m-1}\mathbf{e}_{w_{i}}\in\mathbb{R }^{2}, \tag{9}\]
where \(\mathbf{e}_{i}\) are the canonical basis vectors of \(\mathbb{R}^{3}\). The Rauzy fractal is the compact set \(\mathfrak{R}^{\prime}=\overline{\{\mathbf{x}_{m}\mid m\geq 0\}}\), which is not precisely \(\mathfrak{R}\), but related by an affine transformation (see Appendix A for details of this transformation).
#### ii.1.4 Cut-and-Project Sets
Bearing the Rauzy fractal \(\mathfrak{R}^{\prime}\) in mind, one can view the Tribonacci word as a quasicrystal. Consider again the Tribonacci staircase, which are the points \(\mathbf{y}_{m}=\sum_{i=0}^{m-1}\mathbf{e}_{w_{i}}\). Using the bi-infinite word \(W|W\), the staircase can also be defined for \(m<0\) by \(\mathbf{y}_{m}=\sum_{i=m}^{-1}\mathbf{e}_{w_{i}}\). From the bi-infinite staircase, one can construct a 1D Tribonacci quasicrystal
\[\Lambda_{\text{trib}}=\{\pi\mathbf{y}_{m}\mid m\in\mathbb{Z}\}, \tag{10}\]
by projecting all staircase points along the stable eigenspace onto the line spanned by \(\mathbf{v}_{\beta}\), where this projection is denoted \(\pi\).
One can see that \(\Lambda_{\text{trib}}\) is a cut-and-project set in the following sense. Take a cubic lattice in \(\mathbb{R}^{3}\) and trace out a volume by sliding the set \(\mathfrak{R}^{\prime}\), the acceptance set of the cut-and-project scheme, along the space \(L\). Note that all lattice points lying in the traced out volume are exactly the staircase points \(\mathbf{y}_{m}\), which constitute \(\Lambda_{\text{trib}}\) upon projecting onto \(L\). A key result is that any cut-and-project set has a point diffraction pattern [25], which leads to the conclusion that the aperiodic lattice \(\Lambda_{\text{trib}}\) is a quasicrystal.
Finally, we would like to point out that there exists a quasiperiodic 2D tiling, the Rauzy tiling, which is based on the Tribonacci numbers and is cut-and-project set from a 3D space [28]. Several physical properties of tight-binding models on these lattices have been studied [29; 30], in particular the effect of a magnetic field [11; 12; 31]. The generalized Rauzy tiling extends this construction to arbitrary dimension, and this family of tilings can be viewed as a generalization of the Fibonacci chain [28].
#### ii.1.5 Recurrence Properties
Another key property of the Tribonacci \(W\) word is its self-similarity [32]. Take any finite string \(s=s_{1}\cdots s_{N}\) of length \(N\) that occurs somewhere in \(W\). We say that \(s\) occurs at position \(i\) in \(W\) if \(s_{1}\cdots s_{N}=w_{i}\cdots w_{i+N}\). Let \(i_{1},i_{2},\dots\) denote the places where \(s\) occurs in \(W\). Then, the words \(r_{j}=w_{i_{j}}\cdots w_{i_{j+1}}\) between occurrences of \(s\) have useful properties. Firstly, for any choice \(s\), the word \(r_{j}\in\{r^{(0)},r^{(1)},r^{(2)}\}\) takes one of three values. Secondly, if we label \(r^{(i)}\) such that \(r^{(0)}\) occurs most often, \(r^{(1)}\) second most often and \(r^{(2)}\) least often, then the map \(\kappa:r^{i}\mapsto i\) maps the string \(r_{1}r_{2}\cdots\) back to \(W\). In other words
\[\kappa(r_{1})\kappa(r_{2})\cdots=W, \tag{11}\]
where \(r_{i}\) are the words between subsequent occurrences of \(s\) in \(W\). This also works if \(s\) occurs in a Tribonacci approximant \(W_{N}\). By applying periodic boundary conditions when determining \(r_{j}\), the map \(\kappa\) results in
\[\kappa(r_{1})\kappa(r_{2})\cdots=W_{N-k}, \tag{12}\]
where \(k\) depends on the choice of \(s\). Eqs. (11) and (12) are the foundation of the perturbative RG scheme in Section III.
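The recurrence property can be verified directly; the sketch below (our helper names) reproduces the counts of Eq. (18) and the identities behind Eqs. (11)-(12) and Table 1.

```python
from collections import Counter

def return_words(word, s):
    """Words between consecutive occurrences of s in `word`, with periodic boundary."""
    n, L = len(word), len(s)
    doubled = word + word
    occ = [i for i in range(n) if doubled[i:i + L] == s]
    return [doubled[occ[j] + L:(occ[j + 1] if j + 1 < len(occ) else occ[0] + n)]
            for j in range(len(occ))]

def kappa(gaps):
    """Relabel return words by decreasing frequency: r^(0) -> 0, r^(1) -> 1, r^(2) -> 2."""
    rank = {g: str(r) for r, (g, _) in enumerate(Counter(gaps).most_common())}
    return ''.join(rank[g] for g in gaps)

W6, W5, W4 = (tribonacci_approximant(k) for k in (6, 5, 4))
gaps = return_words(W6, '1')
print(Counter(gaps))                          # {'020': 7, '00': 4, '0': 2}, cf. Eq. (18)
print(kappa(gaps) == W4)                      # True: s = 1 maps W_6 to W_4, Eq. (19)
print(kappa(return_words(W6, '0')) == W5)     # True: s = 0 maps W_N to W_{N-1}, Table 1
```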
We would like to emphasise that there are other quantum chains based on three-letter substitutions that are Pisot with the same dominant \(\lambda=\beta\). One such example is the system studied by Ali et al. [33]. This is fundamentally different from our work, since in their case there is not a natural RG scheme and our connection to the Rauzy fractal is entirely new.
Figure 1: (Color online) A Rauzy fractal with \(T_{14}=5768\) points. Each region corresponds to a symbol: red (0), green (1) and blue (2).
### Tribonacci Tight-Binding Models
The definition of the Tribonacci chain, with aperiodic hopping and on-site energy, generalizes the work by Niu and Nori [17] on the Fibonacci chain to the HTC and OTC.
#### ii.2.1 Hopping Model
The infinite HTC is defined as a 1D tight-binding chain with no on-site potentials and hopping parameters that are modulated according to the Tribonacci word \(W|W\). The Hamiltonian reads
\[H=\sum_{n\in\mathbb{Z}}t_{w_{n}}\left|n+1\right\rangle\left\langle n\right|+H.c., \tag{13}\]
where \(w_{n}\in\{0,1,2\}\) are the letters of \(W|W\) in Eq. (7) and the model is parameterized by one parameter \(\rho\in[0,1]\) as \(t_{0}/t_{1}=t_{1}/t_{2}=\rho\). Note that Eq. (13) possesses chiral symmetry \(\Gamma H\Gamma=-H\), where \(\Gamma^{2}=1\) and
\[\Gamma=\sum_{n\in Z}\left|2n\right\rangle\left\langle 2n\right|-\sum_{n\in Z} \left|2n+1\right\rangle\left\langle 2n+1\right|.\]
A direct consequence of chiral symmetry is a symmetric spectrum around \(E=0\). The model is studied in the regime where \(\rho\ll 1\), i.e. \(0<t_{0}\ll t_{1}\ll t_{2}\), such that there is a hierarchy of bond strengths, analogous to Ref. [17].
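A small numerical sketch of the hopping chain (our helper names; a periodic approximant with \(T_N\) sites and \(t_2=1\)): diagonalizing it shows the five well-separated groups of levels around \(0,\pm t_1,\pm t_2\) discussed in Section III.

```python
import numpy as np

def htc_hamiltonian(N, rho, p=1.0, q=1.0):
    """Periodic T_N-site approximant of Eq. (13) with t0/t1 = rho**p, t1/t2 = rho**q, t2 = 1."""
    w = tribonacci_approximant(N)
    t = {'0': rho ** (p + q), '1': rho ** q, '2': 1.0}
    H = np.zeros((len(w), len(w)))
    for n, letter in enumerate(w):
        m = (n + 1) % len(w)
        H[m, n] = H[n, m] = t[letter]
    return H

E = np.linalg.eigvalsh(htc_hamiltonian(10, rho=0.2))
t1 = 0.2
print([int(np.sum(np.abs(E - c) < 0.5 * t1)) for c in (-1.0, -t1, 0.0, t1, 1.0)])
# expected states per group: T_{N-3}, T_{N-2}, T_{N-4}, T_{N-2}, T_{N-3} = 81, 149, 44, 149, 81
```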
#### ii.2.2 On-Site Model
The OTC is defined by the Hamiltonian
\[H=\sum_{n\in\mathbb{Z}}\epsilon_{w_{n}}\left|n\right\rangle\left\langle n \right|-t\sum_{n\in\mathbb{Z}}\left|n+1\right\rangle\left\langle n\right|+H.c., \tag{14}\]
where now the hopping parameters \(t\) are constant, and the on-site potential \(\epsilon_{i}\) is modulated according to the Tribonacci word \(W|W\). This model is generally parameterized by two parameters, \(c_{1}=(\epsilon_{1}-\epsilon_{0})/t\) and \(c_{2}=(\epsilon_{2}-\epsilon_{0})/t\). Analogous to Ref. [17], we demand \(|c_{1}|,|c_{2}|,|c_{2}-c_{1}|\gg 1\), which physically means that the on-site potentials dominate and are weakly coupled. One particular choice is \(c_{1}=c_{2}/2=c\gg 1\), which will be used when comparing to the HTC.
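An analogous sketch for the on-site chain with the choice \(c_1=c_2/2=c\) (again our helper names, with \(\epsilon_0=0\)); its spectrum splits into three groups around \(\epsilon_0,\epsilon_1,\epsilon_2\).

```python
import numpy as np

def otc_hamiltonian(N, c, t=1.0):
    """Periodic T_N-site on-site chain of Eq. (14) with eps_0 = 0, eps_1 = c*t, eps_2 = 2*c*t."""
    w = tribonacci_approximant(N)
    eps = {'0': 0.0, '1': c * t, '2': 2 * c * t}
    H = np.diag([eps[letter] for letter in w])
    for n in range(len(w)):
        m = (n + 1) % len(w)
        H[m, n] = H[n, m] = -t
    return H

E = np.linalg.eigvalsh(otc_hamiltonian(10, c=5.0))
print([int(np.sum(np.abs(E - centre) < 2.5)) for centre in (0.0, 5.0, 10.0)])
# expected states per group: T_{N-1}, T_{N-2}, T_{N-3} = 274, 149, 81 for N = 10
```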
## III Perturbative Renormalization of the Tribonacci Chain
We now present the perturbative RG scheme for the HTC and OTC. The scheme is possible due to the self-similar recurrence properties of the Tribonacci word (see Section II.1.5), and is analogous to the RG of the Fibonacci chain proposed by Niu and Nori [17] (see the review by Jagannathan [15] for more details on the Fibonacci chain).
### The Renormalization Scheme
#### iii.1.1 Hopping Model
For the RG scheme, it is convenient to consider the \(N\)th HTC approximant
\[H_{N}=\sum_{n=0}^{T_{N}-1}t_{w_{n}}\left|n+1\bmod T_{N}\right\rangle\left \langle n\right|+H.c., \tag{15}\]
where periodic boundary conditions are enforced. Furthermore, the Hamiltonian is split up in two parts
\[H_{N}=H_{0,N}+H_{1,N}, \tag{16}\]
where \(H_{1,N}\) contains only the terms with a \(t_{0}\) hopping, such that \(H_{0,N}\) can be regarded as the unperturbed Hamiltonian. Note that \(H_{0,N}\) has only five highly degenerate energy levels \(E=0,\pm t_{1},\pm t_{2}\). The \(E=0\) states are the atoms, which are isolated sites, corresponding to \(00\) in \(W\). Type-1 molecules are the \(E=\pm t_{1}\) states, corresponding to \(010\) in \(W\). These are isolated dimers consisting of two neighboring sites, coupled by a \(t_{1}\) bond, which can either bond or anti-bond. Similarly, the \(E=\pm t_{2}\) states correspond to \(020\) in \(W\), and are called type-2 molecules.
Upon setting \(t_{0}\) nonzero, the atoms/molecules start to interact. If one considers one type of atom or molecule as a lattice site, one can compute the effective coupling between subsequent sites using Brillouin-Wigner perturbation theory. Fig. 2 depicts the spectrum of Eq. (15), where one can see five branches around \(E=0,\pm t_{1},\pm t_{2}\) that would become fully degenerate upon setting \(t_{0}=0\).
Now, we explain the simplest case, the type-1 molecule, in detail. The procedure for the other bonds is exactly the same, but with longer computations.

Figure 2: (Color online) The energy spectrum of the HTC Eq. (15) with \(\rho=0.2\) and \(T_{13}=3136\) sites. The five main bands are located around \(E=0,\pm t_{1},\pm t_{2}\). The inset shows a zoom in on the top band, which exhibits a similar spectrum, but with seemingly different \(t_{0},t_{1}\).

Consider the Tribonacci approximant
\[\begin{array}{l}W_{6}=\\ 01\underline{020}100102010102010010201020100102010102010.\end{array} \tag{17}\]
The first step is to tabulate all words \(r_{i}\) occurring between \(1\)'s in \(W_{6}\), starting after the first occurrence of \(1\), and considering periodic boundary conditions. The possibilities are \(020,00\) and \(0\), which occur \(7,4\) and \(2\) times, respectively. Therefore
\[\begin{array}{l}\{r\}=\{r_{1}=r^{(0)},r_{2}=r^{(1)},r_{3}=r^{(0)},\ldots,r_{ 13}=r^{(1)}\},\\ r^{(0)}=020,r^{(1)}=00,r^{(2)}=0.\end{array} \tag{18}\]
Finally, upon applying the map \(\kappa:r^{(i)}\mapsto i\), the Tribonacci approximant \(W_{4}\) is obtained as
\[W_{4}=\kappa(r_{1})\kappa(r_{2})\cdots\kappa(r_{13})=0102010010201, \tag{19}\]
which has \(k=2\) in Eq. (12). The procedure in Eqs. (17), (18), and (19), which is the \(s=1\) case, can be carried out for any \(s\). This is done for \(s=0,1,2,00\) in Table 1.
The procedure outlined in Eqs. (17), (18) and (19) is applied to the HTC Hamiltonian in Eq. (15) as follows. Consider the approximant in Fig. 3. Each dimer of two sites coupled by \(t_{1}\) is considered a lattice site in the renormalized chain, on which a (anti-)bonding state \(\left|\pm\right\rangle_{i}\) can sit. Using perturbation theory, the effective coupling between neighboring sites
\[t_{i}^{\prime}=\left\langle\pm\right|_{i}H_{1,N}\left|\pm\right\rangle_{i+1},\]
is computed. The perturbation theory framework is explained in Appendix B, for the chain shown in Fig. 3. The computations and results for the hopping and on-site chain are presented in Appendix B.1 and B.2, respectively.
The main result of the RG scheme can now be stated. Denote \(H_{N}^{(p,q)}\) given by Eq. (15) with \(t_{0}/t_{1}=\rho^{p},t_{1}/t_{2}=\rho^{q}\), where \(p,q\in[0,\infty)\). Setting \(t_{0}=0\), the HTC Hamiltonian \(H_{0,N}\) has \(T_{N-3},T_{N-2},T_{N-4},T_{N-2}\) and \(T_{N-3}\) states with \(E=-t_{2},E=-t_{1},E=0,E=t_{1}\) and \(E=t_{2}\), respectively. To each of these five energies, we associate an atomic (\(s=00\)), bonding or anti-bonding chain (\(s=1,2\)). The result of the perturbative calculation (see Appendix B.1) is
\[H_{N}^{(p,q)}\approx(z_{2}H_{N-3}^{(p+q,p+2q)}-t_{2})\oplus(z_{1}H_{N-2}^{(q,p )}-t_{1})\oplus(z_{0}H_{N-4}^{(p,2p+q)})\oplus(z_{1}H_{N-2}^{(q,p)}+t_{1}) \oplus(z_{2}H_{N-3}^{(p+q,p+2q)}+t_{2}), \tag{20}\]
where the parameters read \(z_{0}=\rho^{4p+2q},z_{1}=\rho^{p+q}/2\) and \(z_{2}=\rho^{2p+3q}/2\). The computation of the parameters \(z_{i}\) and the \(p,q\) exponents in each of the five blocks is identical to Ref. [17], and is repeated in detail in Appendix B.1. The HTC in Eq. (15) realizes the case \(p=q=1\). From the result Eq. (20), it is clear that the spectrum consists not simply of scaled and shifted versions of itself, but rather related spectra of chains with various \(p,q\) values. Since one can identify each of the five quasibands in Fig. 2 to a block in Eq. (20), the spectrum can be interpreted as a multifractal set as Zheng [18] did for the Fibonacci chain.
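As a rough numerical check of Eq. (20) (not a proof), one can compare the exact spectrum of \(H_N^{(1,1)}\) with the union of the five rescaled and shifted blocks, reusing the hypothetical htc_hamiltonian sketch from Section II; the deviation should be small compared with the separations between the quasi-bands for \(\rho\ll 1\).

```python
import numpy as np

rho, N, p, q = 0.2, 10, 1, 1
t1, t2 = rho ** q, 1.0
z0, z1, z2 = rho ** (4 * p + 2 * q), rho ** (p + q) / 2, rho ** (2 * p + 3 * q) / 2

exact = np.sort(np.linalg.eigvalsh(htc_hamiltonian(N, rho, p, q)))
blocks = np.sort(np.concatenate([
    z2 * np.linalg.eigvalsh(htc_hamiltonian(N - 3, rho, p + q, p + 2 * q)) - t2,
    z1 * np.linalg.eigvalsh(htc_hamiltonian(N - 2, rho, q, p)) - t1,
    z0 * np.linalg.eigvalsh(htc_hamiltonian(N - 4, rho, p, 2 * p + q)),
    z1 * np.linalg.eigvalsh(htc_hamiltonian(N - 2, rho, q, p)) + t1,
    z2 * np.linalg.eigvalsh(htc_hamiltonian(N - 3, rho, p + q, p + 2 * q)) + t2,
]))
print(len(exact), len(blocks))               # both T_10 = 504 levels
print(np.max(np.abs(exact - blocks)))        # small compared with the quasi-band separations
```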
The words \(r^{(i)}\) in Table 1 are longer than those in the RG scheme for the Fibonacci chain, which requires higher orders of perturbation theory to yield a nonzero result. This has the advantage that the error made in the approximate RG Eq. (20) is smaller than the RG scheme for the Fibonacci chain.
Figure 3: The 3rd approximant HTC, with Hamiltonian \(H_{3}\) (see Eq. (15)) and periodic boundary conditions. The single line denotes a \(t_{0}\) bond, the double line a \(t_{1}\) bond and a triple line a \(t_{2}\) bond. The chain is renormalized by considering the type-1 molecules as the new lattice sites, and the chain between these molecules as the new bonds, which are \(t_{0}^{\prime},t_{1}^{\prime}\). The figure is inspired by Ref. [17].
| \(s\) | \(r^{(0)}\) | \(r^{(1)}\) | \(r^{(2)}\) | maps \(W_{N}\) to |
| --- | --- | --- | --- | --- |
| 0 | 1 | 2 | \(\emptyset\) | \(W_{N-1}\) |
| 1 | 020 | 00 | 0 | \(W_{N-2}\) |
| 2 | 010010 | 01010 | 010 | \(W_{N-3}\) |
| 00 | 10201010201 | 102010201 | 10201 | \(W_{N-4}\) |

Table 1: For \(W_{N}\) and particular strings \(s=0,1,2,00\), the occurrences between \(s\) can be one of \(r^{(i)}\), and map to \(W_{N-k}\) under the map \(\kappa:r^{(i)}\mapsto i\), \(i=0,1,2\).
On-Site Model
The \(N\)th approximant of the OTC is defined as
\[H_{N}^{o}=\sum_{n=0}^{T_{N}-1}\epsilon_{w_{n}}\ket{n}\bra{n}-t\big{(}\ket{n+1\bmod T _{N}}\bra{n}+H.c.\big{)}, \tag{21}\]
where periodic boundary conditions are enforced. When writing
\[H_{N}^{o}=H_{0,N}^{o}+H_{1,N}^{o}, \tag{22}\]
the part \(H_{1,N}^{o}\) consists of all \(t\) bonds and \(H_{0,N}^{o}\) only the on-site energies. At \(t=0\), the chain consists of \(T_{N-1},T_{N-2}\) and \(T_{N-3}\) isolated sites with energy \(E=0,\epsilon_{1},\epsilon_{2}\), respectively. When \(t\) is nonzero, the degeneracy is lifted and the spectrum consists of three bands, as depicted in Fig. 4.
The analysis in Section III.1.1 can be immediately carried over to the three atomic chains of the on-site model, to approximate each of the three bands as a general HTC \(H_{N-k}^{(p,q)}\). For a general OTC parameterized by \(c_{1}\) and \(c_{2}\), the result of the perturbation theory (see Appendix B.1) reads
\[H_{N}^{o}\approx(z_{0}H_{N-1}^{(p_{0},q_{0})}+\epsilon_{0})\oplus(z_{1}H_{N-2 }^{(p_{1},q_{1})}+\epsilon_{1})\oplus(z_{2}H_{N-3}^{(p_{2},q_{2})}+\epsilon_{2}), \tag{23}\]
where \(z_{0}=t,z_{1}=t/c_{1},z_{2}=t/[c_{2}^{2}(c_{2}-c_{1})]\) and \(p_{i}=\log a_{i}/\log\rho,q_{i}=\log b_{i}/\log\rho\) for \(i=0,1,2\) where \(a_{i},b_{i}\) are given in Table 3 in the appendix.
As a final remark, by the recurrence property Eq. (11) of the infinite word \(W\), the approximations Eqs. (20) and (23) are also valid in the infinite limit, where the subscripts \(N,N-k\) are dropped.
### Hopping and On-Site Equivalence
The RG scheme of the HTC Eq. (20) can be repeatedly applied to the five HTC Hamiltonians in the direct sum. For the OTC, the same is true after one application of the RG Eq. (23). Considering the infinite HTC and OTC, the Hamiltonian after \(m\) applications of the RG is described by \(5^{m}\) and \(3\cdot 5^{m-1}\) pairs of \(p,q\) values, respectively. We will show that the HTC and OTC are equivalent, in the sense that for both models, the fraction of \(p,q\) values that escape to infinity tends to one as \(m\rightarrow\infty\).
For the HTC, define \(I_{m}=\{p_{i},q_{i}\mid i=1,\ldots,5^{m}\}\), the set of \(p,q\) values in the direct sum after \(m\) RG applications. Define the probability measure on the measurable space \((I_{m},2^{I_{m}})\) as
\[\mu_{m}(A):=|A|/|I_{m}|, \tag{24}\]
where \(2^{I_{m}}\) denotes the powerset, \(A\subset I_{m}\) and \(|\cdot|\) denotes the cardinality of the set. To study the divergence of \(p,q\) values, define the set of values smaller than \(m\) as
\[J_{m}:=\{x\in I_{m}\mid x\leq m\}. \tag{25}\]
For the OTC, all objects \(I_{m}^{o},J_{m}^{o},\mu_{m}^{o}\) are similarly defined.
The mathematical statement of the equivalence, as proven in Appendix C, reads
\[\lim_{m\rightarrow\infty}\mu_{m}(J_{m})=\lim_{m\rightarrow\infty}\mu_{m}^{o} (J_{m}^{o})=0. \tag{26}\]
This proves that for both the HTC and OTC, the set of \(p\) and \(q\) values that remain finite can be at most a set of measure zero. This means that both the HTC and OTC are described by an infinite direct sum of \(H^{(p,q)}\) Hamiltonians with \(p=q=\infty\), which are Tribonacci chains where only the \(t_{2}\) bonds are nonzero.
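The divergence of the \(p,q\) exponents can also be illustrated numerically; the sketch below iterates the map behind Eq. (20) on multiplicities of \((p,q)\) pairs and estimates \(\mu_m(J_m)\) of Eq. (26) for the HTC (helper names are ours).

```python
from collections import Counter

def rg_children(p, q):
    """The five (p, q) exponent pairs produced by one application of Eq. (20)."""
    return [(p + q, p + 2 * q), (q, p), (p, 2 * p + q), (q, p), (p + q, p + 2 * q)]

def mu_Jm(m):
    """Fraction of p, q values not exceeding m after m RG steps, starting from p = q = 1."""
    pairs = Counter({(1, 1): 1})
    for _ in range(m):
        new = Counter()
        for (p, q), mult in pairs.items():
            for child in rg_children(p, q):
                new[child] += mult
        pairs = new
    total = 2 * sum(pairs.values())                  # 2 * 5**m values in I_m
    small = sum(mult * sum(v <= m for v in pq) for pq, mult in pairs.items())
    return small / total

for m in (2, 4, 6, 8, 10):
    print(m, round(mu_Jm(m), 4))     # tends to 0, so almost all exponents escape to infinity
```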
The similarity discussed in this work is a different notion of similarity than Niu and Nori [17] proved for the Fibonacci chain. In their case, all values would read \(p=1\) and \(q=1\) in Eqs. (20) and (23). Since the Fibonacci chain perturbatively renormalizes to exact scaled copies of itself, it can be viewed as a critical model. The Tribonacci chains renormalize perturbatively to different kinds of HTCs, viz. HTCs with \(p\neq 1\) and/or \(q\neq 1\). The limit of the RG procedure for the HTC yields infinitely many copies of the HTC with \(p=q=\infty\), which is quite different from the original model where \(p=q=1\). In this way, the HTC and OTC can be viewed as less critical than the Fibonacci chain. Regardless of this fact, in Section V it will be shown that the eigenstates of the HTC are critical.
Figure 4: (Color online) The energy spectrum of the OTC Eq. (21) with \(c=5\) and \(T_{13}=3136\) sites. The three main bands sit around \(E/t=0,c,2c\). The inset shows that the main bands further split up like HTC Hamiltonians with certain \(p,q\) values.
Eigenstates on the Rauzy fractal
Considering the Hamiltonian \(H_{N}\) in Eq. (15) (or Eq. (21)), the Schrödinger equation \(H_{N}\left|\psi\right\rangle_{i}=E_{i}\left|\psi\right\rangle_{i}\) will have \(T_{N}\) solutions labeled by \(i=0,\ldots,T_{N}-1\). Each eigenstate has the form \(\left|\psi\right\rangle_{i}=\sum_{n=0}^{T_{N}-1}\psi_{i}(n)\left|n\right\rangle\), where \(\psi_{i}(n)\in\mathbb{C}\). The eigenstate \(\left|\psi\right\rangle_{i}\) can be plotted on the Rauzy fractal by identifying each point \(\mathbf{x}_{n}\) in Eq. (9) with the probability \(|\psi_{i}(n)|^{2}\), which determines the size of a black triangle at that point.
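A sketch of how Fig. 5-type plots can be produced, reusing the hypothetical helpers from Section II (this diagonalizes a 3136×3136 matrix, so it takes a moment):

```python
import numpy as np
import matplotlib.pyplot as plt

N = 13
w = tribonacci_approximant(N)                            # T_13 = 3136 sites
_, states = np.linalg.eigh(htc_hamiltonian(N, rho=0.2))
psi0 = states[:, 0]                                      # |psi>_0 from the lowest quasi-band
z = rauzy_points(w)                                      # one Rauzy point per lattice site
plt.scatter(z.real, z.imag, s=1, c='lightgray')
plt.scatter(z.real, z.imag, s=3000 * np.abs(psi0) ** 2, c='black', marker='^')
plt.gca().set_aspect('equal')
plt.show()
```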
### Hopping Model
When associating HTC lattice points \(\left|n\right\rangle\) with Rauzy fractal points \(\mathbf{x}_{n}\), one has to apply a different coloring of the Rauzy fractal. Each site has no on-site energy, and can have the local environments
* Red: 01 (\(t_{0}\) on the left and \(t_{1}\) on the right) or 10,
* Green: 02 or 20,
* Blue: 00.
The eigenstate \(\left|\psi\right\rangle_{0}\) of the HTC \(H_{13}\) is plotted on the Rauzy fractal in Fig. 5(a). Since the energy \(E_{0}\) comes from the bottom branch of the spectrum in Fig. 2, it should be a state that antibonds on sites connected with \(t_{2}\) bonds. This is precisely reflected by the plot on the Rauzy fractal in Fig. 5(a), since the eigenstate is mainly localized in the green region, corresponding to a site neighboring a \(t_{2}\) bond. Generally, a state from branch 1 (or \(2,3,4,5\)), in this case from the lowest set of eigenvalues at \(E=-t_{2}\), in Fig. 2 is primarily localized in the green (or red, blue, red, green) region(s) in Fig. 5(a) (see Appendix D for more examples). Finally, for any \(H_{N}\), each red, green and blue region contains exactly \(T_{N-2},T_{N-3}\) and \(T_{N-4}\) points, matching the amount of points in each branch of the spectrum.
For each local structure, there are again exactly five distinct environments around that structure. For example, the environment of 01 or 10 is always \(x010y\) where \(xy=02,20,21,12\) or \(22\). It turns out that these correspond exactly to the local structures \(01,10,02,20,00\) of the type-1 molecular chain. The subdivisions of the Rauzy fractal are carried out in Fig. 5(b).
We have shown that if one is interested in all possible environments of a lattice site, it is enough to consider only the nearest-neighbor environments and the RG scheme. Using the RG scheme, next variations on the nearest-neighbor environments of a lattice site are given by the nearest-neighbor environments of the renormalized chain to which that lattice site belongs.
### On-Site Model
When plotting the eigenstates of the OTC onto the Rauzy fractal, the original coloring can be used, since each lattice site \(\left|n\right\rangle\) corresponds to some \(\epsilon_{w_{n}}\). The state with index \(i=0\) is plotted on the Rauzy fractal in Fig. 5(c). Since \(E_{0}\) comes from the bottom branch of the spectrum in Fig. 4, the eigenstate is localized on the red part of the Rauzy fractal which corresponds to 0 in \(W\). It is again a general feature that states from some branch in the spectrum localize on the corresponding part of the Rauzy fractal (see Appendix D for more examples).
Since the on-site model renormalizes to three hopping models in Eq. (23), additional subdivision of the Rauzy fractal based on the next local environments yields a similar subdivision as for the hopping model. This is displayed in Fig. 5(d).
We would like to point out the similarity between the eigenstate \(\left|\psi\right\rangle_{0}\) in Fig. 5(a) and the one in the red region in Fig. 5(d). This can be understood by the fact that the eigenstate \(\left|\psi\right\rangle_{0}\) of the OTC \(H_{13}^{o}\) is approximately the eigenstate of the first block of Eq. (23), which is an HTC.
Another observation is the self-similar structure of the eigenstates on the Rauzy fractals in Fig. 5. This is a signature of critical eigenstates [34], which are also characteristic of the Fibonacci chain [15]. For the Tribonacci chains, fractality is discussed in Section V.
### Equivalence Local Environment and Local Resonator Modes
It is an interesting fact that all local environments are known from only the nearest-neighbor structures and the RG Eq. (20). This fact can be applied to elegantly categorize all LRMs of the HTC and OTC. This LRM framework was developed by Röntgen et al. [23], and applied to the Fibonacci chain.
In Figs. 6 and 7, the eigenstate magnitude on each lattice site is plotted for every energy level of the approximants with \(T_{7}=81\) sites. The green lines define regions that precisely correspond to the diagonal blocks in Eqs. (20) and (23), so they correspond to one application of the RG scheme. By applying Eq. (20) again to each of these blocks of the Hamiltonian at hand, the regions subdivide again into five smaller ones (see black horizontal lines). The connection with the LRM framework is that the subsequent subdivisions order the eigenstates according to their local structure, i.e. where they are mostly localized. This classification is an essential step in the application of the LRM framework, and it is naturally carried out by the RG equations.
The RG scheme naturally gives all environments of a lattice site, and at the same time categorizes the LRMs. This simplification of the analysis is founded on the self-similarity of the Tribonacci word.
## V Multifractality
The perturbative RG scheme for the Fibonacci chain provided a natural way of explaining the multifractal properties of the spectrum and of the eigenstates. Since an analogous RG scheme is derived for the Tribonacci chains, multifractality is expected to be present.
Since the multifractal properties of the HTC are compared to the Fibonacci chain, the definition of the Fibonacci chain is briefly reviewed here. The Fibonacci word \(W^{F}=w_{0}^{F}w_{1}^{F}\cdots\) is the fixed point of the binary substitution \(\rho_{F}:0\to 01,1\to 0\). The Fibonacci approximants are given by \(W_{N}^{F}:=\rho_{F}^{N}(1)\). The length of the approximants is given by the Fibonacci numbers \(F_{N}=F_{N-1}+F_{N-2}\), where \(F_{0}=F_{1}=1\). The Hamiltonian for the periodic hopping Fibonacci chain reads
\[H_{N}^{F}=\sum_{n=0}^{F_{N}-1}t_{w_{n}^{F}}\left|n+1\bmod F_{N}\right\rangle \left\langle n\right|+H.c., \tag{27}\]
where the hopping parameters \(t_{0},t_{1}\) are related by \(t_{0}/t_{1}=\rho\).
Figure 5: (Color online) The eigenstate \(\left|\psi\right\rangle_{0}\) on the Rauzy fractal of \(T_{13}=3136\) points. The regions are colored according to the local environment of a lattice site \(n\) in the HTC (or OTC), and the lengths of the black triangles are proportional to \(\left|\psi_{0}(n)\right|^{2}\). (a) \(\left|\psi\right\rangle_{0}\) of the HTC \(H_{13}\) with coupling \(\rho=0.2\) and coloring according to nearest-neighbor bonds. (b) \(\left|\psi\right\rangle_{0}\) of the HTC \(H_{13}\) with coupling \(\rho=0.2\) and coloring according to the five possible environments of the local structures in a). (c) \(\left|\psi\right\rangle_{0}\) of the OTC \(H_{13}^{o}\) with coupling \(c=5\) and coloring according to the on-site potential of a lattice site. (d) \(\left|\psi\right\rangle_{0}\) of the OTC \(H_{13}^{o}\) with coupling \(c=5\) and coloring according to the five possible environments of the lattice sites in c).

To study the multifractal properties of the spectrum of any Hamiltonian, we compute the multifractal dimensions \(D_{q}\), also known as the multifractal spectrum, introduced by Halsey et al. [35]. The multifractal spectrum is a family of dimensions that is continuously parameterized by \(q\in\mathbb{R}\), where \(D_{0}\) recovers the box-counting dimension. For the energy spectrum, the multifractal dimensions are computed as follows. First, cover the energy spectrum with a compact interval \(C\subset\mathbb{R}\). Then, partition \(C\) into intervals \(K_{i}\) of length \(l\). Let the measure \(\mu(K_{i})\) denote the fraction of points that lie in \(K_{i}\). The multifractal dimensions are then given by
\[D_{q}=\lim_{l\downarrow 0}\frac{1}{q-1}\frac{\log\sum_{i}\mu(K_{i})^{q}}{\log l}. \tag{28}\]
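A rough box-counting estimator of Eq. (28) for a finite approximant (our helper names; finite-size effects make the numbers only indicative):

```python
import numpy as np

def multifractal_dq(values, q, box_sizes):
    """Estimate D_q of Eq. (28) as the slope of log(sum_i mu(K_i)^q) versus (q - 1) log l."""
    values = np.sort(np.asarray(values))
    xs, ys = [], []
    for l in box_sizes:
        edges = np.arange(values[0], values[-1] + l, l)
        mu, _ = np.histogram(values, bins=edges)
        mu = mu[mu > 0] / len(values)            # fraction of energies per non-empty box
        xs.append((q - 1) * np.log(l))
        ys.append(np.log(np.sum(mu ** q)))
    return np.polyfit(xs, ys, 1)[0]

E = np.linalg.eigvalsh(htc_hamiltonian(13, rho=0.2))      # T_13 = 3136 energies
ls = np.geomspace(1e-3, 1e-1, 8) * (E.max() - E.min())
print(multifractal_dq(E, 0.0, ls))                        # box-counting dimension D_0 < 1
print(multifractal_dq(E, 2.0, ls))                        # correlation dimension D_2 <= D_0
```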
The result is shown in Fig. 8(a), where the multifractal dimensions of the HTC \(H_{13}\) and the Fibonacci chain \(H_{19}^{F}\) are plotted. One can see that the HTC energy spectrum is a multifractal, since the spectrum \(D_{q}\) is a smooth curve of \(q\). Moreover, the multifractal dimensions of the HTC are strictly smaller than that of the Fibonacci chain.
For the eigenstates, the average multifractal dimension is computed. The average multifractal dimension of the eigenstates is defined as [20; 36]
\[\overline{D_{q}^{\psi}}=\frac{1}{q-1}\frac{\log\frac{1}{N}\sum_{i}\sum_{n}| \psi_{i}(n)|^{2q}}{\log 1/N}, \tag{29}\]
where the sum over \(i\) ranges over all eigenstates, \(N\) denotes the number of eigenstates and \(n\) ranges over the lattice sites. The numerical results for the Fibonacci chain and HTC are displayed in Fig. 8(b). The average multifractal dimension of the HTC is lower than that of the Fibonacci chain. This is to be expected, since \(\overline{D_{q}^{\psi}}\) is related to diffusive properties of the system [36]. The weakest bonds in the HTC are \(\mathcal{O}(\rho^{2})\), whereas in the Fibonacci chain they are \(\mathcal{O}(\rho)\). This makes it more difficult for a particle to diffuse in the HTC than in the Fibonacci chain, which is in accordance with the fact that the average multifractal dimension for the HTC is lower than that of the Fibonacci chain. Additionally, a lower average multifractal dimension indicates that the wavefunctions are more localized, which is a consequence of the weaker bonds in the HTC. In fact, the HTC is a critical chain in terms of Anderson localization, since the eigenstates are multifractal with \(0<\overline{D_{q}^{\psi}}<1\) [34]. Finally, because the OTC is approximately a direct sum of HTC Hamiltonians in Eq. (23), the multifractal properties perturbatively carry over to the OTC.
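Eq. (29) reduces to a few lines of numpy; the sketch below evaluates it for a moderate approximant, reusing the hypothetical htc_hamiltonian helper (numbers are indicative only):

```python
import numpy as np

def avg_eigenstate_dq(states, q):
    """Average multifractal dimension of Eq. (29); eigenvectors are the columns of `states`."""
    N = states.shape[0]
    moments = np.sum(np.abs(states) ** (2 * q), axis=0)    # sum_n |psi_i(n)|^(2q) for each i
    return (1.0 / (q - 1)) * np.log(np.mean(moments)) / np.log(1.0 / N)

_, states = np.linalg.eigh(htc_hamiltonian(12, rho=0.2))   # T_12 = 1705 sites
for q in (2, 3, 4):
    print(q, avg_eigenstate_dq(states, q))                 # between 0 and 1: critical states
```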
## VI Conclusion
Figure 6: (Color online) The HTC eigenstates \(\ket{\psi}_{i}\) of \(H_{7}\), ordered such that \(E_{i}<E_{i+1}\). The sign and magnitude on each site are represented by a color. The green lines denote the splitting after one RG step, the black lines denote two RG steps. Note that the states between two subsequent lines localize on similar local environments, which is more accurate for the black lines than for the green lines.

Figure 7: (Color online) The OTC eigenstates \(\ket{\psi}_{i}\) of \(H_{7}^{o}\). The colors and green/black lines have the same meaning as in Fig. 6.

In this work, we introduced two tight-binding chains, Eqs. (13) and (14), based on the Tribonacci substitution, which generalize the Fibonacci chain. Among the first steps towards understanding these models are the RG Eqs. (20) and (23), which are more accurate than those for the Fibonacci chain due to the higher orders of perturbation theory required. As shown in Section III.2, the two models can be regarded as equivalent at the RG fixed point. The Rauzy fractal, which is inherent to the Tribonacci word, is shown to serve as the analog of the conumbers for the HTC and OTC, since it orders the sites according to their local environment. The structure of eigenstates, when plotted on the Rauzy fractal, shows self-similar properties, which reflect the fractal nature of the eigenstates. These self-similar structures can be systematically enumerated using the RG scheme, and are exactly the LRMs within the framework proposed by Röntgen et al. [23]. Finally, the multifractal dimensions of both the energy spectrum and the eigenstates of the HTC have been computed, and compared to those of the Fibonacci chain. The multifractal properties are qualitatively similar to those of the Fibonacci chain, whereas the multifractal dimensions of the HTC are generally smaller than those of the Fibonacci chain. Furthermore, the HTC is shown to be a critical model in terms of Anderson localization, since the wavefunctions exhibit multifractal properties with a dimension between zero and one.
This work opens some interesting topics for further research. First of all, it would be interesting to identify an equivalence between the HTC and another model, such as the one by Kraus and Zilberberg [21] for the Fibonacci chain. Such an equivalence would be key to understanding the topological properties of the HTC. One could also generalize the substitution to any Pisot substitution, or consider the general \(k\)-bonacci substitution \(0\to 01,1\to 02,\ldots,(k-1)\to 0\). The latter would make the generalization of the Fibonacci chain as complete as the complementary generalization in Refs.[28; 29; 30]. Yet another proposition to check is whether quasicrystals can generally be studied via their internal space, which is conumbering for the Fibonacci chain and the Rauzy fractal for the HTC, and how the RG scheme can be applied in the internal space to understand the eigenstates. Since the RG scheme originates from the self-similar structure, it could be interesting to study if self-similarity can replace translational invariance in the topological classification of quasicrystals and/or fractals. Finally, experimental realizations, such as polaritonic waveguides [37] and dielectric resonators [38] for the Fibonacci chain, can be realized to probe the electronic and multifractal properties of the HTC and OTC.
###### Acknowledgements.
This publication is part of the project TOPCORE with Project No. OCENW.GROOT.2019.048 which is financed by the Dutch Research Council (NWO).
| We introduce two 1D tight-binding models based on the Tribonacci substitution, the hopping and on-site Tribonacci chains, which generalize the Fibonacci chain. For both hopping and on-site models, a perturbative real-space renormalization procedure is developed. We show that the two models are equivalent at the fixed point of the renormalization group flow, and that the renormalization procedure naturally gives the Local Resonator Modes. Additionally, the Rauzy fractal, inherent to the Tribonacci substitution, is shown to serve as the analog of conumbering for the Tribonacci chain. The renormalization procedure is used to repeatedly subdivide the Rauzy fractal into copies of itself, which can be used to describe the eigenstates in terms of Local Resonator Modes. Finally, the multifractal dimensions of the energy spectrum and eigenstates of the hopping Tribonacci chain are computed, from which it can be concluded that the Tribonacci chains are critical.
 |
2310.13814 | The Turán and Laguerre inequalities for quasi-polynomial-like
functions | This paper deals with both the higher order Tur\'an inequalities and the
Laguerre inequalities for quasi-polynomial-like functions -- that are
expressions of the form $f(n)=c_l(n)n^l+\cdots+c_d(n)n^d+o(n^d)$, where
$d,l\in\mathbb{N}$ and $d\leqslant l$. A natural example of such a function is
the $A$-partition function $p_{A}(n)$, which enumerates the number of
partitions of $n$ with parts in the fixed finite multiset
$A=\{a_1,a_2,\ldots,a_k\}$ of positive integers. For an arbitrary positive
integer $d$, we present efficient criteria for both the order $d$ Tur\'an
inequality and the $d$th Laguerre inequality for quasi-polynomial-like
functions. In particular, we apply these results to deduce non-trivial
analogues for $p_A(n)$. | Krystian Gajdzica | 2023-10-20T21:05:21 | http://arxiv.org/abs/2310.13814v1 | # The Turan and Laguerre Inequalities for Quasi-Polynomial-like Functions
###### Abstract.
This paper deals with both the higher order Turan inequalities and the Laguerre inequalities for quasi-polynomial-like functions -- that are expressions of the form \(f(n)=c_{l}(n)n^{l}+\cdots+c_{d}(n)n^{d}+o(n^{d})\), where \(d,l\in\mathbb{N}\) and \(d\leqslant l\). A natural example of such a function is the \(A\)-partition function \(p_{A}(n)\), which enumerates the number of partitions of \(n\) with parts in the fixed finite multiset \(A=\{a_{1},a_{2},\ldots,a_{k}\}\) of positive integers. For an arbitrary positive integer \(d\), we present efficient criteria for both the order \(d\) Turan inequality and the \(d\)th Laguerre inequality for quasi-polynomial-like functions. In particular, we apply these results to deduce non-trivial analogues for \(p_{A}(n)\).
Key words and phrases: integer partition, \(A\)-partition function, quasi-polynomial, log-concavity, higher order Turan inequalities, Laguerre inequalities. 2020 Mathematics Subject Classification: Primary 11P82, 11P84; Secondary 05A17
## 1. Introduction
A partition of a non-negative integer \(n\) is a weakly-decreasing sequence of positive integers \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{j})\) such that
\[n=\lambda_{1}+\lambda_{2}+\cdots+\lambda_{j}.\]
The numbers \(\lambda_{i}\) are called parts of the partition \(\lambda\). The partition function \(p(n)\) enumerates all partitions of \(n\). For instance, there are \(5\) partitions of \(4\), namely, \((4)\), \((3,1)\), \((2,2)\), \((2,1,1)\) and \((1,1,1,1)\) -- in other words \(p(4)=5\). We do not know any easy formula for \(p(n)\). However, Euler proved that its generating function takes the form
\[\sum_{n=0}^{\infty}p(n)x^{n}=\prod_{i=1}^{\infty}\frac{1}{1-x^{i}}.\]
Partition theory plays a crucial role in many parts of mathematics and other sciences. In statistical mechanics, the well-known Rogers-Ramanujan identities are related to the solution of the hard hexagon model, see [3, 7]. Further, partitions have applications in molecular chemistry, crystallography and quantum mechanics, as a consequence of the fact that all irreducible representations of the permutation group \(S_{n}\) and the unitary group \(U(n)\) might be labelled by them. It is also worth noting that partitions appear in genetics in the so-called Ewens's sampling formula, see [24, 34]. There is a plethora of works devoted to the theory of partitions. For a general introduction to the topic, we encourage the reader to see Andrews' books [4, 5] as well as [1, 31, 45].
Now, let us assume that \(A=\{a_{1},a_{2},\ldots,a_{k}\}\) is a finite multiset of positive integers. By an \(A\)-partition of a non-negative integer \(n\), we mean any partition \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{j})\) of \(n\) with parts in \(A\). For the sake of clarity, we additionally assume that two \(A\)-partitions are considered the same if there is only a difference in the order of their parts. The \(A\)-partition function \(p_{A}(n)\) enumerates all \(A\)-partitions
of \(n\). In particular, we have that \(p_{A}(n)=0\) whenever \(n\) is a negative integer and \(p_{A}(0)=1\) with \(\lambda=()\). The generating function for \(p_{A}(n)\) is given by
\[\sum_{n=0}^{\infty}p_{A}(n)x^{n}=\prod_{a\in A}\frac{1}{1-x^{a}}. \tag{1.1}\]
For example, if \(A=\{1,2,2,3,3,3,4,4\}\), then we have that \(p_{A}(4)=11\), namely: (4), (4), (3,1), (3,1), (3,1), (2,2), (2,2), (2,2), (2,1,1), (2,1,1) and \((1,1,1,1)\).
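Such counts are easy to spot-check; a minimal Python sketch (our own illustration, with variable names of our choosing) computes \(p_{A}(n)\) directly from the generating function (1.1) by multiplying in one factor \(1/(1-x^{a})\) per element of the multiset.

```python
# compute p_A(0..N) for the multiset A = {1,2,2,3,3,3,4,4} via the generating
# function (1.1): one factor 1/(1 - x^a) is multiplied in per element of A
A = [1, 2, 2, 3, 3, 3, 4, 4]
N = 10
p = [0] * (N + 1)
p[0] = 1
for a in A:
    for m in range(a, N + 1):   # in-place multiplication of the series by 1/(1 - x^a)
        p[m] += p[m - a]
print(p[4])   # expected: 11, matching the example above
```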
There is an abundance of literature devoted to the \(A\)-partition function when \(\#A<\infty\). We refer the reader to, for instance, [2, 10, 15, 21, 25, 37, 44, 49].
It turns out that \(p_{A}(n)\) is a quasi-polynomial whenever \(A\) is a finite set or a multiset of positive integers. More precisely, if \(\#A=k\), then the \(A\)-partition function is an expression of the form
\[p_{A}(n)=b_{k-1}(n)n^{k-1}+b_{k-2}(n)n^{k-2}+\cdots+b_{0}(n), \tag{1.2}\]
where the coefficients \(b_{0}(n),b_{1}(n),\ldots,b_{k-1}(n)\) depend on the residue class of \(n\,(\mathrm{mod\,\,lcm}A)\). The first proof of the above fact is probably due to Bell [10]. We encourage the reader to see Stanley's book [46, Section 4.4] for more information about quasi-polynomials. On the other hand, a quasi-polynomial-like function \(f(n)\) is a function which asymptotically behaves like a quasi-polynomial. More specifically, \(f(n)\) can be written as
\[f(n)=c_{l}(n)n^{l}+c_{l-1}(n)n^{l-1}+\cdots+c_{r}(n)n^{r}+o(n^{r}), \tag{1.3}\]
where \(r,l\in\mathbb{N}\), \(l\geqslant r\), the coefficients \(c_{r}(n),c_{r+1}(n),\ldots,c_{l}(n)\) depend on the residue class of \(n\,(\mathrm{mod\,\,}M)\) for some positive integer \(M\geqslant 2\). In particular, we see that \(p_{A}(n)\) is a quasi-polynomial-like function.
This paper deals with two problems. The first of them concerns the so-called higher order Turan inequalities for quasi-polynomial-like functions. Let us recall that a sequence \(\left(\omega_{i}\right)_{i=0}^{\infty}\) of real numbers satisfies the second order Turan inequality if we have that
\[\omega_{n}^{2}\geqslant\omega_{n+1}\omega_{n-1}\]
for all \(n\geqslant 1\). Further, it fulfills the third order Turan inequality if the following
\[4(\omega_{n}^{2}-\omega_{n-1}\omega_{n+1})(\omega_{n+1}^{2}-\omega_{n}\omega_ {n+2})\geqslant(\omega_{n}\omega_{n+1}-\omega_{n-1}\omega_{n+2})^{2}\]
is true for every \(n\geqslant 1\). More generally, if \(J_{\omega}^{d,n}(x)\) are the Jensen polynomials of degree \(d\) and shift \(n\) associated to the sequence \(\omega:=(\omega_{i})_{i=0}^{\infty}\), defined by
\[J_{\omega}^{d,n}(x):=\sum_{i=0}^{d}\binom{d}{i}\omega_{n+i}x^{i},\]
then it is known that \((\omega_{i})_{i=0}^{\infty}\) satisfies the order \(d\) Turan inequality at \(n\) if and only if \(J_{\omega}^{d,n}(x)\) is hyperbolic, i.e. all of its roots are real numbers (see, [17, 18, 19, 28]).
In 2015 DeSalvo and Pak [20] reproved the result (obtained independently by Nicolas [38] in the '70s) that the partition function \(p(n)\) satisfies the second order Turan inequality for all \(n>25\). Afterwards, Chen [13] conjectured that the third order Turan inequality for \(p(n)\) is valid for all \(n\geqslant 94\). The problem was solved by Chen, Jia and Wang [14], and it motivated them to state another conjecture that for each \(d\geqslant 1\) there is some integer \(N_{p}(d)\) such that the associated Jensen polynomial \(J_{p}^{d,n}(X)\) is hyperbolic for all \(n\geqslant N_{p}(d)\). That conjecture, on the other hand, was established by Griffin et al. [28]. It is worth pointing out that Larson and Wagner [35] discovered an efficient upper bound for the value of \(N_{p}(d)\) for any \(d\).
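These thresholds are easy to probe numerically. The sketch below (our own Python illustration; all names are ours) computes \(p(n)\) by dynamic programming and lists the values of \(n\) in a finite range at which the displayed second- and third-order Turan inequalities fail; the text above reports \(n>25\) and \(n\geqslant 94\) as the true thresholds.

```python
# partition numbers p(0..N) via the standard dynamic programme, then a scan for
# failures of the second- and third-order Turan inequalities in a finite range
N = 300
p = [0] * (N + 1)
p[0] = 1
for a in range(1, N + 1):
    for m in range(a, N + 1):
        p[m] += p[m - a]

def turan2(n):
    return p[n] ** 2 >= p[n - 1] * p[n + 1]

def turan3(n):
    return (4 * (p[n] ** 2 - p[n - 1] * p[n + 1]) * (p[n + 1] ** 2 - p[n] * p[n + 2])
            >= (p[n] * p[n + 1] - p[n - 1] * p[n + 2]) ** 2)

print([n for n in range(1, N - 1) if not turan2(n)])   # failures of order 2 in the range
print([n for n in range(1, N - 2) if not turan3(n)])   # failures of order 3 in the range
```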
The aforementioned results have initiated vast research related to discovering similar properties for other variations of the partition function. Iskander et al. [33]
proved that for every \(d\geqslant 2\) the fractional partition function \(p_{\alpha}(n)\), which is defined for \(\alpha\in\mathbb{Q}\) in terms of the following generating function
\[\sum_{n=0}^{\infty}p_{\alpha}(n)x^{n}:=\prod_{i=1}^{\infty}\frac{1}{(1-x^{i})^{ \alpha}}\]
(for more information, see [12]), satisfies the order \(d\) Turan inequality for all but finitely many values of \(n\). Further, Craig and Pun [16] investigated the so-called \(k\)-regular partition function \(p_{k}(n)\) (i.e. \(p_{k}(n)\) enumerates only those partitions of \(n\) whose parts are not divisible by \(k\)) in that context. They obtained that for every \(k\geqslant 2\) and \(d\geqslant 1\) the associated Jensen polynomial \(J_{p_{k}}^{d,n}(X)\) is hyperbolic for all sufficiently large numbers \(n\). Heim, Neuhauser and Troger [30] investigated the plane partition function \(PL(n)\) (see Andrews [4, Chapter 11] or [5, Chapter 10]) and its polynomization in this direction. They conjectured that for any \(d\geqslant 1\) the plane partition function fulfills the order \(d\) Turan inequality for all large enough numbers \(n\). That conjecture was solved by Ono, Pujahari and Rolen in [41] with explicit bounds provided by Ono's PhD student Pandey [42]. Further, Baker and Males [8] showed that the number \(\overline{p}_{j}(n)\) of partitions with BG-rank \(j\), and the number \(\overline{p}_{j}(a,b;n)\) of partitions with BG-rank \(j\) and \(2\)-quotient rank congruent to \(a\,(\mathrm{mod}\;b)\) satisfy (asymptotically) all higher order Turan inequalities for even values of \(j\) and \(n\). We refer the reader to Berkovich and Garvan's paper [11] for additional information about \(\overline{p}_{j}(n)\) and \(\overline{p}_{j}(a,b;n)\). Finally, Dong, Ji and Jia [23] discovered that the Jensen polynomial corresponding to \(d\geqslant 1\) and the Andrews and Paule's broken \(k\)-diamond partition function \(\Delta_{k}(n)\), namely \(J_{\Delta_{k}}^{d,n}(X)\), is hyperbolic for \(k=1\) or \(2\) and all but finitely many positive integers \(n\). The explicit definition of broken \(k\)-diamond partitions (for any \(k\geqslant 1\)) together with some properties of \(\Delta_{k}(n)\) might be found in Andrews and Paule's paper [6]. The above-mentioned results have been our motivation to study the higher order Turan inequalities for both quasi-polynomial-like functions in general and \(A\)-partition functions in particular.
The second issue which this paper deals with concerns the so-called Laguerre inequalities for quasi-polynomial-like functions. Once again, let us assume that \(\omega=(\omega_{i})_{i=0}^{\infty}\) is a sequence of real numbers. For a fixed non-negative integer \(d\), we say that \(\omega\) satisfies the Laguerre inequality of order \(d\) at \(n\) if
\[\sum_{j=0}^{2d}(-1)^{j+d}\binom{2d}{j}\omega_{n+j}\omega_{n+2d-j}\geqslant 0. \tag{1.4}\]
The discrete Laguerre inequalities (1.4) were first introduced by Wang and Yang [52]. It is also worth noting that Wagner [51, Theorem 1.4] defined them equivalently by dividing (1.4) by \((2d)!\). For \(d=1\), one can easily observe that (1.4) reduces to the second order Turan inequality. If \(d=2\), then (after simplification) we get
\[3\omega_{n+2}^{2}-4\omega_{n+1}\omega_{n+3}+\omega_{n}\omega_{n+4}\geqslant 0.\]
Further, the order \(3\) Laguerre inequality might be written equivalently as follows:
\[10\omega_{n+3}^{2}-15\omega_{n+2}\omega_{n+4}+6\omega_{n+1}\omega_{n+5}- \omega_{n}\omega_{n+6}\geqslant 0,\]
and so on.
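These reductions can be confirmed symbolically; the short sympy sketch below (an illustration of ours, with hypothetical variable names) checks that the general sum in (1.4) equals, up to an overall factor of \(2\), the displayed forms for \(d=2\) and \(d=3\).

```python
from sympy import symbols, binomial, expand

w = symbols('w0:7')   # w[k] stands for omega_{n+k}

def laguerre_lhs(d):
    # left-hand side of (1.4) with omega_{n+k} replaced by w[k]
    return expand(sum((-1) ** (j + d) * binomial(2 * d, j) * w[j] * w[2 * d - j]
                      for j in range(2 * d + 1)))

print(laguerre_lhs(2) - expand(2 * (3 * w[2] ** 2 - 4 * w[1] * w[3] + w[0] * w[4])))   # 0
print(laguerre_lhs(3) - expand(2 * (10 * w[3] ** 2 - 15 * w[2] * w[4]
                                    + 6 * w[1] * w[5] - w[0] * w[6])))                 # 0
```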
Wang and Yang [52, 53] investigated Laguerre inequalities for many combinatorial sequences. In particular, they showed that the partition function, the overpartition function, the Motzkin numbers, the Fine numbers, the Domb numbers and the distinct partition function satisfy the order \(2\) Laguerre inequality. More recently, Yang [55] also proved that the broken \(k\)-diamond partition function fulfills the second order Laguerre inequality. On the other hand, Wagner [51] showed that the partition function satisfies the inequality (1.4) for every non-negative integer \(d\)
and all sufficiently large values of \(n\). The aforementioned results have motivated us to investigate the issue in the case of quasi-polynomial-like functions.
At the end of Introduction, it needs to be pointed out that studying both the higher order Turan inequalities and the Laguerre inequalities is not only art for art's sake. Let us recall that a real entire (i.e. analytic at all points of the complex plane \(\mathbb{C}\)) function
\[f(x)=\sum_{n=0}^{\infty}a_{n}\frac{x^{n}}{n!}\]
is in the \(\mathcal{LP}\) (Laguerre-Polya) class if it may be written as
\[f(x)=cx^{k}e^{-ax^{2}+bx}\prod_{n=1}^{\infty}\left(1+\frac{x}{x_{n}}\right)e^{ -\frac{x}{x_{n}}},\]
where \(a,b,c,x_{1},x_{2},\ldots\) are all real numbers with \(a\geqslant 0\), \(k\) is a non-negative integer and \(\sum_{n=1}^{\infty}x_{n}^{-2}<\infty\). For the background of the theory of the \(\mathcal{LP}\) functions, we encourage the reader to see [36, 43]. It turns out that the Riemann hypothesis is equivalent to the statement that the Riemann \(\Xi\)-function
\[\Xi(z):=\frac{1}{2}\left(-z^{2}-\frac{1}{4}\right)\pi^{\frac{iz}{2}-\frac{1}{ 4}}\Gamma\left(-\frac{iz}{2}+\frac{1}{4}\right)\zeta\left(-iz+\frac{1}{2}\right)\]
is in the \(\mathcal{LP}\) class, where \(\Gamma\) is the gamma function, and \(\zeta\) denotes the Riemann zeta function. There is a necessary condition for the Riemann \(\Xi\)-function to be in the Laguerre-Polya class which states that the Maclaurin coefficients of the \(\Xi\)-function have to fulfill the order \(d\) Turan inequality as well as the Laguerre inequality of order \(d\) for every positive integer \(d\). For additional information, we refer the reader to [22, 40, 47].
This manuscript is organized as follows. Section 2 delivers necessary concepts, notations and properties which are used throughout the paper. Section 3 studies the higher order Turan inequalities for both quasi-polynomial-like functions and \(A\)-partition functions. In Section 4, on the other hand, we deal with the Laguerre inequalities. Finally, Section 5 contains some concluding remarks and open problems.
## 2. Preliminaries
At first, we fix some notation. The set of non-negative integers is denoted by \(\mathbb{N}\). Further, we put \(\mathbb{N}_{+}:=\mathbb{N}\setminus\{0\}\) and \(\mathbb{N}_{\geqslant k}:=\mathbb{N}\setminus\{0,1,\ldots,k-1\}\).
For a finite multiset \(A=\{a_{1},a_{2},\ldots,a_{k}\}\) of positive integers, we associate the \(A\)-partition function \(p_{A}(n)\), which was defined in Introduction. Due to Bell's theorem [10], we know that \(p_{A}(n)\) is a quasi-polynomial given by the equality (1.2), where the coefficients \(b_{0}(n),b_{1}(n),\ldots,b_{k-1}(n)\) depend on the residue class of \(n\,(\mathrm{mod}\ \mathrm{lcm}A)\). It turns out that under some additional assumptions on \(A\), we may determine some of the coefficients \(b_{i}(n)\). That is a result obtained by several authors, among others, Almkvist [2], Beck et al. [9] or Israailov [32]. We present the theorem due to Almkvist [2]. In order to do that, let us define symmetric polynomials \(\sigma_{i}(x_{1},x_{2},\ldots,x_{k})\) in terms of the power series expansion
\[\sum_{m=0}^{\infty}\sigma_{m}(x_{1},x_{2},\ldots,x_{k})t^{m}:=\prod_{i=1}^{k} \frac{x_{i}t/2}{\sinh(x_{i}t/2)}.\]
Now, we have the following.
**Theorem 2.1** (Almkvist).: _Let \(A=\{a_{1},a_{2},\ldots,a_{k}\}\) be fixed and put \(s_{1}:=a_{1}+a_{2}+\cdots+a_{k}\). For a given integer \(1\leqslant j\leqslant k\), if \(\gcd B=1\) for every \(j\)-element multisubset (\(j\)-multisubset) \(B\) of \(A\), then_
\[p_{A}(n)=\frac{1}{\prod_{i=1}^{k}a_{i}}\sum_{i=0}^{k-j}\sigma_{i}(a_{1},a_{2}, \ldots,a_{k})\frac{(n+s_{1}/2)^{k-1-i}}{(k-1-i)!}+O(n^{j-2}).\]
One can check that \(\sigma_{i}=0\) if \(i\) is odd. Furthermore, if we set \(s_{m}:=a_{1}^{m}+a_{2}^{m}+\cdots+a_{k}^{m}\), then
\[\sigma_{0}=1,\ \sigma_{2}=-\frac{s_{2}}{24},\ \sigma_{4}=\frac{5s_{2}^{2}+2s_{4 }}{5760},\ \sigma_{6}=-\frac{35s_{2}^{3}+42s_{2}s_{4}+16s_{6}}{2903040}.\]
Essentially, Theorem 2.1 maintains that if \(\gcd B=1\) for every \((k-j)\)-multisubset \(B\) of \(A\), then the coefficients \(b_{k-1}(n),b_{k-2}(n),\ldots,b_{k-1-j}(n)\) in the equality (1.2) are independent of the residue class of \(n\,(\text{mod lcm}A)\), i.e. they are constants and can be explicitly calculated. Moreover, it is noteworthy that the \(A\)-partition function is a non-trivial example of a quasi-polynomial-like function -- that is an expression of the form (1.3).
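As a numerical illustration of Theorem 2.1, the sketch below (ours; the choice \(A=\{1,2,3,4,5\}\) and all variable names are ours) compares the exact values of \(p_{A}(n)\) with the truncated Almkvist expansion that keeps \(\sigma_{0}=1\), \(\sigma_{1}=0\) and \(\sigma_{2}=-s_{2}/24\). For this \(A\) every \(3\)-element multisubset has gcd equal to \(1\), so the theorem with \(j=3\) predicts an error of order \(O(n)\).

```python
from math import factorial

A = [1, 2, 3, 4, 5]       # every 3-element multisubset of A has gcd 1, so j = 3 applies
N = 2000

# exact p_A(n) via the generating function (1.1)
p = [0] * (N + 1)
p[0] = 1
for a in A:
    for m in range(a, N + 1):
        p[m] += p[m - a]

# truncated Almkvist expansion with sigma_0 = 1, sigma_1 = 0 and sigma_2 = -s_2/24
k, s1, s2 = len(A), sum(A), sum(a * a for a in A)
prodA = 1
for a in A:
    prodA *= a

def almkvist(n):
    t = n + s1 / 2
    return (t ** (k - 1) / factorial(k - 1)
            - (s2 / 24) * t ** (k - 3) / factorial(k - 3)) / prodA

for n in (100, 1000, 2000):
    print(n, p[n], round(almkvist(n), 2), round(p[n] - almkvist(n), 2))
```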
Now, let us recall some terminology related to higher order Turan inequalities. Instead of repeating the discussion from the Introduction, we directly explain how the order \(d\) Turan inequality arises from the requirement that the Jensen polynomial \(J_{\omega}^{d,n}(x)\) be hyperbolic. Let
\[g(x)=c_{s}x^{s}+c_{s-1}x^{s-1}+c_{s-2}x^{s-2}+\cdots+c_{0}\]
be a fixed polynomial with real coefficients and denote all its complex roots by \(\alpha_{1},\alpha_{2},\ldots,\alpha_{s}\). By \(P_{m}\), we mean the \(m\)-th Newton's sum of \(g(x)\), which is given by
\[P_{m}=\begin{cases}s,&\text{if }m=0,\\ \alpha_{1}^{m}+\alpha_{2}^{m}+\cdots+\alpha_{s}^{m},&\text{if }m=1,2,3,4, \ldots.\end{cases}\]
Further, for the sums \(P_{0},\ldots,P_{2s-2}\), we associate the Hankel matrix \(H(g)\), namely
\[H(g):=\begin{bmatrix}P_{0}&P_{1}&P_{2}&\cdots&P_{s-1}\\ P_{1}&P_{2}&P_{3}&\cdots&P_{s}\\ P_{2}&P_{3}&P_{4}&\cdots&P_{s+1}\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ P_{s-2}&P_{s-1}&P_{s}&\cdots&P_{2s-3}\\ P_{s-1}&P_{s}&P_{s+1}&\cdots&P_{2s-2}\end{bmatrix}.\]
The classical Hermite theorem [39] states that \(g(x)\) is hyperbolic if and only if the matrix \(H(g)\) is positive semi-definite. Since each of the Newton sums might be expressed in terms of the coefficients \(c_{s},c_{s-1},\ldots,c_{0}\), Hermite's result provides a set of inequalities on them by
\[\det\Big{[}P_{0}\Big{]}\geqslant 0,\det\begin{bmatrix}P_{0}&P_{1}\\ P_{1}&P_{2}\end{bmatrix}\geqslant 0,\ldots,\det\begin{bmatrix}P_{0}&P_{1}&P_{2}& \cdots&P_{s-1}\\ P_{1}&P_{2}&P_{3}&\cdots&P_{s}\\ P_{2}&P_{3}&P_{4}&\cdots&P_{s+1}\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ P_{s-2}&P_{s-1}&P_{s}&\cdots&P_{2s-3}\\ P_{s-1}&P_{s}&P_{s+1}&\cdots&P_{2s-2}\end{bmatrix}\geqslant 0.\]
Now, if we assign the Jensen polynomial \(J_{\omega}^{d,n}(x)\) to an arbitrary sequence \(\omega=(\omega_{i})_{i=0}^{\infty}\), then the corresponding inequality for the determinant of the leading principal \(l\times l\) minor of \(H(J_{\omega}^{d,n})\):
\[\det\begin{bmatrix}P_{0}&P_{1}&P_{2}&\cdots&P_{l-1}\\ P_{1}&P_{2}&P_{3}&\cdots&P_{l}\\ P_{2}&P_{3}&P_{4}&\cdots&P_{l+1}\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ P_{l-2}&P_{l-1}&P_{l}&\cdots&P_{2l-3}\\ P_{l-1}&P_{l}&P_{l+1}&\cdots&P_{2l-2}\end{bmatrix}\geqslant 0\]
is called the order \(l\) Turan inequality for the sequence \(\omega\). In particular, it means that \(J_{\omega}^{d,n}(x)\) is hyperbolic if and only if the sequence \(\omega_{n}=(\omega_{n+j})_{j=0}^{\infty}\) satisfies the order \(l\) Turan inequality for every \(l\in\{1,2,\ldots,d\}\).
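The Hermite criterion is straightforward to test numerically; the following sketch (our illustration, with function names of our choosing) builds the Hankel matrix of Newton sums for a given real polynomial and checks positive semi-definiteness, which should agree with the polynomial having only real roots.

```python
import numpy as np

def newton_sums(coeffs, count):
    # power sums P_0,...,P_{count-1} of the roots of the polynomial with the given
    # descending coefficients; for a real polynomial these sums are real numbers
    roots = np.roots(coeffs)
    return np.array([np.sum(roots ** m).real for m in range(count)])

def hermite_hyperbolic(coeffs, tol=1e-9):
    s = len(coeffs) - 1                          # degree of the polynomial
    P = newton_sums(coeffs, 2 * s - 1)           # P_0, ..., P_{2s-2}
    H = np.array([[P[i + j] for j in range(s)] for i in range(s)])
    return np.linalg.eigvalsh(H).min() >= -tol   # PSD Hankel matrix <=> hyperbolic

print(hermite_hyperbolic([1, -6, 11, -6]))       # (x-1)(x-2)(x-3): expected True
print(hermite_hyperbolic([1, 0, 0, -1]))         # x^3 - 1 has complex roots: expected False
```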
From the above discussion, we see that investigating the higher order Turan inequalities does not seem to be an easy challenge. However, there is a paper due to Griffin, Ono, Rolen and Zagier [28], which delivers an efficient criterion to deal with that issue.
**Theorem 2.2** (Griffin, Ono, Rolen, Zagier).: _Let \((\omega_{n})_{n=0}^{\infty}\) be a sequence of real numbers. Suppose further that \((E(n))_{n=0}^{\infty}\) and \((\delta(n))_{n=0}^{\infty}\) are sequences of positive real numbers with \(\lim_{n\to\infty}\delta(n)=0\), and that \(F(t)=\sum_{i=0}^{\infty}c_{i}t^{i}\) is a formal power series with complex coefficients. For a fixed \(d\geqslant 1\), suppose that there are sequences \(\left(C_{0}(n)\right)_{n=0}^{\infty},\left(C_{1}(n)\right)_{n=0}^{\infty}, \ldots,\left(C_{d}(n)\right)_{n=0}^{\infty}\) of real numbers, with \(\lim_{n\to\infty}C_{i}(n)=c_{i}\) for \(0\leqslant i\leqslant d\), such that for \(0\leqslant j\leqslant d\), we have_
\[\frac{\omega_{n+j}}{\omega_{n}}E(n)^{-j}=\sum_{i=0}^{d}C_{i}(n)\delta(n)^{i}j ^{i}+o\left(\delta(n)^{d}\right)\qquad\text{ as }n\to\infty.\]
_Then, we have_
\[\lim_{n\to\infty}\left(\frac{\delta(n)^{-d}}{\omega_{n}}J_{\omega}^{d,n}\left( \frac{\delta(n)x-1}{E(n)}\right)\right)=H_{F,d}(x),\]
_uniformly for \(x\) in any compact subset of \(\mathbb{R}\), where the polynomials \(H_{F,m}(x)\in\mathbb{C}[x]\) are defined either by the generating function \(F(-t)e^{xt}=\sum_{m=0}^{\infty}H_{F,m}(x)t^{m}/m!\) or in closed form by \(H_{F,m}(x):=m!\sum_{l=0}^{m}(-1)^{m-l}c_{m-l}x^{l}/l!\)._
It is not clear how one can apply the above result in practice. In fact, Griffin et al. use the criterion to prove that for every positive integer \(d\) the partition function \(p(n)\) fulfills the order \(d\) Turan inequality for all but finitely many values of \(n\). More precisely, they obtain the Hermite polynomials \(H_{m}(x)\) as the polynomials \(H_{F,m}(x)\) in Theorem 2.2. Let us recall that they define the Hermite polynomials via the generating function
\[\sum_{j=0}^{\infty}H_{j}(x)\frac{t^{j}}{j!}=e^{-t^{2}+tx}=1+tx+\frac{t^{2}}{2!}(x^{2}-2)+\cdots.\]
Since these polynomials have only distinct real roots, and since the property of having only real roots is preserved under small deformations of a polynomial's coefficients, the required phenomenon for \(p(n)\) follows.
On the other hand, we investigate the higher order Turan inequalities for quasi-polynomial-like functions
\[f(n)=c_{l}(n)n^{l}+c_{l-1}(n)n^{l-1}+\cdots+c_{r}(n)n^{r}+o(n^{r}),\]
where \(r,l\in\mathbb{N}\), \(l\geqslant r\) and the coefficients \(c_{r}(n),c_{r+1}(n),\ldots,c_{l}(n)\) depend on the residue class of \(n\,(\mathrm{mod}\;M)\) for some positive integer \(M\geqslant 2\). Therefore, we will probably get another family of orthogonal polynomials in Theorem 2.2. The generalized Laguerre polynomials \(L_{n}^{(\alpha)}(x)\) for \(\alpha>-1\) are defined via the following conditions of orthogonality and normalization
\[\int_{0}^{\infty}e^{-x}x^{\alpha}L_{n}^{(\alpha)}(x)L_{m}^{(\alpha)}(x)dx= \Gamma(\alpha+1)\binom{n+\alpha}{n}\delta_{n,m},\]
where \(\Gamma\) denotes the Euler gamma function, \(\delta_{i,j}\) is the Kronecker delta and \(n,m=0,1,2,\ldots.\) Moreover, we demand that the coefficient of \(x^{n}\) in the polynomial \(L_{n}^{(\alpha)}(x)\) of degree \(n\) have the sign \((-1)^{n}\). One can figure out the explicit representation of these polynomials, namely,
\[L_{n}^{(\alpha)}(x)=\sum_{j=0}^{n}\binom{n+\alpha}{n-j}\frac{(-x)^{j}}{j!}.\]
Hence, we have that
\[L_{0}^{(\alpha)}(x) =1,\] \[L_{1}^{(\alpha)}(x) =-x+(\alpha+1),\] \[L_{2}^{(\alpha)}(x) =\frac{x^{2}}{2}-(\alpha+2)x+\frac{(\alpha+1)(\alpha+2)}{2},\] \[L_{3}^{(\alpha)}(x) =\frac{-x^{3}}{6}+\frac{(\alpha+3)x^{2}}{2}-\frac{(\alpha+2)( \alpha+3)x}{2}+\frac{(\alpha+1)(\alpha+2)(\alpha+3)}{6},\]
and so on. It is well-known that if \(\alpha\) is non-negative, then \(L_{n}^{(\alpha)}(x)\) has exactly \(n\) positive real roots. For more information about both the Hermite polynomials and the Laguerre polynomials we encourage the reader to see [48].
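For the non-negative integer values of \(\alpha\) that are relevant below, this root property is easy to check numerically; the following sketch (ours) evaluates the explicit representation and inspects the roots.

```python
import numpy as np
from math import comb, factorial

def laguerre_coeffs(n, alpha):
    # ascending coefficients of L_n^{(alpha)}(x) = sum_j binom(n+alpha, n-j) (-x)^j / j!
    # (alpha is taken to be a non-negative integer here)
    return [comb(n + alpha, n - j) * (-1) ** j / factorial(j) for j in range(n + 1)]

for n, alpha in [(3, 0), (4, 2), (6, 9)]:
    roots = np.roots(laguerre_coeffs(n, alpha)[::-1])   # np.roots expects descending order
    real = np.all(np.abs(roots.imag) < 1e-9)
    positive = np.all(roots.real > 0)
    print(n, alpha, bool(real and positive))            # expected: True in each case
```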
Finally, instead of repeating the text from Introduction related to the Laguerre inequalities, we just recall that for an arbitrary sequence \(\omega=\left(\omega_{i}\right)_{i=0}^{\infty}\) of real numbers the Laguerre inequality of order \(d\) at \(n\) is defined via
\[\sum_{j=0}^{2d}(-1)^{j+d}\binom{2d}{j}\omega_{n+j}\omega_{n+2d-j}\geqslant 0.\]
In order to deal with this issue for quasi-polynomial-like functions we will need some basic identities involving binomial coefficients, which are omitted here and collected in Section 4.
Now, we are ready to proceed to the main part of the manuscript.
## 3. The higher order Turan inequalities for quasi-polynomial-like functions
The main goal of this section is to prove the following characterization.
**Theorem 3.1**.: _Let \(f(n)\) be a quasi-polynomial-like function of the form_
\[f(n)=c_{l}n^{l}+c_{l-1}n^{l-1}+\cdots+c_{l-d}n^{l-d}+o(n^{l-d}),\]
_for some \(1\leqslant d\leqslant l\). Then, for every \(1\leqslant j\leqslant d\) the sequence \((f(n))_{n=0}^{\infty}\) satisfies the order \(j\) Turan inequality for all but finitely many values of \(n\)._
Proof.: At first, let us fix \(0\leqslant j\leqslant d\) and expand \(f(n+j)/f(n)\). We have that
\[\frac{f(n+j)}{f(n)} =\frac{c_{l}(n+j)^{l}+c_{l-1}(n+j)^{l-1}+\cdots+c_{l-d}(n+j)^{l-d} +o((n+j)^{l-d})}{c_{l}n^{l}+c_{l-1}n^{l-1}+\cdots+c_{l-d}n^{l-d}+o(n^{l-d})}\] \[=\frac{c_{l}n^{l}+c_{l-1}n^{l-1}\cdots+c_{l-d}n^{l-d}+o(n^{l-d})}{ c_{l}n^{l}+\cdots+c_{l-d}n^{l-d}+o(n^{l-d})}\] \[+j\cdot\frac{lc_{l}n^{l-1}+(l-1)c_{l-1}n^{l-2}+\cdots+(l-d)c_{l-d }n^{l-d-1}+o(n^{l-d})}{c_{l}n^{l}+\cdots+c_{l-d}n^{l-d}+o(n^{l-d})}\] \[+j^{2}\cdot\frac{\binom{l}{2}c_{l}n^{l-2}+\binom{l-1}{2}c_{l-1}n ^{l-3}+\cdots+\binom{l-d}{2}c_{l-d}n^{l-d-2}+o(n^{l-d})}{c_{l}n^{l}+\cdots+c_{ l-d}n^{l-d}+o(n^{l-d})}\] \[\vdots\] \[+j^{d}\cdot\frac{\binom{l}{d}c_{l}n^{l-d}+\binom{l-1}{d}c_{l-1}n ^{l-1-d}+\cdots+\binom{l-d}{d}c_{l-d}n^{l-2d}+o(n^{l-d})}{c_{l}n^{l}+\cdots+c_{ l-d}n^{l-d}+o(n^{l-d})}\] \[+\frac{o((n+j)^{l-d})}{c_{l}n^{l}+\cdots+c_{l-d}n^{l-d}+o(n^{l-d})}\] \[=\frac{c_{l}n^{l}+c_{l-1}n^{l-1}\cdots+c_{l-d}n^{l-d}+o(n^{l-d})}{ c_{l}n^{l}+\cdots+c_{l-d}n^{l-d}+o(n^{l-d})}\] \[+j\cdot\frac{1}{n}\frac{lc_{l}n^{l-1}+(l-1)c_{l-1}n^{l-2}+\cdots+ (l-d)c_{l-d}n^{l-d-1}+o(n^{l-d})}{c_{l}n^{l-1}+\cdots+c_{l-d-1}n^{l-d-1}+o(n^{ l-d})}\] \[+j^{2}\cdot\frac{1}{n^{2}}\frac{\binom{l}{2}c_{l}n^{l-2}+\binom{ l-1}{2}c_{l-1}n^{l-3}+\cdots+\binom{l-d}{2}c_{d-d}n^{l-d-2}+o(n^{l-d})}{c_{l}n^{l-2 }+\cdots+c_{l-d}n^{l-d-2}+o(n^{l-d-2})}\] \[\vdots\] \[+j^{d}\cdot\frac{1}{n^{d}}\frac{\binom{l}{d}c_{l}n^{l-d}+\binom{ l-1}{d}c_{l-1}n^{l-1-d}+\cdots+\binom{l-d}{d}c_{l-d}n^{l-2d}+o(n^{l-d})}{c_{l}n^{ l-d}+\cdots+c_{l-2d}n^{l-2d}+o(n^{l-2d})}\] \[+\frac{o((n+j)^{l-d})}{c_{l}n^{l}+\cdots+c_{l-d}n^{l-d}+o(n^{l-d})}.\]
Now, it is not difficult to see that we can apply Theorem 2.2 with \(\omega_{n}=f(n)\), \(E(n)=1\), and \(\delta(n)=n^{-1}\). Indeed, we get
\[\frac{f(n+j)}{f(n)}=\sum_{i=0}^{d}C_{i}(n)\left(\frac{1}{n}\right)^{i}j^{i}+o \left(\left(\frac{1}{n}\right)^{d}\right)\qquad\text{ as }n\to\infty,\]
where
\[C_{s}(n)=\frac{\binom{l}{s}c_{l}n^{l-s}+\binom{l-1}{s}c_{l-1}n^{l-1-s}+\cdots +\binom{l-d}{s}c_{l-d}n^{l-d-s}+o(n^{l-d})}{c_{l}n^{l-s}+\cdots+c_{l-d}n^{l-d -s}+o(n^{l-d-s})}\]
for any \(0\leqslant s\leqslant d\). Hence, it is clear that
\[\lim_{n\to\infty}C_{s}(n)=\binom{l}{s}\]
for every \(0\leqslant s\leqslant d\), and we obtain that
\[H_{F,m}(x)=m!\sum_{j=0}^{m}(-1)^{m-j}\binom{l}{m-j}\frac{x^{j}} {j!} =(-1)^{m}m!\sum_{j=0}^{m}\binom{m+(l-m)}{m-j}\frac{(-x)^{j}}{j!}\] \[=(-1)^{m}m!L_{m}^{(l-m)}(x)\]
for each \(0\leqslant m\leqslant d\), where \(L_{m}^{(l-m)}(x)\) is the generalized Laguerre polynomial. Since \(l-m\geqslant 0\), the polynomials \(L_{m}^{(l-m)}(x)\) have only positive real roots (see, the
antepenultimate paragraph of Section 2). Finally, Theorem 2.2 asserts that
\[\lim_{n\to\infty}\left(\frac{n^{s}}{f(n)}J_{f}^{s,n}\left(\frac{x}{n}-1\right) \right)=(-1)^{s}s!L_{s}^{(l-s)}(x),\]
uniformly for \(x\) in any compact subset of \(\mathbb{R}\) for every \(1\leqslant s\leqslant d\). However, we know that the property of having only real roots is preserved under small deformations of a polynomial's coefficients. Thus, the required phenomenon for \(f(n)\) follows.
Theorem 2.1 and Theorem 3.1 deliver an interesting criterion for the order \(d\) Turan inequality for the \(A\)-partition function.
**Theorem 3.2**.: _Let \(A\) be a finite multiset (or set) of positive integers with \(\#A=k\), and let \(1\leqslant d<k\) be fixed. Suppose further that \(\gcd B=1\) for every \((k-d)\)-multisubset \(B\subset A\). Then, for any \(1\leqslant j\leqslant d\) the sequence \((p_{A}(n))_{n=0}^{\infty}\) fulfills the order \(j\) Turan inequality for all sufficiently large values of \(n\)._
Proof.: That is a direct consequence of both Theorem 2.1 and Theorem 3.1.
An interesting question arises whether Theorem 3.1 and Theorem 3.2 also provide necessary conditions for the order \(d\) Turan inequality for quasi-polynomial-like functions and \(A\)-partition functions, respectively. This is true for the order \(2\) Turan inequality, which follows directly from Gajdzica's papers [26, 27]. However, it is not true in general, as the forthcoming examples show.
**Example 3.3**.: _Let us investigate the order \(3\) Turan inequality for the function_
\[f(n)=\begin{cases}n^{15}+n^{14}+n^{13}+n^{12}+n^{11}+o(n^{11}),&\text{if }n\not\equiv 0\,(\mathrm{mod}\ 4),\\ n^{15}+n^{14}+n^{13}+2n^{12}+n^{11}+o(n^{11}),&\text{if }n\equiv 0\,(\mathrm{mod}\ 4).\end{cases}\]
_It is easy to see that the assumptions from Theorem 3.1 are not satisfied. Nevertheless, it turns out that the function which directly corresponds to the third order Turan inequality takes the form_
\[4\left(f(n)^{2}-f(n-1)f(n+1)\right)\left(f(n+1)^{2}-f(n)f(n+2)\right)\] \[-(f(n)f(n+1)-f(n-1)f(n+2))^{2}=\begin{cases}12411n^{54}+o(n^{54}),& \text{if }n\equiv 0\,(\mathrm{mod}\ 4),\\ 12771n^{54}+o(n^{54}),&\text{if }n\equiv 1\,(\mathrm{mod}\ 4),\\ 12539n^{54}+o(n^{54}),&\text{if }n\equiv 2\,(\mathrm{mod}\ 4),\\ 12659n^{54}+o(n^{54}),&\text{if }n\equiv 3\,(\mathrm{mod}\ 4);\end{cases}\]
_and is positive for all sufficiently large values of \(n\). Hence, we conclude that Theorem 3.1 is not an optimal criterion._
In the case of the \(A\)-partition function, we present the following counterexample.
**Example 3.4**.: _Let us assume that \(A=\{1,1,1,1,300\}\), and examine the order \(4\) Turan inequality. In the fashion of Example 3.3, we wish to define a function \(f(n)\) which directly corresponds to that issue. It is tedious but elementary to show that a sequence \(\omega=\left(\omega_{i}\right)_{i=0}^{\infty}\) fulfills the fourth Turan inequality if the following_
\[54\omega_{n}\omega_{n+1}^{2}\omega_{n+2}\omega_{n+4}^{2}+\omega_{n}^{3}\omega_{n+4}^{3}+108\omega_{n+1}^{3}\omega_{n+2}\omega_{n+3}\omega_{n+4}+36\omega_{n+1}^{2}\omega_{n+2}^{2}\omega_{n+3}^{2}\] \[-12\omega_{n}^{2}\omega_{n+1}\omega_{n+3}\omega_{n+4}^{2}-54\omega_{n+1}^{2}\omega_{n+2}^{3}\omega_{n+4}-6\omega_{n}\omega_{n+1}^{2}\omega_{n+3}^{2}\omega_{n+4}-54\omega_{n}\omega_{n+2}^{3}\omega_{n+3}^{2}\] \[-27\omega_{n+1}^{4}\omega_{n+4}^{2}-180\omega_{n}\omega_{n+1}\omega_{n+2}^{2}\omega_{n+3}\omega_{n+4}-27\omega_{n}^{2}\omega_{n+3}^{4}+108\omega_{n}\omega_{n+1}\omega_{n+2}\omega_{n+3}^{3}\] \[-64\omega_{n+1}^{3}\omega_{n+3}^{3}-18\omega_{n}^{2}\omega_{n+2}^{2}\omega_{n+4}^{2}+81\omega_{n}\omega_{n+2}^{4}\omega_{n+4}+54\omega_{n}^{2}\omega_{n+2}\omega_{n+3}^{2}\omega_{n+4}\geqslant 0\]
_is satisfied for every \(n\in\mathbb{N}\). Therefore, let us put \(\omega_{n}:=p_{A}(n)\) and denote the left hand side of the above inequality by \(f_{A}(n)\). Since \(\gcd(300)\neq 1\), we see that the
assumptions from Theorem 3.2 do not hold if \(d=4\). Notwithstanding, one can carry out the appropriate computations in Mathematica [54] and check that \(f_{A}(n)\) is a quasi-polynomial of degree \(12\) with coefficients depending on \(n\,(\mathrm{mod}\;300)\). It might also be verified that the leading coefficient of \(f_{A}(n)\) attains the smallest value whenever \(n\not\equiv 296\,(\mathrm{mod}\;300)\) -- in all of these cases, we have_
\[f_{A}(n)=\frac{n^{12}}{2^{18}\cdot 3^{9}\cdot 5^{12}}+o\left(n^{12}\right).\]
_The above discussion agrees with the plot of \(f_{A}(n)\) for \(1\leqslant n\leqslant 10^{4}\), see Figure 1._
_Hence, we conclude that Theorem 3.2 is not optimal, as well._
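One convenient way to reproduce the long degree-four condition displayed in Example 3.4 is via the discriminant of the quartic Jensen polynomial. The sympy sketch below (an observation and check of ours, not stated in the text; all names are hypothetical) verifies that the left-hand side of that inequality agrees with \(\operatorname{disc}\big(J_{\omega}^{4,n}\big)/256\).

```python
from sympy import symbols, discriminant, expand, simplify

x = symbols('x')
w = symbols('w0:5')          # w[k] stands for omega_{n+k}

# quartic Jensen polynomial J^{4,n}(x) = sum_i binom(4,i) omega_{n+i} x^i
J = w[0] + 4*w[1]*x + 6*w[2]*x**2 + 4*w[3]*x**3 + w[4]*x**4

lhs = discriminant(J, x) / 256
rhs = (54*w[0]*w[1]**2*w[2]*w[4]**2 + w[0]**3*w[4]**3 + 108*w[1]**3*w[2]*w[3]*w[4]
       + 36*w[1]**2*w[2]**2*w[3]**2 - 12*w[0]**2*w[1]*w[3]*w[4]**2 - 54*w[1]**2*w[2]**3*w[4]
       - 6*w[0]*w[1]**2*w[3]**2*w[4] - 54*w[0]*w[2]**3*w[3]**2 - 27*w[1]**4*w[4]**2
       - 180*w[0]*w[1]*w[2]**2*w[3]*w[4] - 27*w[0]**2*w[3]**4 + 108*w[0]*w[1]*w[2]*w[3]**3
       - 64*w[1]**3*w[3]**3 - 18*w[0]**2*w[2]**2*w[4]**2 + 81*w[0]*w[2]**4*w[4]
       + 54*w[0]**2*w[2]*w[3]**2*w[4])
print(simplify(expand(lhs - rhs)))   # expected: 0
```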
At the end of this section, let us exhibit two other examples. Sometimes we cannot conclude the appropriate order \(d\) Turan inequality if the requirements from Theorem 3.1 or Theorem 3.2 are not satisfied.
**Example 3.5**.: _Let us consider a quasi-polynomial-like function of the form_
\[f(n)=\begin{cases}n^{15}+n^{14}+n^{13}+n^{12}+n^{11}+o(n^{11}),&\text{if }n \not\equiv 0\,(\mathrm{mod}\;4),\\ n^{15}+n^{14}+n^{13}+500n^{12}+n^{11}+o(n^{11}),&\text{if }n\equiv 0\,( \mathrm{mod}\;4).\end{cases}\]
_We would like to investigate the order \(3\) Turan inequality. However, it is clear that the assumptions from Theorem 3.1 are not satisfied; and one may calculate that_
\[4\left(f(n)^{2}-f(n-1)f(n+1)\right)\left(f(n+1)^{2}-f(n)f(n+2)\right)\] \[-\left(f(n)f(n+1)-f(n-1)f(n+2)\right)^{2}=-266341n^{54}+o(n^{54}),\]
_whenever \(n\equiv 2\,(\mathrm{mod}\;4)\). Hence, \(f(n)\) can not satisfy the order \(3\) Turan inequality for all sufficiently large values of \(n\), as required._
As an instance for an \(A\)-partition function, we take a finite analogue of the partition function \(p(n)\).
**Example 3.6**.: _For any positive integer \(m\), let us put \(A_{m}:=\{1,2,\ldots,m\}\). We want to consider the third order Turan inequality for \(p_{A_{6}}(n)\) and \(p_{A_{7}}(n)\). In order to make the text more transparent, we set_
\[g_{A_{m}}(n) :=4\left(p_{A_{m}}^{2}(n)-p_{A_{m}}(n-1)p_{A_{m}}(n+1)\right) \left(p_{A_{m}}^{2}(n+1)-p_{A_{m}}(n)p_{A_{m}}(n+2)\right)\] \[\quad-\left(p_{A_{m}}(n)p_{A_{m}}(n+1)-p_{A_{m}}(n-1)p_{A_{m}}(n+2 )\right)^{2}\]
_Thus, \(g_{A_{m}}(n)\) directly corresponds to the order \(3\) Turan inequality. It is clear that the demands from Theorem 3.2 are not true for \(A_{6}\). In fact, it turns out that, for instance,_
\[g_{A_{6}}(n)=-\frac{2069n^{14}}{2^{24}\cdot 3^{12}\cdot 5^{6}}+o\left(n^{14} \right),\]
_whenever \(n\equiv 2\,(\mathrm{mod}\ 60)\). On the other hand, one can check that the equality_
\[g_{A_{7}}(n)=\frac{n^{18}}{2^{28}\cdot 3^{14}\cdot 5^{7}\cdot 7^{4}}+o\left(n^{18}\right)\]
_is valid for every positive integer \(n\), as required. The above discussion agrees with the plots of \(g_{A_{6}}(n)\) and \(g_{A_{7}}(n)\), see Figure 2 and Figure 3, respectively._
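The behaviour described in this example can be reproduced numerically; the sketch below (ours, with hypothetical names) evaluates \(g_{A_{m}}(n)\) for \(m=6,7\) over the range shown in the figures and counts the values of \(n\) at which it is negative.

```python
def pA(A, N):
    # p_A(0..N) via the generating function (1.1)
    p = [0] * (N + 1)
    p[0] = 1
    for a in A:
        for m in range(a, N + 1):
            p[m] += p[m - a]
    return p

def g(p, n):
    # the quantity attached to the third-order Turan inequality at n
    return (4 * (p[n] ** 2 - p[n - 1] * p[n + 1]) * (p[n + 1] ** 2 - p[n] * p[n + 2])
            - (p[n] * p[n + 1] - p[n - 1] * p[n + 2]) ** 2)

N = 10_000
p6, p7 = pA(range(1, 7), N + 2), pA(range(1, 8), N + 2)
print(sum(1 for n in range(1, N + 1) if g(p6, n) < 0))   # negative values of g_{A_6}
print(sum(1 for n in range(1, N + 1) if g(p7, n) < 0))   # negative values of g_{A_7}
```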
## 4. The Laguerre inequalities for quasi-polynomial-like functions
Now, we focus on the Laguerre inequalities for quasi-polynomial-like functions. As it was mentioned at the end of Section 2, we need to use a few binomial coefficient identities to deal with the issue.
The first of them arises from comparing the coefficients of the expansions of both \((1-z)^{s}(1+z)^{s}\) and \((1-z^{2})^{s}\) (see, [29, Section 5.4]).
**Lemma 4.1**.: _Let \(s\in\mathbb{N}\) be fixed. Then for every even integer \(0\leqslant n\leqslant s\), we have_
\[\sum_{j=0}^{n}(-1)^{j}\binom{s}{j}\binom{s}{n-j}=(-1)^{\frac{n}{2}}\binom{s}{ n/2}.\]
To present the second one, we need to recall that the Stirling number of the second kind \(\genfrac{\{}{\}}{0.0pt}{}{n}{k}\) enumerates the number of ways to partition a set of \(n\) labelled objects into \(k\) non-empty unlabelled subsets. Equivalently, it is the number of different equivalence relations with exactly \(k\) equivalence classes that may be defined on an \(n\) element set. It is worth noting that the following identities
\[\genfrac{\{}{\}}{0.0pt}{}{n}{1}=1,\quad\genfrac{\{}{\}}{0.0pt}{}{n}{n}=1,\quad\genfrac{\{}{\}}{0.0pt}{}{n}{0}=0\quad\text{and}\quad\genfrac{\{}{\}}{0.0pt}{}{0}{0}=1\]
hold for every positive integer \(n\) as well as \(\genfrac{\{}{\}}{0.0pt}{}{m}{k}=0\) whenever \(0\leqslant m<k\). The succeeding lemma, together with the general introduction to the Stirling numbers, might be found in [29, Section 6.1].
**Lemma 4.2**.: _Let \(u\) and \(v\) be arbitrary non-negative integers. Then, we have_
\[u!\genfrac{\{}{\}}{0.0pt}{}{v}{u}=\sum_{k=0}^{u}(-1)^{u-k}\binom{u}{k}k^{v}.\]
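Both lemmas are easy to spot-check; the short Python sketch below (ours) verifies them for sample parameters, using a simple recursive implementation of the Stirling numbers of the second kind.

```python
from math import comb, factorial

def stirling2(v, u):
    # Stirling number of the second kind via the recurrence S(v,u) = u S(v-1,u) + S(v-1,u-1)
    if u == 0:
        return 1 if v == 0 else 0
    if v == 0:
        return 0
    return u * stirling2(v - 1, u) + stirling2(v - 1, u - 1)

# Lemma 4.1 with s = 9 and even n = 6
s, n = 9, 6
print(sum((-1) ** j * comb(s, j) * comb(s, n - j) for j in range(n + 1))
      == (-1) ** (n // 2) * comb(s, n // 2))

# Lemma 4.2 with u = 4 and v = 7
u, v = 4, 7
print(factorial(u) * stirling2(v, u)
      == sum((-1) ** (u - k) * comb(u, k) * k ** v for k in range(u + 1)))
```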
Now, we are ready to state and prove the main result of this section.
**Theorem 4.3**.: _Let \(f(n)\) be a quasi-polynomial-like function of the form_
\[f(n)=c_{l}n^{l}+c_{l-1}n^{l-1}+\cdots+c_{l-2d}n^{l-2d}+o(n^{l-2d}),\]
_for some non-negative integer \(d\) such that \(2d\leqslant l\). Then, for every \(0\leqslant j\leqslant d\) the sequence \((f(n))_{n=0}^{\infty}\) satisfies the Laguerre inequality of order \(j\) for all but finitely many values of \(n\). In particular, we have that_
\[\sum_{i=0}^{2d}(-1)^{i+d}\binom{2d}{i}f(n+i)f(n+2d-i)=(2d)!\binom{l}{d}c_{l}^{2}n^{2(l-d)}+o\left(n^{2(l-d)}\right).\]
Proof.: Let us fix a quasi-polynomial-like function \(f(x)\) as in the statement, and expand the left hand side of the inequality (1.4) with \(\omega_{n}=f(n)\). We have that
\[\sum_{j=0}^{2d}(-1)^{j+d}\binom{2d}{j}f(n+j)f(n+2d-j)\] \[= \sum_{j=0}^{2d}(-1)^{j+d}\binom{2d}{j}\left[c_{l}(n+j)^{l}+\cdots +c_{l-2d}(n+j)^{l-2d}+o\left((n+j)^{l-2d}\right)\right]\] \[\times\left[c_{l}(n+2d-j)^{l}+\cdots+c_{l-2d}(n+2d-j)^{l-2d}+o \left((n+2d-j)^{l-2d}\right)\right]\] \[= \sum_{j=0}^{2d}(-1)^{j+d}\binom{2d}{j}\left[c_{l}\sum_{i=0}^{2d} \binom{l}{i}j^{i}n^{l-i}+\cdots+c_{l-2d}n^{l-2d}+o(n^{l-2d})\right]\] \[\times\left[c_{l}\sum_{i=0}^{2d}(-1)^{i}\binom{l}{i}j^{i}(n+2d)^{ l-i}+\cdots+c_{l-2d}(n+2d)^{l-2d}+o(n^{l-2d})\right].\]
Since we are interested in the asymptotic behavior of the above expression, we need to determine the leading coefficient of its polynomial part. It is not difficult to notice that whenever we multiply a summand \(\gamma_{i_{0},k_{0}}j^{k_{0}}n^{l-i_{0}}\) from the first square bracket with a summand \(\delta_{i_{1},k_{1}}j^{k_{1}}(n+2d)^{l-i_{1}}\) from the second one (where the coefficients \(\gamma_{i_{0},k_{0}}\) and \(\delta_{i_{1},k_{1}}\) are independent of both \(j\) and \(n\)), we can obtain at most \(j\) to the power \(i_{0}+i_{1}\geqslant k_{0}+k_{1}\). More precisely, we get an expression of the form
\[\sum_{j=0}^{2d}(-1)^{j+d}\binom{2d}{j}\gamma_{i_{0},k_{0}}\delta_{i_{1},k_{1} }n^{l-i_{0}}(n+2d)^{l-i_{1}}j^{k_{0}+k_{1}}, \tag{4.1}\]
where \(0\leqslant k_{0}+k_{1}\leqslant i_{0}+i_{1}\). Therefore if \(i_{0}+i_{1}<2d\), then (4.1) might be rewritten as
\[(-1)^{d}\gamma_{i_{0},k_{0}}\delta_{i_{1},k_{1}}n^{l-i_{0}}(n+2d)^{l-i_{1}} \sum_{j=0}^{2d}(-1)^{2d-j}\binom{2d}{j}j^{k_{0}+k_{1}}=0,\]
where the equality follows from Lemma 4.2 with \(k=j\), \(u=2d\) and \(v=k_{0}+k_{1}\). Hence, our task boils down to finding the coefficient of \(n^{2(l-d)}\). Repeating the above
discussion, one can observe that the only possible non-zero term takes the form
\[\sum_{j=0}^{2d}(-1)^{j+d}\binom{2d}{j}c_{l}^{2}\sum_{i=0}^{2d}(-1)^{i }\binom{l}{i}\binom{l}{2d-i}j^{2d}n^{2(l-d)}\] \[=(-1)^{d}c_{l}^{2}\times\sum_{i=0}^{2d}(-1)^{i}\binom{l}{i}\binom{l }{2d-i}\times\sum_{j=0}^{2d}(-1)^{2d-j}\binom{2d}{j}j^{2d}.\]
Now, Lemma 4.1 asserts that
\[\sum_{i=0}^{2d}(-1)^{i}\binom{l}{i}\binom{l}{2d-i}=(-1)^{d}\binom{l}{d}.\]
On the other hand, Lemma 4.2 maintains that
\[\sum_{j=0}^{2d}(-1)^{2d-j}\binom{2d}{j}j^{2d}=(2d)!\left\{\begin{matrix}2d\\ 2d\end{matrix}\right\}=(2d)!.\]
In conclusion, we obtain that
\[\sum_{j=0}^{2d}(-1)^{j+d}\binom{2d}{j}f(n+j)f(n+2d-j)=(2d)!\left(\begin{matrix} l\\ d\end{matrix}\right)c_{l}^{2}n^{2(l-d)}+o\left(n^{2(l-d)}\right),\]
which was to be demonstrated.
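The leading term isolated in the proof can be confirmed symbolically for concrete polynomials; the sketch below (ours, with an arbitrary sample polynomial of our choosing) compares the degree and leading coefficient of the left-hand side with the prediction \((2d)!\binom{l}{d}c_{l}^{2}n^{2(l-d)}\).

```python
from sympy import symbols, binomial, factorial, expand, Poly

n = symbols('n')
l, d = 7, 2
f = n**l + 3*n**(l - 1) + n**(l - 2)     # sample polynomial with c_l = 1

lhs = expand(sum((-1) ** (i + d) * binomial(2 * d, i)
                 * f.subs(n, n + i) * f.subs(n, n + 2 * d - i)
                 for i in range(2 * d + 1)))
P = Poly(lhs, n)
print(P.degree(), P.LC())                               # observed degree and leading coefficient
print(2 * (l - d), factorial(2 * d) * binomial(l, d))   # values predicted by Theorem 4.3
```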
As an immediate consequence of Theorem 4.3, we get an analogue characterization to that one from Theorem 3.2.
**Theorem 4.4**.: _Let \(A\) be a finite multiset (or set) of positive integers with \(\#A=k\), and let \(1\leqslant 2d<k\) be fixed. Suppose further that \(\gcd B=1\) for every \((k-2d)\)-multisubset \(B\subset A\). Then, for each \(1\leqslant j\leqslant d\) the sequence \((p_{A}(n))_{n=0}^{\infty}\) satisfies the Laguerre inequality of order \(j\) for all but finitely many values of \(n\)._
Proof.: The criterion easily follows from both Theorem 2.1 and Theorem 4.3.
Analogously to Section 3, we present a few examples showing that Theorem 4.3, as well as Theorem 4.4, does not deliver a necessary condition for the Laguerre inequality of order \(d\) for \(d\geqslant 2\).
**Example 4.5**.: _Let us assume that \(f(n)\) is a quasi-polynomial-like function of the form_
\[f(n)=n^{10}+n^{9}+n^{8}+n^{7}+(n\:(\mathrm{mod}\:5))\cdot n^{6}+o(n^{6}).\]
_It is not difficult to see that the assumption from Theorem 4.3 for \(d=2\) does not hold. Nevertheless, one can calculate that_
\[3f(n+2)^{2}-4f(n+1)f(n+3)+f(n)f(n+4)\geqslant 525n^{16}+o(n^{16})\]
_for all sufficiently large values of \(n\), and observe that the second Laguerre inequality is asymptotically satisfied for \(f(n)\). Thus, Theorem 4.3 is not an optimal criterion._
As a counterexample for Theorem 4.4, we exhibit the following.
**Example 4.6**.: _We put \(A=\{1,1,1,1,300\}\) and consider the order \(2\) Laguerre inequality. It is clear that the assumptions from Theorem 4.4 are not satisfied for \(d=2\). Notwithstanding, if we set_
\[h_{A}(n):=3p_{A}^{2}(n+2)-4p_{A}(n+1)p_{A}(n+3)+p_{A}(n)p_{A}(n+4),\]
_then it turns out that_
\[h_{A}(n)\geqslant\frac{n^{4}}{2^{3}\cdot 3^{3}\cdot 5^{4}}+o(n^{4})\]
_with the equality whenever \(n\not\equiv 297\,(\mathrm{mod}\ 300)\), which agrees with Figure 4._
_In conclusion, we see that Theorem 4.4 is not an optimal criterion, as well._
At the end of this section, we present an example showing that, in general, it might be difficult to derive an optimal criterion for the Laguerre inequality of order \(d\geqslant 2\) for quasi-polynomial-like functions.
**Example 4.7**.: _Let us consider the \(A_{m}\)-partition function defined in Example 3.6. For instance, we may examine the Laguerre inequality of order \(2\) for both \(p_{A_{8}}(n)\) and \(p_{A_{9}}(n)\). For the sake of clarity, let us put_
\[h_{A_{m}}(n):=3p_{A_{m}}^{2}(n+2)-4p_{A_{m}}(n+1)p_{A_{m}}(n+3)+p_{A_{m}}(n)p_{ A_{m}}(n+4).\]
_In other words, \(h_{A_{m}}(n)\) corresponds to the second order Laguerre inequality for \(p_{A_{m}}(n)\). We see that the assumptions from Theorem 4.4 do not hold for \(p_{A_{8}}(n)\) and \(d=2\). Moreover, one can determine that_
\[h_{A_{8}}(n)=-\frac{349n^{10}}{2^{20}\cdot 3^{6}\cdot 5^{4}\cdot 7^{3}}+o\left(n ^{10}\right),\]
_whenever \(n\equiv 0,2,\ldots,838\,(\mathrm{mod}\ 840)\). In the case of \(h_{A_{9}}(n)\), on the other hand, we get that the equality_
\[h_{A_{9}}(n)=\frac{n^{12}}{2^{24}\cdot 3^{11}\cdot 5^{4}\cdot 7^{3}}+o\left(n ^{12}\right)\]
_holds for each positive integer \(n\), which agrees with Theorem 4.4. Figure 5 and Figure 6 exhibit the plots of \(h_{A_{8}}(n)\) and \(h_{A_{9}}(n)\), respectively._
_The above discussion asserts that it might be difficult to find out an easy description of all quasi-polynomial-like functions which (asymptotically) fulfill the Laguerre inequality of order \(d\) for any \(d\geqslant 2\)._
Figure 4. The values of \(h_{A}(n)\) for \(A=\{1,1,1,1,300\}\) and \(1\leqslant n\leqslant 10^{4}\).
## 5. Concluding remarks
It is quite unfortunate that neither Theorem 3.1 nor Theorem 3.2 delivers necessary conditions for the order \(d\) Turan inequality. Analogously, neither Theorem 4.3 nor Theorem 4.4 contains necessary conditions for the Laguerre inequality of order \(d\). It is worth pointing out that we have such a result in the case of the \(r\)-log-concavity problem for quasi-polynomial-like functions (and, in particular, \(A\)-partition functions) [27]. Recall that a sequence of real numbers \(\omega=\left(w_{i}\right)_{i=0}^{\infty}\) is called (asymptotically) \(r\)-log-concave for \(r\in\mathbb{N}_{+}\), if there exists an integer \(N\) such that all terms of the sequences
\[\widehat{\mathcal{L}}\omega,\widehat{\mathcal{L}}^{2}\omega,\ldots,\widehat{ \mathcal{L}}^{r}\omega\]
are positive for every \(i\geqslant N\), where
\[\widehat{\mathcal{L}}\omega=\left(w_{i+1}^{2}-w_{i}w_{i+2}\right)_{i=0}^{ \infty}\text{ and }\widehat{\mathcal{L}}^{k}\omega=\widehat{\mathcal{L}}\left(\widehat{ \mathcal{L}}^{k-1}\omega\right)\]
for \(k\in\{2,3,\ldots,r\}\). We have the following characterization for that issue.
**Theorem 5.1** (Gajdzica).: _Let \(l\) and \(r\) be arbitrary positive integers such that \(l\geqslant 2r\). Suppose further that we have_
\[f(n)=a_{l}(n)n^{l}+a_{l-1}(n)n^{l-1}+\cdots+a_{l-2r}(n)n^{l-2r}+o\left(n^{l-2r }\right),\]
_where the coefficients \(a_{l-2r}(n),\ldots,a_{l}(n)\) might depend on the residue class of \(n\left(\mathrm{mod}\ M\right)\) for some positive integer \(M\geqslant 2\). Then the sequence \(\left(f(n)\right)_{n=0}^{\infty}\) is asymptotically \(r\)-log-concave if and only if all the numbers \(a_{l-2r}(n),\ldots,a_{l}(n)\) are independent of the residue class of \(n\left(\mathrm{mod}\ M\right)\)._
Unfortunately, the analogous descriptions are impossible for both the higher order Turan inequalities and the Laguerre inequalities. For the former, if we assume that
\[f(n)=c_{l}n^{l}+c_{l-1}n^{l-1}+\cdots+c_{l-d+1}n^{l-d+1}+o(n^{l-d+1})\]
for some \(1\leqslant d\leqslant l\), then the leading coefficient of the (quasi-polynomial-like) function which corresponds to the \(d\)th Turan inequality may heavily depend on the residue class of \(n\) modulo some positive integer, see Example 3.4. To visualize the issue more accurately, let us consider the third order Turan inequality for
\[f(n)=c_{l}n^{l}+c_{l-1}n^{l-1}+c_{l-2}n^{l-2}+c_{l-3}(n)n^{l-3}+o(n^{l-3}),\]
where \(l\geqslant 3\) and \(c_{l-3}(n)\) depends on the residue class of \(n\pmod{M}\) for some \(M\geqslant 2\). It is tiresome but elementary to show that we have the following:
\[4\left(f(n)^{2}-f(n-1)f(n+1)\right)\left(f(n+1)^{2}-f(n)f(n+2)\right)-\left(f(n)f(n+1)-f(n-1)f(n+2)\right)^{2}\] \[=\big{[}-6c_{l-3}(n-1)c_{l-3}(n+1)+2c_{l-3}(n-1)c_{l-3}(n+2)+4lc_{l}c_{l-3}(n-1)-c_{l-3}^{2}(n-1)\] \[\quad+6c_{l-3}(n-1)c_{l-3}(n)+6c_{l-3}(n+1)c_{l-3}(n+2)+12lc_{l}c_{l-3}(n+1)-9c_{l-3}^{2}(n+1)\] \[\quad+18c_{l-3}(n)c_{l-3}(n+1)-4lc_{l}c_{l-3}(n+2)-c_{l-3}^{2}(n+2)-6c_{l-3}(n)c_{l-3}(n+2)\] \[\quad+4l^{3}c_{l}^{2}-4l^{2}c_{l}^{2}-12lc_{l}c_{l-3}(n)-9c_{l-3}^{2}(n)\big{]}c_{l}^{2}n^{4l-6}+o(n^{4l-6}).\]
Thus, it is easy to see that the leading coefficient of that expression intensely depends on the residue class of \(n\pmod{M}\), which coincides with our discussion above.
A parallel reasoning shows that the same problem occurs when we deal with the Laguerre inequality of order \(d\) for \(d\geqslant 2\), as Example 4.6 indicates.
At the end of the manuscript, we state a few open problems. The first of them encourages us to deal with the higher order Turan inequalities for some particular \(A\)-partition functions.
**Problem 5.2**.: _Fix a set (or multiset) \(A\) of positive integers and investigate the higher order Turan inequalities for the \(A\)-partition function._
**Remark 5.3**.: For instance, if we set \(A=A_{m}=\{1,2,\ldots,m\}\) in Problem 5.2, then it is known [26] that the second order Turan inequality for \(p_{A_{m}}(n)\) begins to hold for \(m=5\). In that case we have that
\[p_{A_{5}}^{2}(n)\geqslant p_{A_{5}}(n-1)p_{A_{5}}(n+1)\]
for all \(n>37\).
One can extend the above and deal with the problem for the \(A_{m}^{(l)}\)-partition function, where \(A_{m}^{(l)}=\{1^{l},2^{l},\ldots,m^{l}\}\) and \(m\in\mathbb{N}_{+}\cup\{\infty\}\). The case of \(m=\infty\) and \(l=1\) has been investigated by Griffin et al. [28] and Larson and Wagner [35]. For more information about the general setting (when \(m=\infty\)), we refer the reader to Ulas' paper [50].
We also hope that there is a chance to discover a more efficient criterion than Theorem 3.1, and state the following.
**Problem 5.4**.: _Let \(d\geqslant 3\) be arbitrary. Find a more effective criterion than Theorem 3.1 (or Theorem 3.2) for the order \(d\) Turan inequality for quasi-polynomial-like functions. Alternatively, do that for small values of the parameter \(d\)._
Finally, we formulate the analogues of both Problem 5.2 and Problem 5.4 in the context of Laguerre inequalities for quasi-polynomial-like functions.
**Problem 5.5**.: _Fix a set (or multiset) \(A\) of positive integers and investigate the Laguerre inequalities for the \(A\)-partition function._
**Problem 5.6**.: _Let \(d\geqslant 2\) be arbitrary. Derive a more efficient criterion than Theorem 4.3 (or Theorem 4.4) for the \(d\)th Laguerre inequality for quasi-polynomial-like functions. Alternatively, do that for small values of the parameter \(d\)._
## Acknowledgements
I wish to express my sincere thanks to Piotr Miska and Maciej Ulas for their time and helpful suggestions. I am also grateful to Ian Wagner for his additional comments. This research was funded by both a grant of the National Science Centre (NCN), Poland, no. UMO-2019/34/E/ST1/00094 and a grant from the Faculty of Mathematics and Computer Science under the Strategic Program Excellence Initiative at the Jagiellonian University in Krakow. | This paper deals with both the higher order Turán inequalities and the Laguerre inequalities for quasi-polynomial-like functions, that is, expressions of the form $f(n)=c_l(n)n^l+\cdots+c_d(n)n^d+o(n^d)$, where $d,l\in\mathbb{N}$ and $d\leqslant l$. A natural example of such a function is the $A$-partition function $p_{A}(n)$, which enumerates the number of partitions of $n$ with parts in the fixed finite multiset $A=\{a_1,a_2,\ldots,a_k\}$ of positive integers. For an arbitrary positive integer $d$, we present efficient criteria for both the order $d$ Turán inequality and the $d$th Laguerre inequality for quasi-polynomial-like functions. In particular, we apply these results to deduce non-trivial analogues for $p_A(n)$. |
2301.05937 | A tensor SVD-like decomposition based on the semi-tensor product of
tensors | In this paper, we define a semi-tensor product for third-order tensors. Based
on this definition, we present a new type of tensor decomposition strategy and
give the specific algorithm. This decomposition strategy actually generalizes
the tensor SVD based on semi-tensor product. Due to the characteristic of
semi-tensor product for compressing the data scale, we can therefore achieve
data compression in this way. Numerical comparisons are given to show the
advantages of this decomposition. | Zhuo-Ran Chen, Seak-Weng Vong, Ze-Jia Xie | 2023-01-14T15:54:37 | http://arxiv.org/abs/2301.05937v1 | # A tensor SVD-like decomposition based on the semi-tensor product of tensors
###### Abstract
In this paper, we define a semi-tensor product for third-order tensors. Based on this definition, we present a new type of tensor decomposition strategy and give the specific algorithm. This decomposition strategy actually generalizes the tensor SVD based on semi-tensor product. Due to the characteristic of semi-tensor product for compressing the data scale, we can therefore achieve data compression in this way. Numerical comparisons are given to show the advantages of this decomposition.
keywords: Semi-tensor product of tensors, Tensor decomposition, Singular value decomposition, Third-order tensors. MSC: 15A69, 65F99
## 1 Introduction
Nowadays, many fields need to collect and process large amounts of data, such as image and video processing, medical treatment and engineering. In many cases the data are multidimensional, for instance in the storage of color pictures, video clips, etc. Two-dimensional matrices are not sufficient for analyzing and processing such data, so tensors are introduced to analyze multidimensional data. Consequently, the storage and decomposition of tensors is a very important research topic. In practical applications, we often need to store and process huge amounts of data, so reducing the required storage space and improving the speed of computation is also a very important research goal.
CANDECOMP/PARAFAC (CP) [3; 8] and Tucker [23] decompositions are two well-known tensor decomposition strategies. They are higher-order extensions of the matrix singular value decomposition (SVD). The CP model decomposes the target tensor into a sum of rank-one tensors. The Tucker decomposition applies the n-mode multiplication of tensors to decompose a target tensor into a core tensor multiplied by a matrix along each mode. In fact, many other tensor decompositions [17] have been developed, including INDSCAL [3], PARAFAC2 [9], CANDELINC [4], DEDICOM [10], PARATUCK2 [11], and so on. In 2008, Kilmer, Martin, and Perrone [15] presented the so-called t-product and developed a new decomposition based on this t-product. Such a decomposition writes a third-order tensor as a product of three other third-order tensors. In order to define this new notion, they first give a new definition of tensor-tensor multiplication. This decomposition, called T-SVD, is analogous to the SVD in the matrix case, and it can be used in the field of data compression.
In this paper, we define a new tensor multiplication for third-order tensors which extends the matrix semi-tensor product to tensors. Besides, we introduce a new tensor decomposition algorithm based on the semi-tensor product which decomposes the target tensor into a product of three other tensors. We first give the matrix SVD based on the semi-tensor product, and then generalize it to tensors. This decomposition reduces the storage of data and improves the speed of computation to a certain extent, especially for tensors with huge amounts of data.
Our paper is organized as follows. In Section 2, we introduce the notation and preliminaries that we need to use throughout the paper. In Section 3, we describe the new semi-tensor product that we define in detail, including some useful properties, lemmas, and theorems. In Section 4, we use the tensor product introduced in Section 3 to decompose a third-order tensor, so that the target third-order tensor can be written as a product of three tensors. In Section 5, we give an application in image processing using the algorithm introduced in this paper. Conclusions are made in Section 6.
## 2 Notations and Preliminaries
In this section, we will give a summary of notations and basic preliminaries that we will use. The arrangement of data along a single direction is called a one-way array. A tensor is a multi-way (multi-dimensional) array representation of data, which is an extension of a matrix. A tensor represented by a \(p\)-way array is a tensor of order \(p\). An order-\(p\) tensor \(\mathcal{A}\) can be written as \(\mathcal{A}=\left(a_{i_{1}i_{2}\dots i_{p}}\right)\in\mathbb{C}^{n_{1}\times n_{2}\times\dots\times n_{p}}\)[12], where \(a_{i_{1}i_{2}\dots i_{p}}\) denotes the \((i_{1}i_{2}\dots i_{p})\)-th entry of \(\mathcal{A}\). Thus, a matrix can be considered as a second-order tensor, and a vector is a first-order tensor.
For a matrix \(A\in\mathbb{C}^{n_{1}\times n_{2}}\), we use \(A^{T}\) and \(A^{H}\) to denote the transpose and conjugate transpose of \(A\), respectively. For a third-order tensor \(\mathcal{A}=(a_{ijk})\in\mathbb{C}^{n_{1}\times n_{2}\times n_{3}}\), we can use the set of matrices to denote its horizontal, lateral, and frontal slices. The \(i\)-th horizontal slice, \(j\)-th lateral slice, and \(k\)-th frontal slice of the third-order tensor \(\mathcal{A}\) can be denoted by \(\mathcal{A}\left(i,:,:\right)\), \(\mathcal{A}\left(:,j,:\right)\), and \(\mathcal{A}\left(:,:,k\right)\), respectively. Each slice is a matrix actually. The slice graph of a third-order tensor is shown in Figure 1.
In fact, if \(\mathcal{A}\in\mathbb{C}^{n_{1}\times n_{2}\times n_{3}}\) is a third-order tensor, then it has \(n_{3}\) frontal slices, and each frontal slice is an \(n_{1}\times n_{2}\) matrix. These \(n_{3}\) frontal slices of \(\mathcal{A}\) can be represented by \(A_{1}=\mathcal{A}\left(:,:,1\right)\), \(A_{2}=\mathcal{A}\left(:,:,2\right)\),..., \(A_{n_{3}}=\mathcal{A}\left(:,:,n_{3}\right)\)[16]. We give a \(2\times 2\times 2\) example in Figure 2.
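For readers who wish to experiment, the following numpy sketch (our illustration; the array values are arbitrary) builds a \(2\times 2\times 2\) tensor and extracts its slices; the indexing mirrors the notation \(\mathcal{A}(i,:,:)\), \(\mathcal{A}(:,j,:)\) and \(\mathcal{A}(:,:,k)\) used above, with zero-based indices in Python.

```python
import numpy as np

# a 2 x 2 x 2 third-order tensor with entries 1,...,8
A = np.arange(1, 9).reshape(2, 2, 2)

A1 = A[:, :, 0]     # first frontal slice,   A(:,:,1)
A2 = A[:, :, 1]     # second frontal slice,  A(:,:,2)
H1 = A[0, :, :]     # first horizontal slice, A(1,:,:)
L1 = A[:, 0, :]     # first lateral slice,    A(:,1,:)
print(A1, A2, H1, L1, sep="\n\n")
```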
The semi-tensor product of matrices was proposed by Cheng [5]. As we know, the multiplication of two matrices requires a strict matching condition on their dimensions. The semi-tensor product relaxes this requirement: it only requires the relevant dimensions to be integer multiples of each other. See Definition 2.2 for more detail. The semi-tensor product includes the left semi-tensor product and the right semi-tensor product. We only discuss the left semi-tensor product in this paper.
**Definition 2.1**.: _(Left Semi-tensor Product of Vectors [5]) Let \(\mathbf{x}=\left[x_{1},x_{2},\dots,x_{p}\right]^{T}\in\mathbb{C}^{p}\) and \(\mathbf{y}=\left[y_{1},y_{2},\dots,y_{q}\right]^{T}\in\mathbb{C}^{q}\). Then we define left semi-tensor product of two vectors \(\mathbf{x}^{T}\) and \(\mathbf{y}\) in the
following. \((i)\) If \(p=nq,n\in\mathbb{Z}^{+}\), then we divide the row vector \(\mathbf{x}^{T}\) into \(q\) blocks: \(\mathbf{x}^{T}_{1},\mathbf{x}^{T}_{2},\ldots,\mathbf{x}^{T}_{q}\). Each block \(\mathbf{x}^{T}_{i}\) is an \(n\)-dimensional row vector; it is multiplied by \(y_{i}\), and the resulting vectors are summed. In this way we get the left semi-tensor product of \(\mathbf{x}^{T}\) and \(\mathbf{y}\), which is an \(n\)-dimensional row vector. It can be represented as
\[\mathbf{x}^{T}\ltimes\mathbf{y}=\sum_{i=1}^{q}\mathbf{x}^{T}_{i}y_{i}\in \mathbb{C}^{1\times n}.\]
_\((ii)\) If \(p=\frac{1}{n}q,n\in\mathbb{Z}^{+}\), then we divide the column vector \(\mathbf{y}\) into \(p\) blocks \(\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{p}\), each of which is an \(n\)-dimensional column vector; each block \(\mathbf{y}_{i}\) is multiplied by \(x_{i}\) and the results are summed. The left semi-tensor product of \(\mathbf{x}^{T}\) and \(\mathbf{y}\) is then the \(n\)-dimensional column vector_

\[\mathbf{x}^{T}\ltimes\mathbf{y}=\sum_{i=1}^{p}x_{i}\mathbf{y}_{i}\in\mathbb{C}^{n}.\]

Figure 1: Slice maps of an \(n_{1}\times n_{2}\times n_{3}\) third-order tensor \(\mathcal{A}\).
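To make Definition 2.1 concrete, the following MATLAB snippet (our own illustration; all variable names are ours) evaluates both cases by block-wise summation; by Lemma 2.2 below, case \((i)\) equals \(\mathbf{x}^{T}(\mathbf{y}\otimes I_{n})\).

```
% Case (i): p = n*q, with p = 4, q = 2, n = 2
x = [1; 2; 3; 4];  y = [10; 20];
Xb = reshape(x, 2, 2);        % column j holds the j-th block of x (length n = 2)
row_result = (Xb * y)'        % sum_j x_j^T * y_j = [70 100], a 1-by-2 row vector

% Case (ii): p = (1/n)*q, with p = 2, q = 4, n = 2
u = [1; 2];  v = [10; 20; 30; 40];
Vb = reshape(v, 2, 2);        % column j holds the j-th block of v (length n = 2)
col_result = Vb * u           % sum_j u_j * v_j = [70; 100], a 2-by-1 column vector
```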
**Definition 2.2**.: _(Left Semi-tensor Product of Matrices [5]) Let \(A\in\mathbb{C}^{m\times n}\), \(B\in\mathbb{C}^{s\times t}\). If \(n=ks\), or \(n=\frac{1}{k}s\), \(k\in\mathbb{Z}^{+}\), then \(D=A\ltimes B\) is a block matrix which has \(m\times t\) blocks, called the left semi-tensor product of \(A\) and \(B\). Each block can be represented as_
\[D^{ij}=A^{i}\ltimes B_{j},\quad i=1,2,\ldots,m,j=1,2,\ldots,t,\]
_where \(A^{i}\) is the i-th row of \(A\), \(B_{j}\) is the j-th column of \(B\)._
Definition 2.2 is well-defined only when \(n\) and \(s\) are integer multiples of each other. If \(n=ks\), then \(D=A\ltimes B\) is an \(m\times kt\) matrix; if \(n=\frac{1}{k}s\), then \(D=A\ltimes B\) is a \(km\times t\) matrix; if \(n=s\), the left semi-tensor product reduces to the ordinary matrix product.
**Definition 2.3**.: _(Kronecker Product of Matrices [2]) Suppose \(A\) is an \(m\times n\) matrix, and \(B\) is an \(s\times t\) matrix. Then \(C=A\otimes B\) is an \(ms\times nt\) matrix, called the Kronecker product of \(A\) and \(B\). It is defined as_
\[A\otimes B=\begin{bmatrix}a_{11}B&a_{12}B&\cdots&a_{1n}B\\ a_{21}B&a_{22}B&\cdots&a_{2n}B\\ \vdots&\vdots&\ddots&\vdots\\ a_{m1}B&a_{m2}B&\cdots&a_{mn}B\end{bmatrix}.\]
We give some basic properties of the matrix Kronecker product in the following.
**Lemma 2.1**.: _([2, 19]) Suppose \(A,B,C,D\) are four matrices of proper dimensions, and \(\alpha\) is a scalar. We have_
\((i)\)_\(AB\otimes CD=(A\otimes C)(B\otimes D);\)_
\((ii)\)_\(A\otimes(B\pm C)=(A\otimes B)\pm(A\otimes C)\), \((B\pm C)\otimes A=B\otimes A\pm C\otimes A;\)_
\((iii)\)_\((A\otimes B)^{T}=A^{T}\otimes B^{T}\), \((A\otimes B)^{H}=A^{H}\otimes B^{H};\)_
\((iv)\)_\((A\otimes B)^{-1}=A^{-1}\otimes B^{-1}\) if \(A\) and \(B\) are invertible;_
\((v)\)_\((A\otimes B)\otimes C=A\otimes(B\otimes C);\)_
\((vi)\)_\((\alpha A)\otimes B=A\otimes(\alpha B)=\alpha\left(A\otimes B\right).\)_
We use \(I_{n}\) to denote the \(n\times n\) identity matrix; the semi-tensor product in Definition 2.2 can then be represented by the Kronecker product as follows.
**Lemma 2.2**.: _([6]) Let \(A\) be an \(m\times n\) matrix, and \(B\) be an \(s\times t\) matrix. Suppose that they have proper dimensions such that the semi-tensor product is well-defined. Then we have \((i)\) If \(n=ks\), \(k\in\mathbb{Z}^{+}\), then \(A\ltimes B=A(B\otimes I_{k})\) is an \(m\times kt\) matrix; \((ii)\)If \(n=\frac{1}{k}s\), \(k\in\mathbb{Z}^{+}\), then \(A\ltimes B=(A\otimes I_{k})B\) is a \(km\times t\) matrix._
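In practice, Lemma 2.2 gives a direct way to compute the left semi-tensor product via the Kronecker product. The helper function below is our own sketch (the name `lstp` is ours); it assumes the dimensions satisfy the multiple condition of Definition 2.2.

```
function D = lstp(A, B)
% Left semi-tensor product of matrices via Lemma 2.2.
n = size(A, 2);
s = size(B, 1);
if mod(n, s) == 0            % n = k*s  ->  D = A (B kron I_k), of size m x kt
    k = n / s;
    D = A * kron(B, eye(k));
elseif mod(s, n) == 0        % s = k*n  ->  D = (A kron I_k) B, of size km x t
    k = s / n;
    D = kron(A, eye(k)) * B;
else
    error('Dimensions must be integer multiples of each other.');
end
end
```

For example, `lstp(ones(2,4), ones(2,3))` returns a \(2\times 6\) matrix, in agreement with Definition 2.2.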
**Lemma 2.3**.: _([6]) Suppose \(A\), \(B\), \(C\) are matrices which have proper dimensions such that the \(\ltimes\) is well-defined. Then we have_
\[(A\ltimes B)\ltimes C=A\ltimes(B\ltimes C).\]
Lemma 2.3 is called the associative law of semi-tensor product of matrices.
We know that if \(\mathbf{v}=\left[v_{\mathit{0}},v_{\mathit{1}},v_{\mathit{2}},v_{\mathit{3}} \right]^{T}\) is a column vector, then
\[\mathrm{circ}\left(\mathbf{v}\right)=\begin{bmatrix}v_{\mathit{0}}&v_{ \mathit{3}}&v_{\mathit{2}}&v_{\mathit{1}}\\ v_{\mathit{1}}&v_{\mathit{0}}&v_{\mathit{3}}&v_{\mathit{2}}\\ v_{\mathit{2}}&v_{\mathit{1}}&v_{\mathit{0}}&v_{\mathit{3}}\\ v_{\mathit{3}}&v_{\mathit{2}}&v_{\mathit{1}}&v_{\mathit{0}}\end{bmatrix}\]
is a circulant matrix. Note that the matrix is determined by its first column, namely the vector \(\mathbf{v}\).
Suppose \(\mathbf{v}\) is an \(n\times 1\) column vector; then \(\mathrm{circ}\left(\mathbf{v}\right)\) is an \(n\times n\) circulant matrix, and we can use the normalized Fourier transform [7] to change this circulant matrix into a diagonal matrix, that is, by multiplying the circulant matrix by the \(n\times n\) Fourier matrix on the left and by its conjugate transpose on the right. If \(F_{n}\) is an \(n\times n\) Fourier matrix, then \(F_{n}\mathrm{circ}\left(\mathbf{v}\right)F_{n}^{H}\) is a diagonal matrix whose diagonal is the Fourier transform of the vector \(\mathbf{v}\). The Fourier transform can also convert a block-circulant matrix into a block-diagonal matrix.
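This diagonalization is easy to check numerically: with the normalized Fourier matrix, \(F_{n}\,\mathrm{circ}(\mathbf{v})\,F_{n}^{H}\) should be diagonal with \(\mathrm{fft}(\mathbf{v})\) on the diagonal. The snippet below is our own check; it builds \(F_{n}\) as fft(eye(n))/sqrt(n) so that no extra toolbox is needed.

```
v = [1; 2; 3; 4];
n = numel(v);
C = zeros(n);
for k = 1:n
    C(:, k) = circshift(v, k - 1);            % k-th column of circ(v)
end
Fn = fft(eye(n)) / sqrt(n);                   % normalized Fourier matrix
D  = Fn * C * Fn';                            % numerically diagonal
offdiag_err = norm(D - diag(fft(v)), 'fro')   % ~ 0 up to round-off
```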
In [15], the **fold** and **unfold** operators are defined. Suppose \(\mathcal{A}\in\mathbb{C}^{n_{1}\times n_{2}\times n_{3}}\) is a third-order tensor, and its frontal slices are denoted by \(\mathcal{A}\left(:,:,1\right)\), \(\mathcal{A}\left(:,:,2\right)\),..., \(\mathcal{A}\left(:,:,n_{3}\right)\). Then **unfold** is defined as
\[\mathrm{unfold}\left(\mathcal{A}\right)=\begin{bmatrix}\mathcal{A}\left(:,:,1 \right)\\ \mathcal{A}\left(:,:,2\right)\\ \vdots\\ \mathcal{A}\left(:,:,n_{3}\right)\end{bmatrix}\in\mathbb{C}^{n_{1}n_{3}\times n _{2}}.\]
We can easily see that \(\mathrm{unfold}\left(\mathcal{A}\right)\) is an \(n_{1}n_{3}\times n _{2}\) matrix; its schematic is shown in Figure 3. The operator **fold** is the inverse of **unfold**, defined as
\[\mathrm{fold}\left(\mathrm{unfold}(\mathcal{A})\right)=\mathrm{fold}\left( \begin{bmatrix}\mathcal{A}\left(:,:,1\right)\\ \mathcal{A}\left(:,:,2\right)\\ \vdots\\ \mathcal{A}\left(:,:,n_{3}\right)\end{bmatrix}\right)=\mathcal{A}.\]
More specifically, we give an example of a \(2\times 2\times 2\) tensor in Figure 4.
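In MATLAB, the **unfold** and **fold** operators can be realized with reshape and permute; the two helper functions below are our own sketch (to be placed in separate files or used as local functions).

```
function M = unfold(A)
% Stack the frontal slices of an n1 x n2 x n3 tensor into an (n1*n3) x n2 matrix.
[n1, n2, n3] = size(A);
M = reshape(permute(A, [1 3 2]), n1*n3, n2);
end

function A = fold(M, n1, n2, n3)
% Inverse of unfold: rebuild the n1 x n2 x n3 tensor from its unfolded form.
A = permute(reshape(M, n1, n3, n2), [1 3 2]);
end
```

With these helpers, `fold(unfold(A), size(A,1), size(A,2), size(A,3))` returns \(\mathcal{A}\) again.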
By using the unfold operator, we can create an \(n_{1}n_{3}\times n_{2}n_{3}\) block-circulant matrix denoted by \(\mathrm{circ}\left(\mathrm{unfold}\left(\mathcal{A}\right)\right)\) with the frontal slices of \(\mathcal{A}\). It is given by
\[\mathrm{circ}\left(\mathrm{unfold}\left(\mathcal{A}\right)\right):=\begin{bmatrix} \mathcal{A}\left(:,:,1\right)&\mathcal{A}\left(:,:,n_{3}\right)&\cdots& \mathcal{A}\left(:,:,2\right)\\ \mathcal{A}\left(:,:,2\right)&\mathcal{A}\left(:,:,1\right)&\cdots&\mathcal{A} \left(:,:,3\right)\\ \vdots&\vdots&\ddots&\vdots\\ \mathcal{A}\left(:,:,n_{3}\right)&\mathcal{A}\left(:,:,n_{3}-1\right)&\cdots& \mathcal{A}\left(:,:,1\right)\end{bmatrix}.\]
Also, we denote the inverse operator of \(\mathbf{circ}\) as \(\mathbf{circ}^{-1}\) which works on block-circulant matrices. For example, we have
\[\mathrm{circ}^{-1}(\mathrm{circ}\left(\mathrm{unfold}\left(\mathcal{A}\right) \right))=\begin{bmatrix}\mathcal{A}\left(:,:,1\right)\\ \mathcal{A}\left(:,:,2\right)\\ \vdots\\ \mathcal{A}\left(:,:,n_{3}\right)\end{bmatrix}.\]
Kilmer, Martin, and Perrone [15] presented a new tensor multiplication called the t-product. The t-product uses the **fold**, **unfold**, and \(\mathbf{circ}\) operators and requires the two third-order tensors to have matching dimensions. This tensor multiplication is described in the following.
**Definition 2.4**.: _(T-product [14, 15]) Let \(\mathcal{A}\) be an \(n_{1}\times n_{2}\times n_{3}\) third-order tensor and \(\mathcal{B}\) be an \(n_{2}\times l\times n_{3}\) third-order tensor. Then the t-product \(\mathcal{A}*\mathcal{B}\) is an \(n_{1}\times l\times n_{3}\) tensor which can be represented as_
\[\mathcal{A}*\mathcal{B}=\mathrm{fold}\left[\mathrm{circ}\left(\mathrm{unfold} \left(\mathcal{A}\right)\right)\cdot\mathrm{unfold}\left(\mathcal{B}\right) \right],\]
where \(*\) denotes the new tensor multiplication of two third-order tensors called t-product, and \(\cdot\) denotes the general matrix product.
**Example 2.2**.: Suppose \(\mathcal{A}\) is an \(n_{1}\times n_{2}\times 3\) third-order tensor and \(\mathcal{B}\) is an \(n_{2}\times l\times 3\) third-order tensor, then
\[\mathcal{A}*\mathcal{B} =\mathrm{fold}\left[\mathrm{circ}\left(\mathrm{unfold}\left( \mathcal{A}\right)\right)\cdot\mathrm{unfold}\left(\mathcal{B}\right)\right]\] \[=\mathrm{fold}\left(\begin{bmatrix}\mathcal{A}(:,:,1)&\mathcal{ A}(:,:,3)&\mathcal{A}(:,:,2)\\ \mathcal{A}(:,:,2)&\mathcal{A}(:,:,1)&\mathcal{A}(:,:,3)\\ \mathcal{A}(:,:,3)&\mathcal{A}(:,:,2)&\mathcal{A}(:,:,1)\end{bmatrix}\cdot \begin{bmatrix}\mathcal{B}(:,:,1)\\ \mathcal{B}(:,:,2)\\ \mathcal{B}(:,:,3)\end{bmatrix}\right)\]
is an \(n_{1}\times l\times 3\) tensor.
Using the knowledge of Fourier transform, the block-circulant matrix can be transformed into
a block-diagonal matrix. Suppose that \(F_{n_{3}}\) is the \(n_{3}\times n_{3}\) Fourier matrix. Then
\[(F_{n_{3}}\otimes I_{n_{1}})\cdot\operatorname{circ}\left(\operatorname{unfold} \left(\mathcal{A}\right)\right)\cdot\left(F_{n_{3}}^{H}\otimes I_{n_{2}}\right)= \begin{bmatrix}\bar{A}_{1}&&&&\\ &\bar{A}_{2}&&\\ &&\ddots&\\ &&&\bar{A}_{n_{3}}\end{bmatrix}.\]
Let
\[\bar{A}:=\begin{bmatrix}\bar{A}_{1}&&&\\ &\bar{A}_{2}&&\\ &&\ddots&\\ &&&\bar{A}_{n_{3}}\end{bmatrix}.\]
Then \(\bar{A}\) is a complex \(n_{1}n_{3}\times n_{2}n_{3}\) block-diagonal matrix. Let \(\bar{A}_{i}\) be the \(i\)-th frontal slice of \(\hat{\mathcal{A}}\), where \(\hat{\mathcal{A}}\in\mathbb{C}^{n_{1}\times n_{2}\times n_{3}}\) is the result of discrete Fourier transform (DFT) on \(\mathcal{A}\) along the third dimension. The MATLAB command for computing \(\hat{\mathcal{A}}\) is
\[\hat{\mathcal{A}}=\operatorname{fft}[\mathcal{A},[],3]. \tag{2.1}\]
The t-product of two tensors can be understood as matrix multiplication in the Fourier domain [21]. The t-product of \(\mathcal{A}\) and \(\mathcal{B}\) can be obtained by multiplying each pair of frontal slices of \(\hat{\mathcal{A}}\) and \(\hat{\mathcal{B}}\) and computing the inverse Fourier transform along the third dimension, where \(\hat{\mathcal{B}}\) is the result of the DFT on \(\mathcal{B}\) along the third dimension; the MATLAB command is \(\hat{\mathcal{B}}\)=fft[\(\mathcal{B}\), \([]\), \(3\)].
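This Fourier-domain description leads directly to a compact implementation of the t-product. The function below is our own sketch (the name `tprod` and the variable names are ours), using MATLAB's unnormalized fft/ifft pair.

```
function C = tprod(A, B)
% t-product of A (n1 x n2 x n3) and B (n2 x l x n3), computed in the Fourier domain.
n3 = size(A, 3);
Ah = fft(A, [], 3);
Bh = fft(B, [], 3);
Ch = complex(zeros(size(A, 1), size(B, 2), n3));
for k = 1:n3
    Ch(:,:,k) = Ah(:,:,k) * Bh(:,:,k);   % slice-wise ordinary matrix product
end
C = ifft(Ch, [], 3);                     % real up to round-off when A and B are real
end
```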
Next, we will introduce some basic knowledge of tensors, which can be found in [13, 21].
**Definition 2.5**.: _(F-diagonal Tensor) Suppose \(\mathcal{A}\in\mathbb{C}^{n_{1}\times n_{2}\times n_{3}}\) is a third-order tensor, then \(\mathcal{A}\) is the called f-diagonal tensor if its each frontal slice is a diagonal matrix._
**Definition 2.6**.: _(Identity Tensor) The \(n\times n\times k\) tensor \(\mathcal{I}_{nnk}\) is called identity tensor if its first frontal slice is an \(n\times n\) identity matrix and other frontal slices are all zeros. If k=1, then the \(n\times n\times 1\) tensor \(\mathcal{I}_{nn1}\) is a special identity tensor with the frontal face being the \(n\times n\) identity matrix, and we use \(\mathcal{I}_{n}\) to denote it._
**Definition 2.7**.: _(Tensor Transpose) If \(\mathcal{A}\in\mathbb{C}^{n_{1}\times n_{2}\times n_{3}}\) is a third-order tensor, then we use \(\mathcal{A}^{T}\) to denote the transpose of tensor \(\mathcal{A}\), which is an \(n_{2}\times n_{1}\times n_{3}\) tensor obtained by transposing each of the frontal slice and then reversing the order of transposed faces \(2\) through \(n_{3}\)._
**Definition 2.8**.: _(Tensor Conjugate Transpose) If \(\mathcal{A}\) is a complex \(n_{1}\times n_{2}\times n_{3}\) tensor, then \(\mathcal{A}^{H}\) is used to denote the conjugate transpose of tensor \(\mathcal{A}\), which is an \(n_{2}\times n_{1}\times n_{3}\) tensor obtained by conjugate transposing each of the frontal slice and then reversing the order of transposed faces \(2\) through \(n_{3}\)._
**Example 2.3**.: Suppose \(\mathcal{A}\) is a complex \(n_{1}\times n_{2}\times 4\) tensor, and \(\mathcal{A}_{1}\), \(\mathcal{A}_{2}\), \(\mathcal{A}_{3}\), \(\mathcal{A}_{4}\) denote its four frontal slices. Then
\[\mathcal{A}^{H}=\operatorname{fold}\left(\begin{bmatrix}\mathcal{A}_{1}^{H} \\ \mathcal{A}_{4}^{H}\\ \mathcal{A}_{3}^{H}\\ \mathcal{A}_{2}^{H}\end{bmatrix}\right).\]
**Definition 2.9**.: _(Unitary Tensor) An \(n\times n\times k\) complex tensor \(\mathcal{U}\) is unitary if_
\[\mathcal{U}*\mathcal{U}^{H}=\mathcal{U}^{H}*\mathcal{U}=\mathcal{I}_{nnk}.\]
Based on these basic definitions, the T-SVD is presented as a third-order generalization of the SVD. This generalization writes a third-order tensor as the product of three other third-order tensors.
**Theorem 2.1**.: _(T-SVD [21]) Suppose \(\mathcal{A}\in\mathbb{C}^{n_{1}\times n_{2}\times n_{3}}\) is a third-order tensor, then \(\mathcal{A}\) can be factored as_
\[\mathcal{A}=\mathcal{U}*\mathcal{S}*\mathcal{V}^{H},\]
_where \(\mathcal{U}\), \(\mathcal{V}\) are \(n_{1}\times n_{1}\times n_{3}\) and \(n_{2}\times n_{2}\times n_{3}\) unitary tensors, respectively, and \(\mathcal{S}\) is an \(n_{1}\times n_{2}\times n_{3}\) f-diagonal tensor._
**Note \(\mathbf{1}:\)** Suppose \(\hat{\mathcal{A}}\) is given by (2.1), then T-SVD of \(\mathcal{A}\in\mathbb{C}^{n_{1}\times n_{2}\times n_{3}}\) can be obtained by using SVD on each frontal slice of \(\hat{\mathcal{A}}\).
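Following Note 1, a minimal T-SVD routine applies the ordinary matrix SVD to every frontal slice in the Fourier domain; the function below is our own sketch (the name `tsvd` is ours), not the reference implementation of [14, 15].

```
function [U, S, V] = tsvd(A)
% T-SVD (Theorem 2.1): A = U * S * V^H in the t-product sense.
[n1, n2, n3] = size(A);
Ah = fft(A, [], 3);
Uh = complex(zeros(n1, n1, n3));
Sh = complex(zeros(n1, n2, n3));
Vh = complex(zeros(n2, n2, n3));
for k = 1:n3
    [Uh(:,:,k), Sh(:,:,k), Vh(:,:,k)] = svd(Ah(:,:,k));   % full SVD of each slice
end
U = ifft(Uh, [], 3);
S = ifft(Sh, [], 3);
V = ifft(Vh, [], 3);
end
```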
## 3 A New Semi-tensor Product Definition of Tensors
In Section 2, we introduced some properties and related knowledge of tensors. Next, we present a new definition of the semi-tensor product of third-order tensors and discuss the desirable theoretical properties of this new multiplication.
We present the definition of Frobenius norm of tensors.
**Definition 3.1**.: _(Frobenius Norm of Tensors [16]) Suppose \(\mathcal{A}=(a_{ijk})\) is an \(n_{1}\times n_{2}\times n_{3}\) third-order tensor, then the Frobenius norm of \(\mathcal{A}\) is_
\[\|\mathcal{A}\|_{F}=\sqrt{\sum_{i=1}^{n_{1}}\sum_{j=1}^{n_{2}}\sum_{k=1}^{n_{3 }}|a_{ijk}|^{2}}.\]
Before we define the semi-tensor product of tensors, we need the Kronecker product of tensors, which is similar to that of matrices.
**Definition 3.2**.: _(Kronecker Product of Tensors [1, 18]) Suppose \(\mathcal{A}=\left(a_{i_{1}i_{2}\cdots i_{p}}\right)\) is an \(n_{1}\times n_{2}\times\cdots\times n_{p}\)\(p\)-th order tensor, and \(\mathcal{B}=\left(b_{j_{1}j_{2}\cdots j_{p}}\right)\) is an \(m_{1}\times m_{2}\times\cdots\times m_{p}\)\(p\)-th order tensor. Then the Kronecker product of \(\mathcal{A}\) and \(\mathcal{B}\) is defined by_
\[\mathcal{A}\otimes\mathcal{B}=\left[a_{i_{1}i_{2}\cdots i_{p}}\right]\mathcal{ B},\]
_which is an \(n_{1}m_{1}\times n_{2}m_{2}\times\cdots\times n_{p}m_{p}\)\(p\)-th order tensor, and whose each entry of it can be represented as_
\[\mathcal{A}\otimes\mathcal{B}_{[i_{1}j_{1}][i_{2}j_{2}]\cdots[i_{p}j_{p}]}=a_{ i_{1}i_{2}\cdots i_{p}}b_{j_{1}j_{2}\cdots j_{p}}.\]
We briefly list some properties of the tensor Kronecker product.
**Lemma 3.1**.: _([1]) Suppose \(\mathcal{A}\), \(\mathcal{B}\), \(\mathcal{C}\) are \(p\)-th order tensors and \(\alpha\) is a scalar, then \((i)\)\(\mathcal{A}\otimes(\mathcal{B}\pm\mathcal{C})=(\mathcal{A}\otimes\mathcal{B}) \pm(\mathcal{A}\otimes\mathcal{C})\), \((\mathcal{B}\pm\mathcal{C})\otimes\mathcal{A}=\mathcal{B}\otimes\mathcal{A} \pm\mathcal{C}\otimes A;\)\((ii)\)\((\mathcal{A}\otimes\mathcal{B})\otimes\mathcal{C}=\mathcal{A}\otimes(\mathcal{B} \otimes\mathcal{C});\)\((iii)\)\((\alpha\mathcal{A})\otimes\mathcal{B}=\mathcal{A}\otimes(\alpha\mathcal{B})=\alpha\left( \mathcal{A}\otimes\mathcal{B}\right).\)_
After giving the relevant knowledge above, we give a new definition of the semi-tensor product of tensors, in which the third dimensions of the two tensors must agree while the remaining dimensions only need to be integer multiples of each other. We mainly give the definition for third-order tensors in this paper.
**Definition 3.3**.: _(Semi-tensor Product of Tensor) Let \(\mathcal{I}_{k}\) be the \(k\times k\times 1\) identity tensor defined in Definition 2.6, \(\mathcal{A}=(a_{i_{1}i_{2}i_{3}})\) be an \(m\times n\times t\) third-order tensor, and \(\mathcal{B}=(b_{j_{1}j_{2}j_{3}})\) be a \(p\times q\times t\) third-order tensor. We have \((i)\) If \(n=kp,k\in\mathbb{Z}^{+}\), then_
\[\mathcal{A}\ltimes\mathcal{B} =\mathcal{A}*(\mathcal{B}\otimes\mathcal{I}_{k})\] \[=\operatorname{fold}\left[\operatorname{circ}\left(\operatorname {unfold}(\mathcal{A})\right)\cdot\operatorname{unfold}\left(\mathcal{B} \otimes\mathcal{I}_{k}\right)\right]\]
_is an \(m\times kq\times t\) third-order tensor; \((ii)\) If \(n=\frac{1}{k}p,k\in\mathbb{Z}^{+}\), then_
\[\mathcal{A}\ltimes\mathcal{B} =(\mathcal{A}\otimes\mathcal{I}_{k})*\mathcal{B}\] \[=\operatorname{fold}\left[\operatorname{circ}\left(\operatorname {unfold}\left(\mathcal{A}\otimes\mathcal{I}_{k}\right)\right)\cdot \operatorname{unfold}\left(\mathcal{B}\right)\right]\]
_is a \(km\times q\times t\) third-order tensor._
According to Definition 2.4, we can see that if \(n=p\), then the semi-tensor product of tensors is actually the t-product. For this definition of tensor semi-tensor product, we can get another equivalent representation as follows.
**Lemma 3.2**.: _Suppose \(\mathcal{A}=(a_{i_{1}i_{2}i_{3}})\) is an \(m\times n\times t\) third-order tensor and \(\mathcal{B}=(b_{j_{1}j_{2}j_{3}})\) is a \(p\times q\times t\) third-order tensor. \((i)\) If \(n=kp,k\in\mathbb{Z}^{+}\), then_
\[\mathcal{A}\ltimes\mathcal{B} =\operatorname{fold}\left[\operatorname{circ}\left(\operatorname {unfold}(\mathcal{A})\right)\cdot\operatorname{unfold}\left(\mathcal{B} \otimes\mathcal{I}_{k}\right)\right]\] \[=\operatorname{fold}\left[\operatorname{circ}\left(\operatorname {unfold}(\mathcal{A})\right)\ltimes\operatorname{unfold}\left(\mathcal{B} \right)\right]\]
_is an \(m\times kq\times t\) third-order tensor; \((ii)\) If \(n=\frac{1}{k}p,k\in\mathbb{Z}^{+}\), then_
\[\mathcal{A}\ltimes\mathcal{B} =\operatorname{fold}\left[\operatorname{circ}\left(\operatorname {unfold}\left(\mathcal{A}\otimes\mathcal{I}_{k}\right)\right)\cdot \operatorname{unfold}\left(\mathcal{B}\right)\right]\] \[=\operatorname{fold}\left[\operatorname{circ}\left(\operatorname {unfold}\left(\mathcal{A}\right)\right)\ltimes\operatorname{unfold}\left( \mathcal{B}\right)\right]\]
_is a \(km\times q\times t\) third-order tensor._
Proof.: From Lemma 2.2, Definitions 3.2 and 3.3, and definitions of fold and unfold operators, we can prove that
\((i)\)
\[\mathcal{A}\ltimes\mathcal{B} =\operatorname{fold}\left[\operatorname{circ}\left(\operatorname {unfold}(\mathcal{A})\right)\cdot\operatorname{unfold}\left(\mathcal{B} \otimes\mathcal{I}_{k}\right)\right]\] \[=\operatorname{fold}\left[\operatorname{circ}\left(\operatorname {unfold}(\mathcal{A})\right)\cdot\left(\operatorname{unfold}\left(\mathcal{B} \right)\otimes I_{k}\right)\right]\] \[=\operatorname{fold}\left[\operatorname{circ}\left(\operatorname {unfold}(\mathcal{A})\right)\ltimes\operatorname{unfold}\left(\mathcal{B} \right)\right],\]
and
\[\mathcal{A}\ltimes\mathcal{B} =\operatorname{fold}\left[\operatorname{circ}\left(\operatorname {unfold}\left(\mathcal{A}\otimes\mathcal{I}_{k}\right)\right)\cdot \operatorname{unfold}\left(\mathcal{B}\right)\right]\] \[=\operatorname{fold}\left[\operatorname{circ}\left((\operatorname {unfold}\left(\mathcal{A}\right)\right)\otimes I_{k}\right)\cdot \operatorname{unfold}\left(\mathcal{B}\right)\right]\] \[=\operatorname{fold}\left[\operatorname{circ}\left(\operatorname {unfold}\left(\mathcal{A}\right)\right)\ltimes\operatorname{unfold}\left( \mathcal{B}\right)\right],\]
where \(I_{k}\) is the \(k\times k\) identity matrix and \(\mathcal{I}_{k}\) is the \(k\times k\times 1\) identity tensor.
Next, we give a basic property of semi-tensor product of tensors.
**Theorem 3.1**.: _Let \(\mathcal{A}\), \(\mathcal{B}\), and \(\mathcal{C}\) be three third-order tensors of proper dimensions such that the semi-tensor products are well-defined. Then we have_
\[(\mathcal{A}\ltimes\mathcal{B})\ltimes\mathcal{C}=\mathcal{A}\ltimes(\mathcal{ B}\ltimes\mathcal{C})\,.\]
Proof.: From Lemma 3.2, we have
\[\mathcal{A}\ltimes\mathcal{B} =\operatorname{fold}\left[\operatorname{circ}\left(\operatorname{ unfold}(\mathcal{A})\right)\ltimes\operatorname{unfold}\left(\mathcal{B} \right)\right],\] \[\mathcal{B}\ltimes\mathcal{C} =\operatorname{fold}\left[\operatorname{circ}\left(\operatorname{ unfold}(\mathcal{B})\right)\ltimes\operatorname{unfold}\left(\mathcal{C} \right)\right].\]
Then,
\[(\mathcal{A}\ltimes\mathcal{B})\ltimes\mathcal{C}\] \[= \operatorname{fold}[\operatorname{circ}[\operatorname{unfold}[ \operatorname{fold}(\operatorname{circ}(\operatorname{unfold}(\mathcal{A})) \ltimes\operatorname{unfold}(\mathcal{B}))]]\ltimes\operatorname{unfold}( \mathcal{C})]\] \[= \operatorname{fold}[\operatorname{circ}[\operatorname{circ}( \operatorname{unfold}(\mathcal{A}))\ltimes\operatorname{unfold}(\mathcal{B})] \ltimes\operatorname{unfold}(\mathcal{C})]\]
and
\[\mathcal{A}\ltimes(\mathcal{B}\ltimes\mathcal{C})\] \[= \operatorname{fold}[\operatorname{circ}(\operatorname{unfold}( \mathcal{A}))\ltimes\operatorname{unfold}[\operatorname{fold}(\operatorname{ circ}(\operatorname{unfold}(\mathcal{B}))\ltimes\operatorname{unfold}( \mathcal{C}))]]\] \[= \operatorname{fold}[\operatorname{circ}(\operatorname{unfold}( \mathcal{A}))\ltimes(\operatorname{circ}(\operatorname{unfold}(\mathcal{B})) \ltimes\operatorname{unfold}(\mathcal{C}))].\]
Thus, we only need to prove
\[\operatorname{circ}[\operatorname{circ}(\operatorname{unfold}( \mathcal{A}))\ltimes\operatorname{unfold}(\mathcal{B})]\ltimes\operatorname{ unfold}(\mathcal{C}) \tag{3.1}\] \[=\operatorname{circ}(\operatorname{unfold}(\mathcal{A}))\ltimes( \operatorname{circ}(\operatorname{unfold}(\mathcal{B}))\ltimes\operatorname{ unfold}(\mathcal{C})).\]
The left side of (3.1) is equivalent to
\[(\operatorname{circ}(\operatorname{unfold}(\mathcal{A}))\ltimes\operatorname{ circ}(\operatorname{unfold}(\mathcal{B})))\ltimes\operatorname{unfold}(\mathcal{C}). \tag{3.2}\]
By applying Lemma 2.3, we can see that (3.2) can be written as
\[(\operatorname{circ}(\operatorname{unfold}(\mathcal{A}))\ltimes \operatorname{circ}(\operatorname{unfold}(\mathcal{B})))\ltimes\operatorname{ unfold}(\mathcal{C})\] \[=\operatorname{circ}(\operatorname{unfold}(\mathcal{A}))\ltimes( \operatorname{circ}(\operatorname{unfold}(\mathcal{B}))\ltimes\operatorname{ unfold}(\mathcal{C})).\]
Hence the theorem is proved.
From Lemma 3.2, we know that the semi-tensor product between two tensors can be understood as the matrix multiplication in the Fourier domain [21].
Suppose \(\mathcal{A}\) is an \(m\times n\times t\) third-order tensor and \(\mathcal{B}\) is a \(p\times q\times t\) third-order tensor. If \(n=kp,k\in\mathbb{Z}^{+}\), then \(\mathcal{C}=\mathcal{A}\ltimes\mathcal{B}\) is equivalent to
\[\operatorname{unfold}\left(\mathcal{C}\right)=\operatorname{circ}\left( \operatorname{unfold}\left(\mathcal{A}\right)\right)\ltimes\operatorname{ unfold}\left(\mathcal{B}\right),\]
and we have
\[\operatorname{unfold}(\hat{\mathcal{C}})= F_{t}\ltimes\operatorname{unfold}\left(\mathcal{C}\right)\] \[= F_{t}\ltimes\operatorname{circ}\left(\operatorname{unfold}\left( \mathcal{A}\right)\right)\ltimes\operatorname{unfold}\left(\mathcal{B}\right)\] \[= F_{t}\ltimes\operatorname{circ}\left(\operatorname{unfold} \left(\mathcal{A}\right)\right)\ltimes F_{t}^{H}\ltimes F_{t}\ltimes \operatorname{unfold}\left(\mathcal{B}\right)\] \[= \bar{A}\ltimes\operatorname{unfold}(\hat{\mathcal{B}}).\]
Therefore, the semi-tensor product of \(\mathcal{A}\) and \(\mathcal{B}\) can be obtained by multiplying each pair of frontal slices of \(\hat{\mathcal{A}}\) and \(\hat{\mathcal{B}}\) by the semi-tensor product and computing the inverse Fourier transform along the third dimension, where \(\hat{\mathcal{B}}\) and \(\hat{\mathcal{C}}\) are the results of the DFT on \(\mathcal{B}\) and \(\mathcal{C}\) along the third dimension; the MATLAB commands are \(\hat{\mathcal{B}}\)=fft[\(\mathcal{B}\), [ ], 3] and \(\hat{\mathcal{C}}\)=fft[\(\mathcal{C}\), [ ], 3].
We note that Definitions 2.5, 2.6, 2.7, 2.8, and 2.9 in Section 2 still hold under this new multiplication.
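Combining the Fourier-domain viewpoint with the matrix helper `lstp` sketched in Section 2, the semi-tensor product of Definition 3.3 can be implemented slice-wise as follows (our own sketch; the name `tstp` is ours).

```
function C = tstp(A, B)
% Semi-tensor product of third-order tensors (Definition 3.3),
% computed slice-wise in the Fourier domain with the matrix function lstp.
t  = size(A, 3);
Ah = fft(A, [], 3);
Bh = fft(B, [], 3);
C1 = lstp(Ah(:,:,1), Bh(:,:,1));            % fixes the slice size of the result
Ch = complex(zeros(size(C1, 1), size(C1, 2), t));
Ch(:,:,1) = C1;
for k = 2:t
    Ch(:,:,k) = lstp(Ah(:,:,k), Bh(:,:,k));
end
C = ifft(Ch, [], 3);                        % real up to round-off for real inputs
end
```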
## 4 Tensor Decomposition Based on Semi-tensor Product
In this section, we will give a new approximate tensor decomposition model based on the semi-tensor product.
### Singular Value Decomposition of Matrices based on Semi-tensor Product
We first introduce a strategy of finding \(B\) and \(C\) so that \(\|A-B\otimes C\|_{F}\) is minimized [20]. Let
\[A=\begin{bmatrix}A_{1,1}&\cdots&A_{1,n_{1}}\\ A_{2,1}&\cdots&A_{2,n_{1}}\\ \vdots&\ddots&\vdots\\ A_{m_{1},1}&\cdots&A_{m_{1},n_{1}}\end{bmatrix}\in\mathbb{C}^{m_{1}m_{2}\times n _{1}n_{2}},\]
where each \(A_{i,j}(i=1,2,\cdots,m_{1},j=1,2,\cdots,n_{1})\) is an \(m_{2}\times n_{2}\) matrix. Let \(\widetilde{A}\in\mathbb{C}^{m_{1}n_{1}\times m_{2}n_{2}}\) be defined by
\[\widetilde{A}:=\begin{bmatrix}\tilde{A}_{1}&\tilde{A}_{2}&\cdots&\tilde{A}_{ n_{1}}\end{bmatrix}^{T} \tag{4.1}\]
with
\[\tilde{A}_{j}=\begin{bmatrix}vec(A_{1,j})^{T}\\ vec(A_{2,j})^{T}\\ \vdots\\ vec(A_{m_{1},j})^{T}\end{bmatrix}. \tag{4.2}\]
**Lemma 4.1**.: _([20]) Suppose \(A\in\mathbb{C}^{m\times n}\) with \(m=m_{1}m_{2}\) and \(n=n_{1}n_{2}\). Then the matrices \(B\in\mathbb{C}^{m_{1}\times n_{1}}\) and \(C\in\mathbb{C}^{m_{2}\times n_{2}}\) defined by \(vec(B)=\sqrt{\widetilde{\sigma}_{1}}U\left(:,1\right)\) and \(vec(C)=\sqrt{\widetilde{\sigma}_{1}}V\left(:,1\right)\) minimize_
\[\|A-B\otimes C\|_{F},\]
_where \(\widetilde{\sigma}_{1}\) is the largest singular value of \(\widetilde{A}\) which is defined by (4.1), and \(U\left(:,1\right)\), \(V\left(:,1\right)\) are its corresponding left and right singular vectors, respectively._
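Lemma 4.1 is constructive: one rearranges \(A\) into \(\widetilde{A}\) as in (4.1)–(4.2) and takes its dominant singular triple. The function below is our own sketch (the name `nearest_kron` is ours); for complex \(A\) we conjugate \(V(:,1)\) so that \(\mathrm{vec}(B)\,\mathrm{vec}(C)^{T}\) is the best rank-one approximation of \(\widetilde{A}\), which for real \(A\) coincides with the formula of Lemma 4.1.

```
function [B, C] = nearest_kron(A, m2, n2)
% Best Kronecker approximation A ~ B kron C with B (m1 x n1) and C (m2 x n2).
[m, n] = size(A);
m1 = m / m2;  n1 = n / n2;
R = zeros(m1*n1, m2*n2);                    % the rearranged matrix of (4.1)-(4.2)
row = 0;
for j = 1:n1
    for i = 1:m1
        row = row + 1;
        blk = A((i-1)*m2+1 : i*m2, (j-1)*n2+1 : j*n2);
        R(row, :) = reshape(blk, 1, m2*n2); % vec(A_{i,j})^T
    end
end
[U, S, V] = svds(R, 1);                     % dominant singular triple of R
sig = S(1, 1);
B = reshape(sqrt(sig) * U, m1, n1);         % vec(B) = sqrt(sigma_1) * U(:,1)
C = reshape(sqrt(sig) * conj(V), m2, n2);   % vec(C); conj(V) equals V for real A
end
```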
By using Lemma 4.1, we can find a SVD-like approximate decomposition based on semi-tensor product of matrix called STP-SVD.
**Theorem 4.1**.: _Suppose \(A\in\mathbb{C}^{m\times n}\) with \(m=m_{1}m_{2}\) and \(n=n_{1}n_{2}\). Then there exist unitary matrices \(U\in\mathbb{C}^{m_{1}\times m_{1}}\) and \(V\in\mathbb{C}^{n_{1}\times n_{1}}\) such that_
\[A=U\ltimes\Sigma\ltimes V^{H}+E_{1}, \tag{4.3}\]
_where \(\Sigma=\mathrm{blkdiag}(S_{1},S_{2},\ldots,S_{p})\in\mathbb{C}^{m\times n}\) with each block \(S_{i}\in\mathbb{C}^{m_{2}\times n_{2}}\), \(i=1,2,\ldots,p\), \(p=\min\left\{m_{1},n_{1}\right\}\), and \(\|S_{1}\|_{F}\geqslant\|S_{2}\|_{F}\geqslant\cdots\geqslant\|S_{p}\|_{F}\). The matrix \(E_{1}\) is the approximation error._
Proof.: From Lemma 4.1, we know that for \(A\in\mathbb{C}^{m\times n}\) with \(m=m_{1}m_{2}\) and \(n=n_{1}n_{2}\), we can find \(B\in\mathbb{C}^{m_{1}\times n_{1}}\) and \(C\in\mathbb{C}^{m_{2}\times n_{2}}\) minimize \(\|A-B\otimes C\|_{F}\). In other words, \(A\) can be represented by \(B\) and \(C\) as
\[A=B\otimes C+E_{1}. \tag{4.4}\]
Next, we can obtain \(B=U\Sigma_{B}V^{H}\) by computing the SVD of \(B\), where \(U\in\mathbb{C}^{m_{1}\times m_{1}}\), \(V\in\mathbb{C}^{n_{1}\times n_{1}}\), and \(\Sigma_{B}\in\mathbb{C}^{m_{1}\times n_{1}}\). Then, (4.4) can be rewritten as
\[A=(U\Sigma_{B}V^{H})\otimes C+E_{1}, \tag{4.5}\]
where \(\Sigma_{B}\) is a diagonal matrix, and the diagonal elements of \(\Sigma_{B}\) are singular values of \(B\). We use \(\sigma_{1},\sigma_{2},\ldots,\sigma_{p}\), \(p=min\left\{m_{1},n_{1}\right\}\) to denote singular values of \(B\), and \(\sigma_{1}\geqslant\sigma_{2}\geqslant\cdots\geqslant\sigma_{p}\), then \(\Sigma_{B}=diag(\sigma_{1},\sigma_{2},\ldots,\sigma_{p})\).
By applying Lemmas 2.1 and 2.2, we can write (4.5) as
\[\begin{split} A&=\left(U\Sigma_{B}V^{H}\right) \otimes(I_{m_{2}}CI_{n_{2}})+E_{1}\\ &=(U\otimes I_{m_{2}})\left(\Sigma_{B}\otimes C\right)\left(V^{H }\otimes I_{n_{2}}\right)+E_{1}\\ &=U\ltimes\Sigma\ltimes V^{H}+E_{1},\end{split}\]
where \(\Sigma=\Sigma_{B}\otimes C\) is an \(m_{1}m_{2}\times n_{1}n_{2}\) block-diagonal matrix. And the diagonal elements of \(\Sigma\) are \(S_{1}=\sigma_{1}C,S_{2}=\sigma_{2}C,\ldots,S_{p}=\sigma_{p}C\).
Since each \(\sigma_{i}\) (\(i=1,2,\ldots,p\)) is a nonnegative number, using the properties of the norm we have \(\|\sigma_{i}C\|_{F}=\sigma_{i}\|C\|_{F}\), thus
\[\|\sigma_{1}C\|_{F}\geqslant\|\sigma_{2}C\|_{F}\geqslant\cdots\geqslant\| \sigma_{p}C\|_{F},\]
which is
\[\|S_{1}\|_{F}\geqslant\|S_{2}\|_{F}\geqslant\cdots\geqslant\|S_{p}\|_{F}.\]
Here \(E_{1}\) is the error matrix of the approximation, and \(\|E_{1}\|_{F}\) measures the approximation error; the proof is complete.
We can give a specific error analysis for the error matrix \(E_{1}\) in Theorem 4.1. Suppose \(\tilde{\sigma}_{1},\tilde{\sigma}_{2},\ldots,\tilde{\sigma}_{q}\), \(q=min\left\{m_{1}n_{1},m_{2}n_{2}\right\}\) are singular values of \(\widetilde{A}\in\mathbb{C}^{m_{1}n_{1}\times m_{2}n_{2}}\) in (4.1), then
\[\begin{split}\|E_{1}\|_{F}&=\sqrt{\tilde{\sigma}_{2} ^{2}+\tilde{\sigma}_{3}^{2}+\cdots+\tilde{\sigma}_{q}^{2}}\\ &=\sqrt{\sum_{i=2}^{q}\tilde{\sigma}_{i}^{2}}.\end{split} \tag{4.6}\]
We give the corresponding MATLAB pseudocode in the following.
```
Input:  \(A\in\mathbb{C}^{m\times n}\), \(m_{2},n_{2}\). (\(m=m_{1}m_{2},n=n_{1}n_{2}\))
Output: \(U,\Sigma,V\).
1:By using Lemma 4.1, compute \(B\in\mathbb{C}^{m_{1}\times n_{1}}\) and \(C\in\mathbb{C}^{m_{2}\times n_{2}}\), such that \(A\approx B\otimes C\);
2:Compute the SVD of \(B\), i.e. \([U,\Sigma_{B},V]\)=svd\((B)\), where \(U\in\mathbb{C}^{m_{1}\times m_{1}}\), \(V\in\mathbb{C}^{n_{1}\times n_{1}}\), and \(\Sigma_{B}\in\mathbb{C}^{m_{1}\times n_{1}}\);
3:\(\Sigma=\Sigma_{B}\otimes C\in\mathbb{C}^{m_{1}m_{2}\times n_{1}n_{2}}\).
```
**Algorithm 1** STP-SVD of Matrices
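With the helpers `lstp` and `nearest_kron` sketched earlier, Algorithm 1 can be run and checked in a few lines (our own example); the reconstruction error equals \(\|E_{1}\|_{F}=\|A-B\otimes C\|_{F}\) in (4.3).

```
A = rand(6, 8);                        % m = m1*m2 = 3*2, n = n1*n2 = 4*2
[B, C] = nearest_kron(A, 2, 2);        % Step 1: A ~ B kron C
[U, SB, V] = svd(B);                   % Step 2: SVD of the small factor B
Sigma = kron(SB, C);                   % Step 3: block-diagonal Sigma
A_hat = lstp(lstp(U, Sigma), V');      % U semi-tensor Sigma semi-tensor V^H = B kron C
err = norm(A - A_hat, 'fro')           % equals ||E_1||_F
```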
If we use the truncated SVD for \(B\) in Algorithm 1, then we can obtain a corresponding truncated STP-SVD algorithm for matrices. The MATLAB pseudocode is given as follows.
```
Input:  \(A\in\mathbb{C}^{m\times n}\), \(m_{2},n_{2}\), \(r\). \((m=m_{1}m_{2},n=n_{1}n_{2})\)
Output: \(U,\Sigma,V\).
1: Compute \(B\in\mathbb{C}^{m_{1}\times n_{1}}\) and \(C\in\mathbb{C}^{m_{2}\times n_{2}}\), such that \(A\approx B\otimes C\);
2: Compute the truncated SVD of \(B\), i.e. \([U,\Sigma_{B},V]\)=svds(\(B\), r), where \(U\in\mathbb{C}^{m_{1}\times r}\), \(V\in\mathbb{C}^{r\times n_{1}}\), and \(\Sigma_{B}\in\mathbb{C}^{r\times r}\);
3:\(\Sigma=\Sigma_{B}\otimes C\in\mathbb{C}^{rm_{2}\times rn_{2}}\).
```
**Algorithm 2** Truncated STP-SVD of Matrices
We write the truncated STP-SVD of matrices in a form similar to Theorem 4.1, i.e.
\[A=U\ltimes\Sigma\ltimes V^{H}+E. \tag{4.7}\]
From the proof of Theorem 4.1, we can give a new error upper bound for the truncated STP-SVD of the matrix. The error matrix \(E\) in (4.7) can be separated into two parts: the first part, denoted \(E_{1}\), is produced by the rank-one SVD approximation of \(\widetilde{A}\), and the other part, \(E_{2}\), comes from the truncated SVD of \(B\).
We know the error upper bound for \(E_{1}\) from (4.6). Next, we give an error upper bound for \(E_{2}\) according to the proof of Theorem 4.1. Suppose the SVD of \(B\in\mathbb{C}^{m_{1}\times n_{1}}\) is truncated at \(r\); then the error matrix is
\[E_{2}=U\ltimes\Sigma\ltimes V^{H}-U\ltimes\Sigma_{r}\ltimes V^{H},\]
where \(\Sigma_{r}\) is obtained by preserving the first \(r\) blocks of \(\Sigma\) and changing the rest to \(0\), which can also be represented by blkdiag\((S_{1},\cdots,S_{r},0,\cdots,0)\). We use \(\Sigma_{p}\) to denote \(\Sigma-\Sigma_{r}\), and \(\Sigma_{p}\) therefore can be represented by blkdiag\((0,\cdots,0,S_{r+1},\cdots,S_{p})\), \(p=\min\left\{m_{1},n_{1}\right\}\). Then we can obtain the upper bound for \(E_{2}\):
\[\begin{split} E_{2}&=\|U\ltimes\Sigma\ltimes V^{H}- U\ltimes\Sigma_{r}\ltimes V^{H}\|_{F}\\ &=\|U\ltimes\Sigma_{p}\ltimes V^{H}\|_{F}\\ &=\|\text{blkdiag}(0,\cdots,0,S_{r+1},\cdots,S_{p})\|_{F}\\ &=\sqrt{\sum_{j=r+1}^{p}\left\|\text{S}_{j}\right\|_{F}^{2}}. \end{split} \tag{4.8}\]
Since the error matrix \(E\) in (4.7) can be divided into the two parts \(E_{1}\) and \(E_{2}\), we can give the following upper bound for the approximation error \(E\) from (4.6) and (4.8):
\[\|E\|_{F}\leq\sqrt{\sum_{i=2}^{q}\tilde{\sigma}_{i}^{2}}+\sqrt{\sum_{j=r+1}^{ p}\left\|\text{S}_{j}\right\|_{F}^{2}}, \tag{4.9}\]
where \(\tilde{\sigma}_{1},\tilde{\sigma}_{2},\ldots,\tilde{\sigma}_{q}\), \(q=\min\left\{m_{1}n_{1},m_{2}n_{2}\right\}\), are the singular values of \(\widetilde{A}\in\mathbb{C}^{m_{1}n_{1}\times m_{2}n_{2}}\), as assumed before.
### A New Tensor Decomposition Strategy based on Semi-tensor Product
From Theorem 4.1, we learn that any matrix over the complex field admits a decomposition based on the semi-tensor product. Moreover, having defined the semi-tensor product of tensors, we find that a tensor can also be decomposed based on the semi-tensor product.
**Theorem 4.2**.: _Suppose \(\mathcal{A}\in\mathbb{C}^{m\times n\times l}\) with \(m=m_{1}m_{2}\) and \(n=n_{1}n_{2}\) is a third-order tensor, then it can be factorized as_
\[\mathcal{A}=\mathcal{U}\ltimes\mathcal{S}\ltimes\mathcal{V}^{H}+\mathcal{E}_{1}. \tag{4.10}\]
_where \(\mathcal{U}\), \(\mathcal{V}\) are \(m_{1}\times m_{1}\times l\) and \(n_{1}\times n_{1}\times l\) unitary tensors, respectively, each frontal slice of \(\mathcal{S}\) is a block-diagonal matrix, and \(\mathcal{E}_{1}\) is an error tensor._
Proof.: By using the Fourier transform, and supposing \(F_{l}\) is an \(l\times l\) Fourier matrix, we have
\[(F_{l}\otimes I_{m})\operatorname{circ}\left(\text{unfold}\left(\mathcal{A} \right)\right)\left(F_{l}^{H}\otimes I_{n}\right)=\begin{bmatrix}\bar{A}_{1} \\ &\bar{A}_{2}\\ &&\ddots\\ &&&\bar{A}_{l}\end{bmatrix}, \tag{4.11}\]
For each \(\bar{A}_{i}\) (\(i=1,2,\cdots,l\)), we have \(\bar{A}_{i}=\bar{U}_{i}\ltimes\bar{\Sigma}_{i}\ltimes\bar{V}_{i}^{H}+\bar{E}_{i}\) by Theorem 4.1, so the right side of (4.11) can be written as
\[\begin{bmatrix}\bar{A}_{1}\\ &\bar{A}_{2}\\ &&\ddots\\ &&&\bar{A}_{l}\end{bmatrix}\] \[=\begin{bmatrix}\bar{U}_{1}\\ &\bar{U}_{2}\\ &&\ddots\\ &&&\bar{U}_{l}\end{bmatrix}\ltimes\begin{bmatrix}\bar{\Sigma}_{1}\\ &\bar{\Sigma}_{2}\\ &&\ddots\\ &&&\bar{\Sigma}_{l}\end{bmatrix}\ltimes\begin{bmatrix}\bar{V}_{1}^{H}\\ &\bar{V}_{2}^{H}\\ &&\ddots\\ &&&\bar{V}_{l}^{H}\end{bmatrix}+\begin{bmatrix}\bar{E}_{1}\\ &\bar{E}_{2}\\ &&\ddots\\ &&&\bar{E}_{l}\end{bmatrix}.\]
Next, we use the semi-tensor product to multiply the left and right sides of the above formula by \(F_{l}^{H}\) and \(F_{l}\), respectively; then we have
\[F_{l}^{H}\ltimes\begin{bmatrix}\bar{A}_{1}&&\\ &\bar{A}_{2}&&\\ &&\ddots&\\ &&&\bar{A}_{l}\end{bmatrix}\ltimes F_{l}\] \[= F_{l}^{H}\ltimes\begin{bmatrix}\bar{U}_{1}&&\\ &\bar{U}_{2}&&\\ &&\ddots&\\ &&&\bar{U}_{l}\end{bmatrix}\ltimes\begin{bmatrix}\bar{\Sigma}_{1}&&\\ &\bar{\Sigma}_{2}&&\\ &&\ddots&\\ &&&\bar{\Sigma}_{l}\end{bmatrix}\ltimes\begin{bmatrix}\bar{V}_{1}^{H}&&\\ &\bar{V}_{2}^{H}&&\\ &&\ddots&\\ &&&\bar{V}_{l}^{H}\end{bmatrix}\ltimes F_{l}\] \[+F_{l}^{H}\ltimes\begin{bmatrix}\bar{E}_{1}&&\\ &\bar{E}_{2}&&\\ &&\ddots&\\ &&&\bar{E}_{l}\end{bmatrix}\ltimes F_{l}\] \[= F_{l}^{H}\ltimes\begin{bmatrix}\bar{U}_{1}&&\\ &\bar{U}_{2}&&\\ &&\ddots&\\ &&&\bar{U}_{l}\end{bmatrix}\ltimes F_{l}\ltimes F_{l}^{H}\ltimes\begin{bmatrix} \bar{\Sigma}_{1}&&\\ &\bar{\Sigma}_{2}&&\\ &&\ddots&\\ &&&\bar{\Sigma}_{l}\end{bmatrix}\ltimes F_{l}\ltimes F_{l}^{H}\] \[\ltimes\begin{bmatrix}\bar{V}_{1}^{H}&&\\ &\bar{V}_{2}^{H}&&\\ &&\ddots&\\ &&&\bar{V}_{l}^{H}\end{bmatrix}\ltimes F_{l}+F_{l}^{H}\ltimes\begin{bmatrix} \bar{E}_{1}&&\\ &\bar{E}_{2}&&\\ &&\ddots&\\ &&&\bar{E}_{l}\end{bmatrix}\ltimes F_{l},\]
where
\[U:=F_{l}^{H}\ltimes\begin{bmatrix}\bar{U}_{1}&&\\ &\bar{U}_{2}&&\\ &&\ddots&\\ &&&\bar{U}_{l}\end{bmatrix}\ltimes F_{l},\] \[\Sigma:=F_{l}^{H}\ltimes\begin{bmatrix}\bar{\Sigma}_{1}&&\\ &\bar{\Sigma}_{2}&&\\ &&\ddots&\\ &&&\bar{\Sigma}_{l}\end{bmatrix}\ltimes F_{l},\] \[V^{H}:=F_{l}^{H}\ltimes\begin{bmatrix}\bar{V}_{1}^{H}&&\\ &\bar{V}_{2}^{H}&&\\ &&\ddots&\\ &&&\bar{V}_{l}^{H}\end{bmatrix}\ltimes F_{l},\]
and
\[E_{1}:=F_{l}^{H}\ltimes\begin{bmatrix}\bar{E}_{1}&&\\ &\bar{E}_{2}&&\\ &&\ddots&\\ &&&\bar{E}_{l}\end{bmatrix}\ltimes F_{l}\]
are block-circulant matrices. Let \(\mathcal{U}=\text{fold}(\text{circ}^{-1}(U))\), \(\mathcal{S}=\text{fold}(\text{circ}^{-1}(\Sigma))\), \(\mathcal{V}^{H}=\text{fold}(\text{circ}^{-1}(V^{H}))\), and \(\mathcal{E}_{1}=\text{fold}(\text{circ}^{-1}(E_{1}))\), then we can obtain an approximate tensor decomposition for \(\mathcal{A}\) of the form (4.10).
Since each \(\bar{U}_{i}\) is unitary, we know that \(U\) is also a unitary matrix and \(\mathcal{U}=\text{fold}(\text{circ}^{-1}(U))\). From Definition 2.8, we have \(\mathcal{U}^{H}=\text{fold}(\text{circ}^{-1}(U^{H}))\), then
\[\mathcal{U}^{H}\ltimes\mathcal{U}=\text{fold}(U^{H}\ltimes\text{circ}^{-1}(U ))=\text{fold}\left(\begin{bmatrix}I_{m_{1}}\\ 0\\ \vdots\\ 0\end{bmatrix}\right)=\mathcal{I}_{m_{1}m_{1}l},\]
which indicates \(\mathcal{U}\) is a unitary tensor. Similarly, \(\mathcal{V}\) is also unitary.
The tensor \(\mathcal{E}_{1}\) is the approximation error of the decomposition in (4.10). According to the unitary invariance of Frobenius norm, we have
\[\|\mathcal{E}_{1}\|_{F}=\sqrt{\left\|\bar{E}_{1}\right\|_{F}^{2}+\left\|\bar{E }_{2}\right\|_{F}^{2}+\cdots+\left\|\bar{E}_{l}\right\|_{F}^{2}},\]
for the approximation error tensor \(\mathcal{E}_{1}\). This completes the proof.
**Note 2** : Suppose \(\hat{\mathcal{A}}\) is the tensor obtained by applying the Fourier transform to \(\mathcal{A}\) along the third dimension; then the STP-SVD of the tensor \(\mathcal{A}\in\mathbb{C}^{n_{1}\times n_{2}\times n_{3}}\) can be obtained by computing the matrix STP-SVD on each frontal slice of \(\hat{\mathcal{A}}\).
We give the MATLAB pseudocode in Algorithm 3 for this decomposition strategy.
```
Input:  \(\mathcal{A}\in\mathbb{C}^{m_{1}m_{2}\times n_{1}n_{2}\times l}\), \(m_{2},n_{2}\).
Output: \(\mathcal{U},\mathcal{S},\mathcal{V}\).
1: Perform the Fourier transform on \(\mathcal{A}\), i.e. \(\hat{\mathcal{A}}=\text{fft}(\mathcal{A},[],3)\);
2: From Lemma 4.1, compute \(B_{i}\) and \(C_{i}\) and do the SVD on \(B_{i}\):
   for \(i=1:l\) do
      \(\hat{\mathcal{A}}(:,:,i)\approx B_{i}\otimes C_{i}\);
      \([U_{i},\Sigma_{B_{i}},V_{i}]=\text{svd}(B_{i})\);
      \(\Sigma_{i}=\Sigma_{B_{i}}\otimes C_{i}\in\mathbb{C}^{m_{1}m_{2}\times n_{1}n_{2}}\);
      \(\mathcal{U}_{i}\gets U_{i}\), \(\mathcal{S}_{i}\leftarrow\Sigma_{i}\), \(\mathcal{V}_{i}\gets V_{i}\);
   end for
3: \(\mathcal{U}=\text{ifft}(\mathcal{U},[],3)\), \(\mathcal{S}=\text{ifft}(\mathcal{S},[],3)\), \(\mathcal{V}=\text{ifft}(\mathcal{V},[],3)\).
   (\(\hat{\mathcal{A}}(:,:,i)\), \(\mathcal{U}_{i}\), \(\mathcal{S}_{i}\), and \(\mathcal{V}_{i}\) denote the \(i\)-th frontal slices of \(\hat{\mathcal{A}}\), \(\mathcal{U}\), \(\mathcal{S}\), and \(\mathcal{V}\), respectively.)
```
**Algorithm 3** STP-SVD of Tensors
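A runnable counterpart of Algorithm 3, built on the `nearest_kron` helper sketched earlier, may look as follows (our own sketch; the function name is ours).

```
function [U, S, V] = stp_svd_tensor(A, m2, n2)
% STP-SVD of an (m1*m2) x (n1*n2) x l tensor, slice-wise in the Fourier domain.
[m, n, l] = size(A);
m1 = m / m2;  n1 = n / n2;
Ah = fft(A, [], 3);
Uh = complex(zeros(m1, m1, l));
Sh = complex(zeros(m,  n,  l));
Vh = complex(zeros(n1, n1, l));
for i = 1:l
    [Bi, Ci] = nearest_kron(Ah(:,:,i), m2, n2);   % Ah(:,:,i) ~ Bi kron Ci
    [Ui, SBi, Vi] = svd(Bi);
    Uh(:,:,i) = Ui;
    Sh(:,:,i) = kron(SBi, Ci);
    Vh(:,:,i) = Vi;
end
U = ifft(Uh, [], 3);
S = ifft(Sh, [], 3);
V = ifft(Vh, [], 3);
end
```

The truncated variant described next is obtained by replacing `svd(Bi)` by `svds(Bi, R(i))` and shrinking the preallocated slices of `Uh`, `Sh`, and `Vh` accordingly.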
We have given the truncated STP-SVD of matrices; now we give the truncated STP-SVD of tensors. The idea is to use the truncated SVD on \(B_{i}\) every time we decompose \(\hat{\mathcal{A}}(:,:,i)\). Given a vector \(\mathbf{R}=[R_{1},R_{2},\ldots,R_{l}]^{T}\in\mathbb{N}_{+}{}^{l}\), when we perform the STP-SVD on the \(i\)-th frontal slice of \(\hat{\mathcal{A}}\), the SVD in Algorithm 3 is truncated at \(R_{i}\). For the decomposition \(\mathcal{A}=\mathcal{U}\ltimes\mathcal{S}\ltimes\mathcal{V}^{H}+\mathcal{E}_{1}\), the number of diagonal blocks in the \(i\)-th frontal slice of \(\mathcal{S}\) is then \(R_{i}\). Based on this, we give the vector \(\mathbf{R}\in\mathbb{N}_{+}{}^{l}\) a definition in the following.
**Definition 4.1**.: _Let \(\mathcal{A}\in\mathbb{C}^{m\times n\times l}\) be decomposed into \(\mathcal{A}=\mathcal{U}\ltimes\mathcal{S}\ltimes\mathcal{V}^{H}+\mathcal{E}_{1}\), and \(\mathcal{S}\in\mathbb{C}^{m\times n\times l}\) with each frontal slice a block-diagonal matrix. Then we call \(\mathbf{R}=[R_{1},R_{2},\ldots,R_{l}]^{T}\in\mathbb{N}_{+}{}^{l}\) the block rank, where \(R_{i}\) represents the number of diagonal blocks in the \(i\)-th frontal slice of \(\mathcal{S}\)._
The MATLAB pseudocode of the truncated STP-SVD is as follows.
```
Input:  \(\mathcal{A}\in\mathbb{C}^{m_{1}m_{2}\times n_{1}n_{2}\times l}\), \(m_{2},n_{2}\), \(\mathbf{R}\in\mathbb{N}_{+}^{l}\).
Output: \(\mathcal{U},\mathcal{S},\mathcal{V}\).
1: Perform the Fourier transform on \(\mathcal{A}\), i.e. \(\hat{\mathcal{A}}=\text{fft}(\mathcal{A},[],3)\);
2: From Lemma 4.1, compute \(B_{i}\) and \(C_{i}\) and do the truncated SVD on \(B_{i}\):
   for \(i=1:l\) do
      \(\hat{\mathcal{A}}(:,:,i)\approx B_{i}\otimes C_{i}\);
      \(r=R(i)\);
      \([U_{i},\Sigma_{B_{i}},V_{i}]=\text{svds}(B_{i},r)\);
      \(\Sigma_{i}=\Sigma_{B_{i}}\otimes C_{i}\in\mathbb{C}^{rm_{2}\times rn_{2}}\);
      \(\mathcal{U}_{i}\gets U_{i}\), \(\mathcal{S}_{i}\leftarrow\Sigma_{i}\), \(\mathcal{V}_{i}\gets V_{i}\);
   end for
3: \(\mathcal{U}=\text{ifft}(\mathcal{U},[],3)\), \(\mathcal{S}=\text{ifft}(\mathcal{S},[],3)\), \(\mathcal{V}=\text{ifft}(\mathcal{V},[],3)\).
   (\(\hat{\mathcal{A}}(:,:,i)\), \(\mathcal{U}_{i}\), \(\mathcal{S}_{i}\), and \(\mathcal{V}_{i}\) denote the \(i\)-th frontal slices of \(\hat{\mathcal{A}}\), \(\mathcal{U}\), \(\mathcal{S}\), and \(\mathcal{V}\), respectively.)
```
**Algorithm 4** Truncated STP-SVD of Tensors
Now, we give a specific error analysis for the truncated STP-SVD of tensors. First, with the same assumptions and notation as in Theorem 4.2, we write the truncated STP-SVD of tensors in a form similar to (4.10), namely
\[\mathcal{A}=\mathcal{U}\ltimes\mathcal{S}\ltimes\mathcal{V}^{H}+\mathcal{E}, \tag{4.12}\]
where \(\mathcal{E}\) is the corresponding error tensor. Next, we have the upper bound for \(\mathcal{E}\),
\[\|\mathcal{E}\|_{F} =\sqrt{\left\|E^{(1)}\right\|_{F}^{2}+\left\|E^{(2)}\right\|_{F} ^{2}+\cdots+\left\|E^{(l)}\right\|_{F}^{2}}\] \[\leqslant\|E^{(1)}\|_{F}+\|E^{(2)}\|_{F}+\cdots+\|E^{(l)}\|_{F},\]
where \(\|E^{(k)}\|_{F}\) (\(k=1,2,\cdots,l\)) denotes the upper bound of the error produced by using the truncated SVD on \(\hat{\mathcal{A}}(:,:,k)\). This reduces to the error upper bound of the matrix STP-SVD (see (4.9)). Each \(\|E^{(k)}\|_{F}\) can be bounded as
\[\|E^{(k)}\|_{F}\leqslant\sqrt{\sum_{i=2}^{q}(\tilde{\sigma}_{i}^{(k)})^{2}}+ \sqrt{\sum_{j=R_{k}+1}^{p}\left\|S_{j}^{(k)}\right\|_{F}^{2}},\]
from (4.9). Suppose \(\tilde{\mathcal{A}}(:,:,k)\) is obtained by (4.1) with \(A=\hat{\mathcal{A}}(:,:,k)\) therein, and \(\tilde{\sigma}_{i}^{(k)}\), \(i=1,2,\cdots,q\), \(q=\min\left\{m_{1}n_{1},m_{2}n_{2}\right\}\), are the singular values of \(\tilde{\mathcal{A}}(:,:,k)\). The matrix \(S_{j}^{(k)}\) denotes the \(j\)-th diagonal block discarded by the truncated SVD on \(\hat{\mathcal{A}}(:,:,k)\), and \(p=\min\left\{m_{1},n_{1}\right\}\). Then an upper bound for the approximation error \(\mathcal{E}\) in (4.12) is
\[\|\mathcal{E}\|_{F}\leqslant\sum_{k=1}^{l}\left(\sqrt{\sum_{i=2}^{q}(\tilde{ \sigma}_{i}^{(k)})^{2}}+\sqrt{\sum_{j=R_{k}+1}^{p}\left\|S_{j}^{(k)}\right\|_{F }^{2}}\right).\]
### Data Compression of New Tensor Decomposition
The algorithms introduced in this paper can be used for data compression. In fact, the dominant cost of our algorithm is the STP-SVD of \(\hat{\mathcal{A}}(:,:,i)\) (the \(i\)-th frontal slice after the Fourier transform of \(\mathcal{A}\)); therefore, when calculating the required storage, we mainly consider the STP-SVD of \(\hat{\mathcal{A}}(:,:,i)\). For example, suppose \(\mathcal{A}\in\mathbb{C}^{m\times n\times l}\) with \(m=m_{1}m_{2}\) and \(n=n_{1}n_{2}\) is a third-order tensor. Then we need to store \(m\times n\times l\) data entries to keep \(\mathcal{A}\) in the computer. If we use the full T-SVD [14; 15], we need to store \((m+n+1)pl\), \(p=\min\left\{m,n\right\}\), data entries. However, if we use the truncated SVD for each \(\hat{\mathcal{A}}(:,:,i)\) in the T-SVD algorithm, truncated at \(r\) every time, then we need to store \((m+n+1)rl\) data entries. If we choose \(r\ll p\), the truncated T-SVD achieves data compression to a certain extent. Now, if we approximate \(\mathcal{A}\) by using the STP-SVD, the amount of required storage drops to \([(m_{1}+n_{1}+1)q+m_{2}n_{2}]\,l\), \(q=\min\left\{m_{1},n_{1}\right\}\). And if we use the truncated STP-SVD, truncated at \(r\) in the same way, then the data we need to store is only \([(m_{1}+n_{1}+1)r+m_{2}n_{2}]\,l\). Compared to the T-SVD, our algorithm therefore requires less storage for third-order tensors, so data compression is achieved. The detailed results can be seen in Table 1.
Next we introduce the data compression rate
\[Cr=\frac{N}{N_{O}}, \tag{4.13}\]
where \(N\) denotes the amount of data that needs to be stored when we use a given strategy to compress the \(m\times n\times l\) tensor \(\mathcal{A}\), and \(N_{O}\) denotes the amount of data of the original tensor without compression. Obviously, \(Cr<1\) means that the compression strategy stores less data than storing the entire tensor directly, while for \(Cr>1\) the reverse applies. For any \(m\times n\times l\) third-order tensor with \(m=m_{1}m_{2}\) and \(n=n_{1}n_{2}\), the compression rate is shown in Table 2.
Now, we assume that \(\mathcal{A}\) is an \(n\times n\times n\) third-order tensor and take \(m_{1}\)=\(m_{2}\)=\(n_{1}\)=\(n_{2}\)=\(\sqrt{n}\) in our algorithms. Then, the number of data entries we need to store for saving \(\mathcal{A}\) is \(n^{3}\). If we use the full T-SVD [14; 15] for \(\mathcal{A}\), we need to store \(2n^{3}+n^{2}\) data entries. For the truncated T-SVD, if we truncate at \(r\) every time we decompose \(\hat{\mathcal{A}}(:,:,i)\), then we need to store \(2rn^{2}+rn\) data entries. If we use Algorithm 3 to approximate \(\mathcal{A}\), the required storage is \(3n^{2}+n^{\frac{3}{2}}\). If we use the truncated STP-SVD (Algorithm 4) on \(\mathcal{A}\), truncated at \(r\) in the same way, then the data we need to store is only \(n^{2}+2n^{\frac{3}{2}}r+nr\). We can also give the corresponding data compression rate of \(\mathcal{A}\in\mathbb{C}^{n\times n\times n}\) on the basis of (4.13). The result is listed in Table 3.
| Algorithm | Required storage |
|---|---|
| full T-SVD | \((m+n+1)pl\), \(p=\min\left\{m,n\right\}\) |
| full STP-SVD | \([(m_{1}+n_{1}+1)q+m_{2}n_{2}]\,l\), \(q=\min\left\{m_{1},n_{1}\right\}\) |
| truncated T-SVD | \((m+n+1)rl\) |
| truncated STP-SVD | \([(m_{1}+n_{1}+1)r+m_{2}n_{2}]\,l\) |

Note: \(m=m_{1}m_{2}\) and \(n=n_{1}n_{2}\).

Table 1: The algorithms and corresponding required storage for decomposing an \(m\times n\times l\) tensor \(\mathcal{A}\).
## 5 Applications
One of the most important applications of our theoretical knowledge is high-resolution color image compression. Therefore, in this section, we give some numerical experiments related to image compression.
We use the peak signal-to-noise ratio (PSNR) [22] and the structural similarity (SSIM) [24] to measure the quality of image compression. After image compression, the output image will usually differ to some extent from the original image. In order to measure the quality of the processed image, it is common to refer to the PSNR value; the larger the PSNR value, the better the image quality, and a PSNR of 30–40 dB usually indicates that the image quality is good. SSIM is an index measuring the similarity of two images, with values in the range [0, 1]. Of the two images used by SSIM, one is the original uncompressed image and the other is the distorted image after processing. The larger the SSIM value, the smaller the degree of image distortion.
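For completeness, the PSNR of an 8-bit image can be computed directly from its definition (the MATLAB Image Processing Toolbox also provides the `psnr` and `ssim` functions); the two lines below are our own sketch, where `X` is the original image and `Y` the compressed one.

```
mse     = mean((double(X(:)) - double(Y(:))).^2);   % mean squared error
psnr_dB = 10 * log10(255^2 / mse)                   % peak value 255 for 8-bit images
```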
**Experiment 1** For a \(4000\times 6000\times 3\) color image, we choose the input factors \(m_{2}=4\), \(n_{2}=6\) and then use the algorithm introduced in Section 4 to compress it. We give a comparison of the compression
| Algorithm | Cr |
|---|---|
| full T-SVD | \(\dfrac{(m+n+1)p}{mn}\), \(p=\min\left\{m,n\right\}\) |
| full STP-SVD | \(\dfrac{(m_{1}+n_{1}+1)q+m_{2}n_{2}}{mn}\), \(q=\min\left\{m_{1},n_{1}\right\}\) |
| truncated T-SVD | \(\dfrac{(m+n+1)r}{mn}\) |
| truncated STP-SVD | \(\dfrac{(m_{1}+n_{1}+1)r+m_{2}n_{2}}{mn}\) |

Table 2: Data compression rate for decomposing \(\mathcal{A}\in\mathbb{C}^{m\times n\times l}\).
| Algorithm | Required storage | Cr |
|---|---|---|
| full T-SVD | \(2n^{3}+n^{2}\) | \(\dfrac{2n+1}{n}\) |
| full STP-SVD | \(3n^{2}+\sqrt{n^{3}}\) | \(\dfrac{3\sqrt{n}+1}{\sqrt{n^{3}}}\) |
| truncated T-SVD | \(2rn^{2}+rn\) | \(\dfrac{2rn+r}{n^{2}}\) |
| truncated STP-SVD | \(n^{2}+2r\sqrt{n^{3}}+nr\) | \(\dfrac{n+2r\sqrt{n}+r}{n^{2}}\) |

Table 3: The corresponding required storage and compression rate of algorithms for decomposing an \(n\times n\times n\) tensor \(\mathcal{A}\).
quality obtained by using the truncated T-SVD with \(R=[200,200,200]^{T}\) and the STP-SVD without truncation, respectively, and show the results in Table 4.
It is easy to see that the STP-SVD without truncation achieves almost the same or even better experimental results compared with the truncated T-SVD with \(R=[200,200,200]^{T}\).
Moreover, the STP-SVD without truncation saves a lot of time.
**Experiment 2** In Table 5, we compare the compression quality obtained by using the truncated T-SVD and the truncated STP-SVD for several images with different resolutions. Here we emphasize that \(R\) has different meanings for the truncated T-SVD and the truncated STP-SVD. For the T-SVD, \(R_{i}\) represents the rank used when the SVD is performed on the \(i\)-th frontal slice of the target tensor, while the meaning of \(R\) for the STP-SVD is the one explained in Definition 4.1. The image sizes used for the numerical experiments are 4887 \(\times\) 7500 \(\times\) 3, 3000 \(\times\) 3000 \(\times\) 3, 6000 \(\times\) 8000 \(\times\) 3, 4000 \(\times\) 6000 \(\times\) 3, and 5304 \(\times\) 7952 \(\times\) 3; the image results and the data results of the numerical experiments are shown below. The first column ((a), (d), (g), (j), (m)) of Figure 6 shows the original images, the second column ((b), (e), (h), (k), (n)) shows the images processed by the truncated STP-SVD, and the third column ((c), (f), (i), (l), (o)) shows the images processed by the truncated T-SVD.
**Note 3** : TSTP-SVD and TT-SVD in Table 5 denote truncated STP-SVD and truncated T-SVD, respectively.
It is obvious from Table 5 that the algorithm introduced in this paper saves a lot of computation time while achieving almost the same compression quality as the T-SVD.
| | STP-SVD without truncation | Truncated T-SVD with \(R=[200,200,200]^{T}\) |
|---|---|---|
| TIME | 15.888047s | 119.671456s |
| Related Error | 0.2117 | 0.2169 |
| PSNR | 25.3376 | 25.2777 |
| SSIM | 0.7981 | 0.7026 |

Table 4: Comparison of the compression quality obtained by using the truncated T-SVD with \(R=[200,200,200]^{T}\) and the STP-SVD without truncation for Image 01.
Figure 5: The original image and compressed images obtained by using STP-SVD and truncated T-SVD.
Figure 6: The result images of numerical experiments.
| Image Scale | \(m_{2},n_{2}\) | \(R\) | Index | TSTP-SVD | TT-SVD |
|---|---|---|---|---|---|
| 4887 \(\times\) 7500 \(\times\) 3 | 3, 5 | [50, 50, 50]\({}^{T}\) | TIME | 24.348120s | 43.017733s |
| | | | Related Error | 0.0442 | 0.0430 |
| | | | PSNR | 35.1389 | 35.3425 |
| | | | SSIM | 0.9764 | 0.9773 |
| 3000 \(\times\) 3000 \(\times\) 3 | 5, 5 | [50, 50, 50]\({}^{T}\) | TIME | 5.37095s | 10.489978s |
| | | | Related Error | 0.1311 | 0.1262 |
| | | | PSNR | 26.2742 | 26.5784 |
| | | | SSIM | 0.7358 | 0.7538 |
| 6000 \(\times\) 8000 \(\times\) 3 | 3, 4 | [100, 100, 100]\({}^{T}\) | TIME | 41.495969s | 110.945373s |
| | | | Related Error | 0.0597 | 0.0583 |
| | | | PSNR | 28.9128 | 29.0973 |
| | | | SSIM | 0.9548 | 0.9580 |
| 4000 \(\times\) 6000 \(\times\) 3 | 10, 10 | [50, 50, 50]\({}^{T}\) | TIME | 6.891970s | 28.234216s |
| | | | Related Error | 0.0396 | 0.0381 |
| | | | PSNR | 33.5539 | 33.8216 |
| | | | SSIM | 0.8520 | 0.8551 |
| 5304 \(\times\) 7952 \(\times\) 3 | 4, 4 | [100, 100, 100]\({}^{T}\) | TIME | 39.946396s | 97.926987s |
| | | | Related Error | 0.1173 | 0.1108 |
| | | | PSNR | 26.0268 | 26.5003 |
| | | | SSIM | 0.8696 | 0.8788 |

Table 5: The data results of numerical experiments.
## 6 Conclusion
In this paper, we have introduced a new definition of the semi-tensor product of third-order tensors. Using this product, we have also presented a new type of tensor decomposition strategy and given the corresponding algorithms. This decomposition strategy generalizes the matrix SVD based on the semi-tensor product to third-order tensors. The new decomposition model can achieve data compression to a great extent compared with existing tensor decomposition algorithms. We gave the theoretical analysis and verified it with numerical experiments in this paper.
The advantage of the algorithm based on the semi-tensor product of tensors is that it achieves data compression by reducing data storage. The decomposition strategy in this paper reduces the number of calculations and the storage space, thereby speeding up the computation when a third-order tensor is decomposed. Our algorithm certainly has a limitation: when a matrix is decomposed on the basis of the semi-tensor product, the decomposition is only approximate, which introduces errors when we decompose tensors. However, if each frontal slice of the target tensor is a low-rank matrix, our algorithm gives very good results. Our decomposition model still has room for improvement. In future work, we will investigate how to build a decomposition model based on the semi-tensor product for \(p\)-th order tensors with \(p>3\), and we will consider whether tensors with special structures admit special treatment.
In this paper, we define a semi-tensor product for third-order tensors. Based on this definition, we propose a new tensor decomposition strategy and describe a concrete algorithm. This decomposition strategy generalizes the tensor SVD based on the semi-tensor product. Owing to the characteristics of the semi-tensor product, the data scale can be compressed, so that data compression becomes possible with this method. We present numerical comparison results that demonstrate the advantages of this decomposition.
2305.10173 | Quantum theory without the Axiom of choice, and Lefschetz Quantum
Physics | In this conceptual paper, we discuss quantum formalisms which do not use the
famous Axiom of Choice. We also consider the fundamental problem which
addresses the (in)correctness of having the complex numbers as the base field
for Hilbert spaces in the K{\o}benhavn interpretation of quantum theory, and
propose a new approach to this problem (based on the Lefschetz principle).
Rather than a Theorem--Proof--paper, this paper describes two new research
programs on the foundational level, and focuses on fundamental open questions
in these programs which come along the way. | Koen Thas | 2023-05-17T12:57:19 | http://arxiv.org/abs/2305.10173v1 | # Quantum theory without the axiom of choice, and Lefschetz quantum physics
###### Abstract.
In this conceptual paper, we discuss quantum formalisms which do not use the famous Axiom of Choice. We also consider the fundamental problem which addresses the (in)correctness of having the complex numbers as the base field for Hilbert spaces in the Kobenhavn interpretation of quantum theory, and propose a new approach to this problem (based on the Lefschetz principle). Rather than a Theorem-Proof-paper, this paper describes two new research programs on the foundational level, and focuses on fundamental open questions in these programs which come along the way.
###### Contents
* 1 Introduction
* 2 Axiom of Choice: mathematical discussion
* 3 Kobenhavn interpretations beyond \(\mathbb{C}\)
* 4 Quantum Lefschetz Principle A
* 5 Automorphisms of \(\mathbb{C}\) and codes
* 6 Eigenvalues, eigenvectors and probabilities
* 6.1 The \(\mathbb{C}\)-algebra \(\mathbb{C}\) and the \(\mathbb{C}\)-algebra \(\mathbb{C}\)
[MISSING_PAGE_POST]
This very discussion certainly is one of the leading threads in this paper, and motivates us to introduce a conjectural Lefschetz quantum principle below.
Secondly, _if_ we assume to work over \(\mathbb{C}\), then often implicitly the infamous Axiom of Choice is used in the mathematical machinery used to describe quantum physics. But since the philosophical discussion underlying quantum physics -- including its many famous thought experiments -- is extremely important for the theory, and since mathematics -- even basic linear algebra -- is so different without the Axiom of Choice, we want to investigate what happens to quantum theory if one does not rely on the Axiom of Choice.
We refer to the essay [3] for a deeper philosophical discussion on this matter.
### Plan of the present paper
In section 2, we will mathematically discuss the Axiom of Choice. In section 3, we tersely describe the author's approach to general quantum theories (over division rings) and we pay particular attention to finite fields and the minimal model. In section 4, we develop a conjectural Lefschetz principle for quantum theory, and to that end we first mathematically discuss the classical Lefschetz principle. The next section is the short section 5, in which we develop some basic mechanisms to develop quantum codes in models with the Axiom of Choice. In the final section, we discuss the impact of not accepting the Axiom of Choice on measurements and probabilities, and devise a number of thought experiments.
### Acknowledgments
The author wishes to thank Karl Svozil and Bill Wootters for a number of interesting and helpful communications.
## 2. Axiom of Choice: mathematical discussion
We start this section with a first formal formulation of the Axiom of Choice (AC):
"For every indexed family \(\left(S_{i}\right)_{i\in I}\) of nonempty sets, there exists an indexed family \(\left(x_{i}\right)_{i\in I}\) such that \(x_{j}\in S_{j}\) for every \(j\in I\)."
Let us look at a first illuminating example. Suppose each \(S_{i}\) is a subset of the positive integers \(\mathbb{N}\); then we could define \(x_{i}\) as the smallest number in \(S_{i}\). The function which assigns to each \(S_{j}\) the element \(x_{j}\) is called a _choice function_. In this case, we do not need to invoke the Axiom of Choice in order to fulfil the desired property. But if we define \(\left\{S_{i}\mid i\in I\right\}\) as the set of nonempty subsets of the reals \(\mathbb{R}\), no such choice function is known.
### Russell's socks
Bertrand Russell gave a particularly entertaining example where one has to invoke the Axiom of Choice in order to define a choice function. Suppose one has an infinite collection of pairs of socks, where we assume that in one pair of socks there is no way to distinguish between the socks. Then in order to select one sock from each pair, we need to invoke AC. Note that if we had started from pairs of shoes instead of socks, we _could_ have defined a choice function ("select the left shoe").
### Choice functions and Cartesian products
In terms of choice functions, we can formulate AC as follows:
"For any set \(\mathcal{X}\) of nonempty sets, there exists a choice function \(f\) defined on \(\mathcal{X}\) which maps each set of \(\mathcal{X}\) to an element of that set."
Obviously, each such choice function \(f\) defines an element of the Cartesian product of the sets in \(\mathcal{X}\), so that we can give another equivalent statement:
"For any set \(\mathcal{X}\) of nonempty sets, the Cartesian product of the sets in \(\mathcal{X}\) is nonempty."
### Vector spaces
The equivalent formulation which interests us the most in the context of the present paper, is the following.
"Every vector space has a basis."
Obviously, in the context of quantum theory such results are extremely important!
Note that upon accepting the Axiom of Choice, one can show that the size of a basis of a given vector space is independent of the choice of basis, and this size yields a notion of _dimension_.
## 3. Kobenhavn interpretations beyond \(\mathbb{C}\)
In classical quantum theory following the Kobenhavn interpretation -- in some papers called "Actual Quantum Theory" (AQT) -- the state space is a Hilbert space (foreseen with the standard inner product). More precisely:
* a physical quantum system is represented by a Hilbert space \(\mathcal{H}=\Big{(}(\mathbb{C}^{\omega},+,\cdot),\langle\cdot,\cdot\rangle \Big{)}\), with \(\langle\cdot,\cdot\rangle\) the standard inner product and \(\omega\) allowed to be non-finite;
* the standard inner product \(\langle\cdot,\cdot\rangle\) sends \(\Big{(}(x_{1},\ldots,x_{\omega}),(y_{1},\ldots,y_{\omega})\Big{)}\) to \(\overline{x_{1}}y_{1}+\cdots+\overline{x_{\omega}}y_{\omega}\) (or \(x_{1}\overline{y_{1}}+\cdots+x_{\omega}\overline{y_{\omega}}\)), where \(\overline{c}\) is the complex conjugate of \(c\in\mathbb{C}\); complex conjugation is an involutary automorphism of the field \(\mathbb{C}\);
* up to complex scalars, pure states (wave functions) are represented by nonzero vectors in \(\mathbb{C}^{\omega}\); usually, one considers normalized vectors;
* time evolution operators are represented by linear operators of \(\mathbb{C}^{\omega}\) that preserve \(\langle\cdot,\cdot\rangle\), that is, _unitary operators_. If \(\omega\) is finite, unitary operators correspond to nonsingular complex \((\omega\times\omega)\)-matrices \(U\) such that \(UU^{*}=\mathrm{id}\);
* measuring an observable \(A\) in a system described by the wave function \(|\psi\rangle\), amounts to collapsing \(|\psi\rangle\) into one of the orthogonal eigenvectors \(|\psi_{i}\rangle\) of the Hermitian operator \(A\), yielding as measurement the corresponding eigenvalue \(\lambda_{i}\);
* composite product states correspond to tensor products \(|\psi_{1}\rangle\otimes|\psi_{2}\rangle\in\mathcal{H}_{1}\otimes\mathcal{H}_{2}\); if a state in \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\) is not a product state, it is entangled;
* one follows Born's rule, which says that \(|\langle\psi,\psi_{i}\rangle|^{2}\) is the probability that the measurement \(\lambda_{i}\) will be made;
* (\(\cdots\)).
In the rest of this section we tersely explain our approach of [21], which unifies all known modal quantum theories (over finite fields \(\mathbb{F}_{q}\), algebraically closed fields, general division rings with involution).
### The general setting
A _division ring_ satisfies all the axioms of a field, except that its multiplication is not necessarily commutative. Sometimes they are called "skew fields," but we prefer the name division ring. In [21] we described a general Kobenhavn approach in which all known classical and modal quantum theories are united in one and the same framework. The main philosophy is that instead of the field of complex numbers or finite fields, the underlying coordinatizing structures are generalized to division rings, so that we consider generalized Hilbert spaces over division rings instead of complex Hilbert spaces or their finite field analogues. Of course, one has to have a good alternative for the classical inner products, and this is perfectly possible if we equip the division rings with a so-called _involution_ (cf. the next subsection). The details can be found in the next subsection.
### Standard \((\sigma,1)\)-Hermitian forms
A "division ring with involution" is a division ring with an involutory anti-automorphism. If \(k\) is a division ring with involution \(\sigma\), the _standard \((\sigma,1)\)-Hermitian form_ on the right vector space \(V(d,k)\), is given by
\[\Big{\langle}x,y\Big{\rangle}\ :=\ x_{1}^{\sigma}y_{1}+\cdots+x_{d}^{\sigma}y_{d}, \tag{2}\]
where \(x=(x_{1},\ldots,x_{d})\) and \(y=(y_{1},\ldots,y_{d})\).
**Remark 3.1**.: In the case that \(\sigma={\rm id}\), we obtain a form which is usually called _symmetric_; it is not a proper Hermitian form, but still comes in handy in some situations (for example in cases of field reduction: "real Hilbert spaces" have often been considered in quantum theory; see e.g. the work of Wootters et al. [24, 25]).
We propose to describe all classical and modal quantum theories in one and the same framework, under the umbrella of "General Quantum Theories" (GQTs), as follows:
From now on, we propose to depict a physical quantum system in a general Hilbert space \(\mathcal{H}=\left((V(\omega,k),+,\cdot),\langle\cdot,\cdot\rangle\right)\), where \(k\) is a division ring with involution and \(\langle\cdot,\cdot\rangle\) is a Hermitian form.
Following [21], we speak of a _standard GQT_ if given an involution \(\sigma\), the general Hilbert space comes with the standard \((\sigma,1)\)-Hermitian form. As some fields such as the real numbers and the rational numbers do not admit nontrivial involutions, they only can describe "improper" quantum systems.
### Algebraically closed fields
Let \(k\) be any algebraically closed field in characteristic \(0\). It is well known that upon accepting the Axiom of Choice, there exists an involution \(\gamma\) in \({\rm Aut}(k)^{\times}\), where \({\rm Aut}(k)\) denotes the automorphism group of \(k\). Now consider the set
\[k_{\gamma}:=\{\kappa\in k\mid\kappa^{\gamma}=\kappa\}. \tag{3}\]
One easily shows that \(k_{\gamma}\), endowed with the addition and multiplication coming from \(k\), is also a field. There is however more by [2]:
**Theorem 3.2** (\((\mathbb{C},\mathbb{R})\)-Analogy -- algebraically closed version in char. \(0\)).: _Let \(k\) be any algebraically closed field in characteristic \(0\). Let \(\gamma\) be an involution in \({\rm Aut}(k)^{\times}\). Then \(-1\) is not a square in \(k_{\gamma}\). Suppose \(i\in k\) is such that \(i^{2}=-1\). Then \(k=k_{\gamma}+i\cdot k_{\gamma}\) and \([k:k_{\gamma}]=2\)._
So each element of \(k\) has a unique representation as \(a+bi\), with \(a,b\in k_{\gamma}\) and \(i\) a fixed solution of \(x^{2}=-1\). Fields which have index \(2\) in their algebraic closure are called _real-closed fields_, and can always be constructed as a \(k_{\gamma}\) of some involution \(\gamma\). Real-closed fields share many properties with the reals \(\mathbb{R}\): each such field is _elementarily equivalent_ to the reals, which by definition means that it has the same first-order properties as the reals. We call a GQT _complex-like_ if it is defined over an algebraically closed field \(k\) with nontrivial involution \(\gamma\), where the elements of \(k\) are represented in Theorem 3.2 with respect to the field \(k_{\gamma}\).
**Remark 3.3**.: The analogy goes even further: once we have defined \(k_{\sigma}\) as above, and we represent each element \(x\) in \(k\) as \(x=u+iv\) (which can be done by Theorem 3.2), it can be shown that the automorphism \(\sigma\) is given by
\[\sigma:\;k\mapsto k:\;u+iv\mapsto u-iv\;\;\;\mbox{(complex conjugation)}. \tag{4}\]
### Extension of quantum theories
If we consider a GQT \(\mathcal{T}\) over a field \(k\) in characteristic \(0\), the following fundamental question arises:
Is \(\mathcal{T}\)_embeddable_ in a complex-like theory (or in any other GQT, for that matter)?
Here, the notion of "embeddable" is obvious: if \(k\) comes with the involution \(\gamma\), we want to have a field extension \(\ell\Big{/}k\) for which \(\ell\) is algebraically closed, and an involution \(\overline{\gamma}\) of \(\ell\) for which the restriction of \(\overline{\gamma}\) to \(k\) precisely is \(\gamma\). Since any GQT depends only on the Hermitian matrix of the \((\sigma,1)\)-Hermitian form with respect to a chosen basis (with suitable adaptation to the infinite-dimensional case), it is clear that if the aforementioned GQT comes with matrix \(A\) over \(k\), then the same matrix \(A\) defines a \((\overline{\gamma},1)\)-Hermitian form over \(\ell\) which induces the initial form over \(k\).
**Observation 3.4**.: _If we fix the dimension of the Hilbert space, then any GQT over \(k\) (and with respect to \(\gamma\)) is part of the GQT over \(\ell\) (with involution \(\overline{\gamma}\))._
The reason why a good extension theory is desired, is explained in the next paragraph.
**COMPARISON THEORY**.: Any two fields \(k\) and \(k^{\prime}\) of the same characteristic are contained (as subfields) in a field \(\ell\). The following construction is simple: let \(\wp\) be the prime field in both \(k\) and \(k^{\prime}\) (isomorphic to \(\mathbb{Q}\) in characteristic \(0\) and to \(\mathbb{F}_{p}\) in characteristic \(p>0\)), generated by \(0\) and \(1\). Then put \(k=\wp(S)\) and \(k^{\prime}=\wp(S^{\prime})\), with \(S\) (resp. \(S^{\prime}\)) a basis over \(\wp\) of \(k\) (resp. \(k^{\prime}\)) consisting of algebraic and transcendental elements over \(\wp\). Then \(\wp(S\cup S^{\prime})\) is "the" desired field. Obviously such a field \(\ell\) with the extension property is not unique, since \(\ell\) can be extended indefinitely (for instance, by adding transcendental elements to \(\ell\)). Once we have formulated a good extension formalism for general quantum theories (over algebraically closed fields), we will be able to evaluate problems formulated over \(k\) and \(k^{\prime}\) in one and the same theory formulated over \(\ell\) (and any of its extensions). In that way, if we fix the characteristic of the fields, we could look at a quantum theoretical setting prepared over different fields \(k\) and \(k^{\prime}\) as being two guises of the same story: just extend the GQTs over \(k\) and \(k^{\prime}\) to the appropriate GQTs over \(\ell\). Since it is sometimes preferable to work over algebraically closed fields, we want essentially that the following diagram commutes, after the map \(\mathsf{GQT}\) is applied which associates with each couple \((\rho,\phi)\) (where \(\rho\) is a field or division ring, and \(\phi\) an involution of \(\rho\)) the corresponding standard general quantum theory. (Of course, the same ideas can be expressed for non-standard GQTs as well.)
**SCHNOR'S RESULT ON INVOLUTIONS**.: The bad news is that a general comparison dream cannot hold, as was shown in [21]. However, the result is true if we suppose \(k\) to _be algebraically closed to begin with_, by a result of Schnor [16] (which says that if \(\ell\big{/}k\) is a field extension of algebraically closed fields, and \(\gamma\) is an involution of \(k\), then there exists an involution \(\overline{\gamma}\) of \(\ell\) which fixes \(k\) and induces \(\gamma\) in \(k\)).
**Theorem 3.5** (Embedding Theorem of quantum theories [21]).: _Any GQT over an algebraically closed field \(k\) with involution \(\gamma\) is embeddable in a GQT over \(\ell\), where \(\ell\) is any algebraically closed field extension of \(k\)._
In particular, this is also true for AQT: we can embed AQT in an infinite number of "nonisomorphic" GQTs over algebraically closed fields in characteristic \(0\), and this aspect adds an infinite number of layers to the theory which can be used in various situations (such as quantum coding schemes).
### The minimal model: \(\overline{\mathbb{Q}}\)
Recall the following basic result:
**Theorem 3.6**.: _Let \(k\) be any field. If \(k\) is not finite, then \(|\overline{k}|=|k|\), where \(\overline{k}\) is an algebraic closure of \(k\); if \(k\) is finite, \(\overline{k}\) is countable._
It's important to note that the statement relies on the Axiom of Choice!
Since \(\mathbb{Q}\) is the unique prime field in characteristic \(0\), each field of characteristic \(0\) contains \(\mathbb{Q}\), and hence each algebraically closed field \(k\) in characteristic \(0\) contains the algebraically closed field \(\overline{\mathbb{Q}}\) as well. By Theorem 3.6, \(\overline{\mathbb{Q}}\) is countable, and hence it is also minimal with respect to being algebraically closed.
**Observation 3.7**.: _The set of GQTs over \(\overline{\mathbb{Q}}\) can be considered as the set of minimal models (over algebraically closed fields) in characteristic \(0\)._
The idea is that by Theorem 3.5, we know that any minimal GQT can be embedded in a GQT over any given algebraically closed field in characteristic \(0\). And also, if \(k\) is algebraically closed in characteristic \(0\) and we consider a GQT over \(k\) with involution \(\sigma\), one observes that \(\sigma\) fixes the prime field \(\mathbb{Q}\), and so it also fixes the algebraic closure \(\overline{\mathbb{Q}}\).1
Footnote 1: Note that the induced action might be trivial, in which case the induced quantum theory over \(\overline{\mathbb{Q}}\) is orthogonal/symmetric.
In fact, we can do a bit better:
**Observation 3.8**.: _Each general quantum theory \(\mathcal{T}\) over an algebraically closed field \(k\) in characteristic \(0\) induces a general quantum theory over \(\overline{\mathbb{Q}}\)._
Proof.: Suppose \(\sigma\) is the involutory automorphism which comes with \(\mathcal{T}\). As we have seen, it fixes \(\overline{\mathbb{Q}}\). If \(\sigma\) would fix \(\overline{\mathbb{Q}}\) elementwise, then we obtain a nontrivial element of order \(2\) in \(\mathsf{Gal}(\mathbb{C}/\overline{\mathbb{Q}})\) (where the latter denotes the automorphism group of \(\mathbb{C}\) which fixes its subfield \(\overline{\mathbb{Q}}\) elementwise), which is a contradiction since this Galois group is known to be torsion-free (that is, contains no nontrivial elements of finite order).
\[\mbox{GQT over $\overline{\mathbb{Q}}$}\quad\longrightarrow\quad\mbox{GQT over $k$} \tag{5}\]
From this important point of view, it feels more natural to consider quantum theory coordinatized by the algebraic closure \(\overline{\mathbb{Q}}\) of the rational numbers. In fact, as the rational numbers are dense in the real numbers, the expression
\[\mathbb{Q}(i)=\mathbb{Q}+i\mathbb{Q}\ \subset\ \overline{\mathbb{Q}}+i\overline{\mathbb{Q}}\ \subset\ \mathbb{R}+i\mathbb{R}\ =\ \mathbb{C} \tag{6}\]
shows that every element of \(\mathbb{C}\) can be seen as a limit of Cauchy sequences in \(\overline{\mathbb{Q}}\), in other words:
**Observation 3.9** (Universality of the minimal model).: _Classical quantum theory can be perfectly approximated by modal quantum theory over \(\overline{\mathbb{Q}}\), while the latter is countable, also algebraically closed and contained in every division ring (and in particular: field) in characteristic \(0\). _
### Finite fields
In [17], Schumacher and Westmoreland introduced modal quantum theory (MQT) as a finite "toy model" for AQT, in which the underlying field \(\mathbb{C}\) is replaced by a finite field \(\mathbb{F}_{q}\) (where \(q\) is an arbitrary prime power). Inner products in the usual formal sense are not defined on vector spaces over a finite field, and hence this aspect is not covered in [17]. This means that the very notion of "orthogonal states" does not occur in their approach. In [14], vector spaces are considered over finite fields \(\mathbb{F}_{p}\) with \(p\) a prime, for which the following property holds:
\[-1\mbox{ is not a square in $\mathbb{F}_{p}$, but it is in $\mathbb{F}_{p^{2}}$.}\]
The reason is obvious: besides the many similarities between \(\left(\mathbb{C},\mathbb{R}\right)\) and \(\left(\mathbb{F}_{p^{2}},\mathbb{F}_{p}\right)\), one has at one's disposal a natural Hermitian bilinear form which shares important aspects with the inner product \(\left\langle\cdot,\cdot\right\rangle\). In [21] we showed that there is no need at all for restricting the theory to primes with the aforementioned property. Here is a quick overview.
* Let \(q\) be any prime power; then up to isomorphism \(\mathbb{F}_{q}\) has a unique extension of degree \(2\), namely \(\mathbb{F}_{q^{2}}\). The map \[\gamma:\mathbb{F}_{q^{2}}\mapsto\mathbb{F}_{q^{2}}:a\mapsto a^{q} \tag{7}\] sends each element of \(\mathbb{F}_{q}\) to itself, while being an involutory automorphism of \(\mathbb{F}_{q^{2}}\).
* Let \(n\) be any positive integer different from \(0\); then if \(V=V(n,q^{2})\) is the \(n\)-dimensional vector space over \(\mathbb{F}_{q^{2}}\), define for \(x=(x_{1},\ldots,x_{n})\) and \(y=(y_{1},\ldots,y_{n})\) in \(V\), \[\left\langle x,y\right\rangle:=x_{1}^{\gamma}y_{1}+\cdots+x_{n}^{\gamma}y_{n}. \tag{8}\]
* For \(\rho,\rho^{\prime}\in\mathbb{F}_{q^{2}}\) we have that \[\left\langle\rho x,\rho^{\prime}y\right\rangle=\rho^{\gamma}\Big{\langle}x,y\Big{\rangle}\rho^{\prime},\text{ and }\Big{\langle}x,y\Big{\rangle}^{\gamma}=\Big{\langle}y,x\Big{\rangle}. \tag{9}\]
The following observation is taken from [21].
**Observation 3.10**.: _The linear \((n\times n)\)-matrices \(U\) which preserve the form \(\left\langle\cdot,\cdot\right\rangle\) precisely are unitary matrices: \((n\times n)\)-matrices \(U\) for which \(U^{*}U\) is the \((n\times n)\)-identity matrix, where \(U^{*}:=(U^{\gamma})^{T}\)._
Classical quantum theory compares with MQT over finite fields as follows.
**Remark 3.11** (\((\mathbb{C},\mathbb{R})\)-Analogy -- finite fields version).: In this model of QT, \(\mathbb{F}_{q^{2}}\) plays the role of \(\mathbb{C}\), \(\mathbb{F}_{q}\) the role of \(\mathbb{R}\), \(\gamma\) the role of complex conjugation, and \(\left\langle\cdot,\cdot\right\rangle\) the role of inner product. By choosing any element \(\kappa\) in \(\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\), we can represent each element of \(\mathbb{F}_{q^{2}}\) uniquely as \[a+\kappa b, \tag{10}\] with \(a,b\in\mathbb{F}_{q}\). So viewed from this representation, the situation at least looks "a little classical."
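To make the finite-field model above concrete, here is a small self-contained sketch (our own toy code, not taken from [21] or [17]): it implements \(\mathbb{F}_{9}=\mathbb{F}_{3}(\kappa)\) with \(\kappa^{2}=-1\), the involution \(\gamma:a\mapsto a^{3}\) of (7) and the standard form (8), and then checks the identity \(\langle x,y\rangle^{\gamma}=\langle y,x\rangle\) of (9) together with a small unitary in the sense of Observation 3.10.

```python
# Toy model of F_9 = F_3(kappa), kappa^2 = -1, for the finite-field quantum theory
# of this subsection (illustrative sketch only).
from dataclasses import dataclass

P = 3  # characteristic; -1 is a non-square mod 3, so F_9 = F_3(kappa)

@dataclass(frozen=True)
class F9:
    a: int  # the element a + kappa*b, with a, b in F_3
    b: int

    def __add__(self, o):
        return F9((self.a + o.a) % P, (self.b + o.b) % P)

    def __mul__(self, o):
        # (a + kb)(c + kd) = (ac - bd) + k(ad + bc), using k^2 = -1
        return F9((self.a * o.a - self.b * o.b) % P,
                  (self.a * o.b + self.b * o.a) % P)

    def conj(self):
        # gamma: x -> x^q with q = 3; the Frobenius map gives a + kb -> a - kb
        return F9(self.a % P, (-self.b) % P)

def herm(x, y):
    """Standard (gamma,1)-Hermitian form <x,y> = sum_i x_i^gamma y_i, cf. (8)."""
    s = F9(0, 0)
    for xi, yi in zip(x, y):
        s = s + xi.conj() * yi
    return s

x = [F9(1, 2), F9(0, 1)]
y = [F9(2, 0), F9(1, 1)]
assert herm(x, y).conj() == herm(y, x)      # <x,y>^gamma = <y,x>, cf. (9)

u = F9(0, 1)                                # u = kappa satisfies u^gamma * u = 1,
assert u.conj() * u == F9(1, 0)             # so diag(kappa, 1) is a unitary matrix
```

The same code works verbatim for any prime \(p\) with \(-1\) a non-square modulo \(p\); for general prime powers \(q\) one would need a genuine \(\mathbb{F}_{q^{2}}\) implementation.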
### The base field in fixed characteristic: quantum Lefschetz
If we agree that in the Kobenhavn interpretation we need an algebraically closed field \(\ell\) for describing, among other things, a theory of observables based on eigenvalues and eigenvectors, we still need to decide in which _characteristic_ it lives. Once a characteristic \(p\geq 0\) is fixed, another fundamental question emerges:
Which base field do we select among the division rings of characteristic \(p\)?
The very question of which base field is needed in the Kobenhavn interpretation has a long history, and has been considered in many papers. Here is a very short overview of a number of authors who consider the "base field question" in recent work, as we compare it to our own GQT-approach in [21].
**BARRETT/HARDY**: Both Hardy and Barrett see states as probability vectors in some vector space \(V\), and as the probability entries are real numbers (contained in the interval \((0,1)\)), \(V\) is assumed to be a real vector space [6, 10]. This means that the underlying algebraic structure is assumed to contain the field of real numbers.
In the unifying viewpoint of GQTs [21], probabilities are manifestations of the Hermitian form (through the generalized Born rule, e.g.), and the field or division ring one uses as underlying algebraic structure (whatever it is).
In [10], two integer parameters \(K\) and \(N\) emerge for which the identity \(K=N^{2}\) holds. If one considers underlying algebraic structures such as the real numbers \(\mathbb{R}\), the complex numbers \(\mathbb{C}\) or the quaternions \(\mathbb{H}\), only \(\mathbb{C}\) confirms the aforementioned identity. Hardy concludes that this -- at least intuitively -- points towards the complex numbers, without providing a formal proof [11]. But the identity \(K=N^{2}\) was not considered in the entire realm of fields and division rings in characteristic \(0\) -- only over the set \(\{\mathbb{C},\mathbb{R},\mathbb{H}\}\). (On the other hand, assuming the probabilities to be rational numbers in \((0,1)\) would also yield more flexibility for \(V\).) Barrett's generalized probabilistic theories are based on Hardy's axiomatic approach, so he ends up with the complex numbers as well.
In our approach of GQTs [21], we also work with vector spaces, but any division ring (with involution) is allowed to be the coordinatizing agent, so as to find unifying behavior in this universe of quantum theories. The no-cloning result of [21], for instance, solely follows from the concept of linearity/superposition and works for all division rings -- hence also fields and algebraically closed fields, such as in particular \(\mathbb{C}\). It shows that no-cloning is not a particular instance at all of quantum theory represented in the framework of complex numbers.
**CASSINELLI
We also refer to [1, 5, 8, 19] (from that paper) for relevant papers.
The discussion of the current subsection motivates us to introduce a _quantum Lefschetz formalism_ in the next section.
## 4. Quantum Lefschetz Principle A
In this section we introduce the quantum Lefschetz principle. We first explain the _mathematical_ Lefschetz principle.
### Lefschetz principle
The principle in its most naive but perhaps clearest form states the following:
"Every true statement about an algebraic variety over the complex numbers \(\mathbb{C}\) is also true for a algebraic variety over _any_ algebraically closed field \(\ell\)."
In fact, the naive formulation of the main principle is even stronger: to check whether a statement about an algebraic variety over an algebraically closed field \(\ell\) in characteristic \(0\) is true, it is sufficient to check it over \(\mathbb{C}\) (whatever that means). Lefschetz, in his book [13], states:
"In a certain sense, algebraic geometry over a ground field of characteristic \(0\) may be reduced to complex algebraic geometry."
(Recall the discussion in the introduction concerning the "choice of \(\mathbb{C}\).") As Seidenberg remarks in [18], the situation is not quite as simple as Lefschetz claims. He describes the following beautiful yet simple example: consider two curves defined by equations \(f(X,Y,Z)=0\) and \(g(X,Y,Z)=0\) in the projective plane over the algebraically closed field \(k\) of characteristic \(0\). In the complex case, it is well known that such curves meet in at least one point: there exist numbers \(a,b,c\in\mathbb{C}\) such that
\[f(a,b,c)\;=\;0\;=\;g(a,b,c). \tag{11}\]
According to Lefschetz, we could conclude the same for \(k\). Indeed, assume that \(k\) is the algebraic closure of a field which is finitely generated over \(\mathbb{Q}\), and which is contained in \(\mathbb{C}\), and hence that \(k\) is a subfield of \(\mathbb{C}\) (cf. Theorem 4.1 below). We then conclude that the curves have a point in common (because they have a point in common over \(\mathbb{C}\)). Only now, the situation has changed: the property we were set to obtain was that the curves _have_ a point in common _over_\(k\), but now, we can only conclude that they have a point in common in some extension field of \(k\) inside of \(\mathbb{C}\)!
So we obviously need more precise statements in order to grasp its depth. The essence of the principle becomes much clearer if we first look at the -- precise -- baby version:
**Theorem 4.1** (Baby Lefschetz Principle).: _Let \(k\) be a field of characteristic \(0\) which is finitely generated over the field of rationals \(\mathbb{Q}\). Then there is an isomorphism of fields_
\[\gamma:\;k\;\mapsto\;\gamma(k)\]
_from \(k\) to a subfield \(\gamma(k)\) of the field of complex numbers \(\mathbb{C}\)._
This statement--although simple-- is extremely powerful once we start thinking about its consequences. Lefschetz observed that every equation over an algebraically closed field \(\ell\) of characteristic \(0\) is defined by a finite number of coefficients, which generate a field \(\widetilde{\ell}\) which is finitely generated over \(\mathbb{Q}\), and whence Theorem 4.1 applies. The idea is very deep: although we start in any algebraically closed field of characteristic \(0\), the typical problems which occur in the theory of algebraic varieties involve only finite data, and using Theorem 4.1 we obtain a principle which _transfers_ the problem to the complex numbers. Unfortunately, as we have seen, Lefschetz's initial version was not precise. Tarski came up with a solution in [20]:
**Theorem 4.2** (Minor Lefschetz Principle).: _The theory of algebraically closed fields of characteristic \(0\) admits quantifier elimination, and therefore all models are elementarily equivalent._
To be clear, we also provide the following equivalent formulation.
**Theorem 4.3** (Minor Lefschetz Principle, version 2).: _If an elementary sentence holds for one algebraically closed field of characteristic \(0\), then it holds for every algebraically closed field of characteristic \(0\)._
We recall the notion of _elementary sentence_ for the convenience of the reader. Let \(\ell\) be any field. An _atomic formula_ relative to \(\ell\) is an expression of the form \(f=0\), where \(f\) is a polynomial with coefficients in \(\ell\). By a _formula_ (again relative to \(\ell\)) we mean an expression built up in a finite number of steps from atomic formulae by means of conjunction, negation and quantifiers of the form "there exists an \(x\) such that," where \(x\) varies over an algebraically closed field \(L\) containing \(\ell\). An _elementary sentence_ (relative to \(\ell\)) then is a formula involving no free parameters.
Another very interesting variation of Lefschetz's principle is the following.
**Theorem 4.4** (Algebraic Lefschetz Principles).: _Let \(\Phi\) be a sentence in the language \(\mathcal{L}_{r}=\{0,1,+,-,\cdot\}\) for rings, where \(0,1\) are constants and \(+,-,\cdot\) are binary functions. The following are equivalent:_
* \(\Phi\) _is true over every algebraically closed field in characteristic_ \(0\)_;_
* \(\Phi\) _is true over some algebraically closed field in characteristic_ \(0\)_;_
* \(\Phi\) _is true over algebraically closed fields in characteristic_ \(p\neq 0\) _for arbitrarily large primes_ \(p\)_;_
* \(\Phi\) _is true over algebraically closed fields in characteristic_ \(p\neq 0\) _for sufficiently large primes_ \(p\)_;_
Unfortunately, although the Minor Lefschetz Principle is very general (and even still more general versions are known), not every statement carries over just like that. For example, the statement that the cardinality (number of rational points) of every variety is at most that of the continuum is true over \(\mathbb{C}\), but obviously this is not true over algebraically closed fields with greater cardinality than \(\mathbb{C}\).
### Algebraically closed fields in quantum theory
Even if we agree to work with algebraically closed fields as the base field to describe quantum theory in the Kobenhavn interpretation, why would we have to use the complex numbers? For every prime number \(p\) (and also for \(p=0\)) and every infinite cardinal number \(\kappa\), there is an algebraically closed field \(\ell\) which lives in characteristic \(p\) and for which \(|\ell|=\kappa\). And even if the field is not algebraically closed, there are a number of valuable quantum theories around which have much in common with classical quantum theory, but which also still behave differently. For instance, as William Wootters explained to me, quantum theory over the reals contains mysterious features which have not been explained to this day. If we write a prepared state \(|\psi\rangle\) relative to an orthonormal eigenbase as \((c_{1},c_{2},\ldots,c_{d})\), and each \(c_{k}\) as \(r_{k}e^{i\phi_{k}}\), then only the real vector \((r_{1},r_{2},\ldots,r_{d})\) contains the information about probabilities. Is there an underlying physical reason?
**Remark 4.5**.: Suppose we write each \(c_{k}\) as \(a_{k}+ib_{k}\) (\(a_{k},b_{k}\) real). Then all states which "probability project" on \((r_{1},r_{2},\ldots,r_{d})\) are precisely of the form \((a_{1},b_{1},\ldots,a_{d},b_{d})\) for which \(a_{k}^{2}+b_{k}^{2}=r_{k}^{2}\) for all \(k\) (while the \(r_{k}^{2}\) sum up to 1). So they are precisely the points of a so-called \(2d\)-dimensional Clifford torus.
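A minimal numerical illustration of this "probability projection" (a toy numpy sketch; the dimension and the random phases are arbitrary choices made here): two states with the same moduli \(r_{k}\) but different phases lie on the same torus and yield identical Born probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
r = rng.random(d)
r /= np.linalg.norm(r)                         # fixed moduli r_k of a normalized state
phi1, phi2 = rng.uniform(0.0, 2.0 * np.pi, (2, d))

psi1 = r * np.exp(1j * phi1)                   # c_k = r_k e^{i phi_k}
psi2 = r * np.exp(1j * phi2)                   # same r_k, different phases

# Both states satisfy a_k^2 + b_k^2 = r_k^2 and give the same probabilities r_k^2.
assert np.allclose(np.abs(psi1) ** 2, np.abs(psi2) ** 2)
print(np.abs(psi1) ** 2)                       # the probability vector (r_1^2, ..., r_d^2)
```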
In each of the approaches we have seen in subsection 3.7, either it is (sometimes implicitly) assumed that the reals are contained in \(\ell\), or a number of properties are assumed which in the end hold for a Kobenhavn theory over the complex numbers (but not necessarily characterize the field uniquely as the complex numbers). We propose a totally different approach. If we start from any given algebraically closed field \(\ell\)--say, in characteristic \(0\) to fix ideas--then maybe a reasoning similar to that of Lefschetz might enable us to transfer the entire quantum theory described in the language of the Kobenhavn interpretation over \(\ell\), to the Kobenhavn interpretation over \(\mathbb{C}\). So a sufficiently subtle Lefschetz theory in the spirit of the previous theorems might give us an answer.
**Question 4.6**.: _Can we develop a Lefschetz principle for quantum theory, so as to show that one can indeed consider the complex numbers as a suitable field of coordinates?_
An answer would largely settle the base field question in quantum theory. For instance, how much of complex quantum theory can be described in first order logic over \(\mathbb{C}\) (plus some appropriate induction principle)?
Currently we are developing an answer to this fundamental question.
## 5. Automorphisms of \(\mathbb{C}\) and codes
The most commonly used automorphism of \(\mathbb{C}\) in quantum theory is complex conjugation, but there are many others in all models of Zermelo-Fraenkel set theory plus AC. In fact, so many that the structure of \(\operatorname{Aut}(\mathbb{C})\) is very hard to understand. On the other hand, in models without AC, it is consistent to say that \(|\operatorname{Aut}(\mathbb{C})|=2\) -- that is, that standard complex conjugation is the only nontrivial automorphism of \(\mathbb{C}\).1 Strangely enough, the immense size and complexity of \(\operatorname{Aut}(\mathbb{C})\) (upon accepting AC) is virtually never used in quantum theory, while for instance in quantum coding theory it would be a powerful tool. But even if \(|\operatorname{Aut}(\mathbb{C})|=2\), nice things can happen. The projective semi-linear group \(\operatorname{\sf P\Gamma L}_{N}(\mathbb{C})\) acts on the (\((N-1)\)-dimensional) state space \(\operatorname{\sf PG}(N-1,\mathbb{C})\), and can select states based on the occurrence of fixed points; if \(|\operatorname{Aut}(\mathbb{C})|=2\), one can understand and control the action much better in this context. If we work in such a model, it is easy to show that for each automorphism \(\varphi\) of the state space (that is, each element of \(\operatorname{\sf P\Gamma L}_{N}(\mathbb{C})\)), we have that at least one of \(\varphi\) or \(\varphi^{2}\) is an element of the projective general linear group, and so has fixed points as the field \(\mathbb{C}\) is algebraically closed. This is a very interesting property in the context of selection processes, and hence also of quantum codes. We will come back to these codes in a future paper [22].
## 6. Eigenvalues, eigenvectors and probabilities
We start this section with a construction taken from Brunner et al. [4], of weird vector spaces upon not accepting AC.
### A particular example from Brunner et al.
Let \(S\) be a set. Then \(\ell_{2}(S)\) is defined as
\[\ell_{2}(S)\ =\ \{x\in\mathbb{C}^{S}\ |\ \parallel x\parallel_{2}<\infty\}, \tag{12}\]
where \(\parallel x\parallel_{2}=\sup_{E\subseteq S\text{ finite}}\sqrt{\sum_{s\in E}|x(s)|^{2}}\), and where we have identified \(x\in\mathbb{C}^{S}\) with a map \(x:S\mapsto\mathbb{C}\).
Now let \(\mathcal{R}:=\{P_{n}=\{a_{n},b_{n}\}\ |\ n\in\omega\}\) be a collection of Russell's socks (upon not accepting AC). (In fact, we assume a stronger version of Russell's socks, as in §1.1 of [4].) Let \(\Omega:=\cup_{n\in\omega}P_{n}\) be the set of all socks, and define
\[\mathcal{L}\ :=\ \{x\in\ell_{2}(\Omega)\ |\ (\forall n\in\omega)\ x(a_{n})=-x(b _{n})\}. \tag{13}\]
In [4] it is shown that \(\mathcal{L}\) is an irreflexive complex Hilbert space, so that an operator on \(\mathcal{L}\) cannot be equal to its adjoint and the usual Hilbert space formalism of quantum theory fails to work. Brunner, Svozil and Baaz argue in [4] that such Hilbert spaces have to be taken into account, through the following thought experiment.
### Identical particles
Before describing the thought experiment of [4], we recall some theory about identical particles.
We say that two particles are _identical_ if all their intrinsic properties such as mass, spin, charge, etc. are exactly the same. The configuration space of \(N\) identical particles is defined by
\[\mathcal{C}(d,N)\ :=\ \Big{(}\times_{N}\mathbb{R}^{d}\setminus\Delta\Big{)} \Big{/}\mathrm{Sym}(N). \tag{14}\]
Here, the particles live in \(\mathbb{R}^{d}\), \(\times_{N}\mathbb{R}^{d}\) denotes the cartesian product \(\underbrace{\mathbb{R}^{d}\times\cdots\times\mathbb{R}^{d}}_{N\text{ times}}\); \(\Delta\) is the subspace of points for which at least two "coordinates" (in \((\mathbb{R}^{d})^{N}\)) are the same (_mathematical explanation_: to remove singularities; _physical explanation_: identical particles cannot occupy the same "location" in space, in the sense that no projection on coordinate axes may coincide); and finally, \(\mathrm{Sym}(N)\) is the symmetric group on \(N\) letters (which has to be divided out since we cannot distinguish between the particles).
_Example._ Let \(d=1\) and \(N=2\) (two particles moving on a line). Then \(\mathcal{C}(1,2)\) is homeomorphic to the space \(\Big{(}\mathbb{R}\times\mathbb{R}\setminus\Delta\Big{)}\Big{/}\mathrm{Sym}(2)\), in which particles \((u,v)\) and \((v,u)\) are identified, and where \(\Delta\) is defined by the line \(u=v\). So we obtain the half-plane defined by \(u>v\).
Finally, a particle that follows Fermi-Dirac statistics is called a _fermion_; generally such particles have a half-odd integer spin.
**Thought experiment.** View \(\{a_{n},b_{n}\}\) as an assembly of identical noninteracting spin-\(\frac{1}{2}\) particles which obey the Fermi-Dirac statistics. Its Hilbert space \(\mathcal{H}_{n}\) is defined as
\[\mathcal{H}_{n}\ :=\ \Big{\langle}e_{1}(a_{n})\otimes e_{2}(b_{n})-e_{2}(a_{n}) \otimes e_{1}(b_{n})\Big{\rangle}, \tag{15}\]
and is isomorphic to \(\mathcal{L}_{n}=\{x\in\ell_{2}\Big{(}\{a_{n},b_{n}\}\Big{)}\ |\ x(a_{n})+x(b_{n})=0\}\). The family of all socks is viewed as the compound system of the distinguishable assemblies. Their Fock space is
\[\mathcal{F}\ =\ \oplus_{N\in\omega}\Big{(}\otimes_{n\in N}\mathcal{H}_{n} \Big{)}. \tag{16}\]
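A small numerical illustration (our own numpy sketch; the explicit \(2\times 2\) basis and swap matrix are choices made here, not notation from [4]) of the antisymmetric assembly state of (15):

```python
import numpy as np

e1, e2 = np.eye(2)                                  # single-particle basis states
psi = np.kron(e1, e2) - np.kron(e2, e1)             # e_1(a_n) x e_2(b_n) - e_2(a_n) x e_1(b_n)

# The swap operator exchanges the two tensor factors (the two socks of the pair).
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
assert np.array_equal(SWAP @ psi, -psi)             # antisymmetric: Fermi-Dirac statistics

# Under the isomorphism with L_n, psi corresponds to the map x with
# x(a_n) = 1 and x(b_n) = -1, so that x(a_n) + x(b_n) = 0.
x = {"a_n": 1.0, "b_n": -1.0}
assert x["a_n"] + x["b_n"] == 0.0
```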
The spaces \(\mathcal{F}\) and \(\mathcal{L}=\oplus_{n}\mathcal{L}_{n}\) are counterexamples to several assertions in Hilbert space theory [4]:
* neither space admits an infinite orthonormal (Schauder, cf. the next subsection) eigenbase, so there is no way to choose a mode of observation (in the sense of Bohr's complementarity interpretation); there is no Hamel base either;
* as the vector space duals of \(\mathcal{F}\) and \(\mathcal{L}\) are different from \(\mathcal{F}\) and \(\mathcal{L}\), there is no notion of self-adjoint operator in both \(\mathcal{F}\) and \(\mathcal{L}\).
### Schauder bases
In infinite-dimensional Hilbert spaces, quantum theorists usually work with Schauder bases instead of Hamel bases. Hamel bases are the usual bases considered in linear algebra, but Schauder bases are rather different. We say that \(\mathcal{B}\) is a _Schauder basis_ of the infinite-dimensional Hilbert space \(\mathcal{H}\), if each vector can be represented as a tuple in \(\mathcal{B}\) with at most a countable number of nonzero coefficients. (Each vector can be obtained as a convergent series of vectors generated by \(\mathcal{B}\) seen as a Hamel base of a subspace of \(\mathcal{H}\), so that the subspace generated by \(\mathcal{B}\) as a Hamel base is dense in \(\mathcal{H}\).) We say that a Hilbert space is _separable_ if it contains a countable Schauder basis. It can be shown that all infinite-dimensional separable Hilbert spaces are isometrically isomorphic to \(\ell^{2}\). Note also that by the Baire category theorem, one can show that the Hamel dimension of a complex Hilbert space is always finite or uncountable! Here, "Hilbert" is important. Suppose \((e_{i})\) is an orthonormal basis of \(\mathcal{H}\) and let \(\{e_{j(n)}\}_{n\in\mathbb{N}}\) be a subset. Then \(\sum_{n=1}^{\infty}\frac{1}{n}e_{j(n)}\in\mathcal{H}\), but it is not expressible relative to \((e_{i})\) as a Hamel basis.
In spaces \(\mathcal{H}\) with a Schauder base \(\mathcal{B}\), Born's probability formalism works perfectly. Note that if \(\mathcal{H}\) is an infinite-dimensional Hilbert space, the dimension refers to its dimension with respect to a Hamel basis, and that the "Schauder dimension" can be different from the usual Hamel dimension.
If one considers an observable \(A\) in an infinite-dimensional Hilbert space, the orthonormal eigenbase of \(A\) is also considered to be a Schauder base.
### Projector operators
A Hermitian operator \(\mathbb{P}\) of a Hilbert space \(\mathcal{H}\) is called a _projector (operator)_ if \(\mathbb{P}^{2}=\mathbb{P}\). (It is a dichotomy operator since it only allows two possible outcomes.) Now consider a Hilbert space \(\mathcal{H}\), and let \(\mathbb{P}\) be the observable which projects any vector onto the subspace \(A\) of \(\mathcal{H}\). Let \(\mathcal{B}\) be a Schauder eigenbase of \(\mathbb{P}\) (granted that it exists). Then \(\mathcal{H}=A\oplus B\), where \(A\) is generated by the eigenvectors of \(\mathcal{B}\) lying in \(A\), and \(B\) is generated by the eigenvectors in \(\mathcal{B}\setminus A\). Let a quantum system be prepared in the state \(|\Psi\rangle\). Upon projecting \(|\Psi\rangle\) onto \(A\), respectively \(B\), we obtain vectors \(|\Psi\rangle_{A}\), respectively \(|\Psi\rangle_{B}\), and we can write
\[|\Psi\rangle\ =\ |\Psi\rangle_{A}\ +\ |\Psi\rangle_{B}. \tag{17}\]
Now expand \(|\Psi\rangle_{A}\), respectively \(|\Psi\rangle_{B}\), in the Schauder base \(\mathcal{B}_{A}\) of \(A\), respectively \(\mathcal{B}_{B}\) of \(B\), induced by \(\mathcal{B}\) to obtain \(|\Psi\rangle_{A}=\sum_{|\Psi_{i}\rangle\in\mathcal{B}_{A}}a_{i}|\Psi_{i}\rangle\) and \(|\Psi\rangle_{B}=\sum_{|\Psi_{i}\rangle\in\mathcal{B}_{B}}b_{i}|\Psi_{i}\rangle\). Then the probability \(P_{A}\) to measure the eigenvalue \(\lambda=1\) ("YES") and the probability of measuring \(\lambda=0\) ("NO") are given by
\[P_{A}\ :=\ \sum_{|\Psi_{i}\rangle\in\mathcal{B}_{A}}|a_{i}|^{2},\ \ \ P_{B}\ :=\ \sum_{|\Psi_{i}\rangle\in\mathcal{B}_{B}}|b_{i}|^{2}. \tag{18}\]
Note that if \(\mathbb{P}_{i}\) is the projector operator onto the eigenvector \(|\Psi_{i}\rangle\in\mathcal{B}_{A}\), then \(\mathbb{P}\) can be easily described as
\[\mathbb{P}\ =\ \sum_{|\Psi_{i}\rangle\in\mathcal{B}_{A}}|\Psi_{i}\rangle \langle\Psi_{i}|\ =\ \sum_{|\Psi_{i}\rangle\in\mathcal{B}_{A}}\mathbb{P}_{i}. \tag{19}\]
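In a finite-dimensional toy case this formalism can be checked directly. The sketch below (numpy; the dimension \(d=4\) and the choice of \(A\) as the span of the first two basis vectors are arbitrary choices made here) verifies \(\mathbb{P}^{2}=\mathbb{P}\), computes the probabilities of (18), and confirms that they agree with (19) and sum to one.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
basis = np.eye(d)                            # orthonormal eigenbase B of H = C^4
B_A, B_B = basis[:2], basis[2:]              # eigenvectors spanning A and B

# Projector onto A as in (19): P = sum over B_A of |Psi_i><Psi_i|.
P = sum(np.outer(v, v.conj()) for v in B_A)
assert np.allclose(P @ P, P)                 # P^2 = P
assert np.allclose(P, P.conj().T)            # P is Hermitian

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)                   # the prepared state |Psi>

# Born probabilities of (18).
P_A = sum(abs(np.vdot(v, psi)) ** 2 for v in B_A)
P_B = sum(abs(np.vdot(v, psi)) ** 2 for v in B_B)
assert np.isclose(P_A + P_B, 1.0)
assert np.isclose(P_A, np.linalg.norm(P @ psi) ** 2)   # probability of the "YES" outcome
print(f"P(lambda=1) = {P_A:.3f}, P(lambda=0) = {P_B:.3f}")
```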
### Double slit experiments: a variation in quantum theory without AC
Young's double slit experiment does not need an introduction: a light beam is fired in a straight line through a panel with two disjoint rectangular slits onto a screen. Classical Physics would expect a pattern which corresponds to the size and shape of the slits, but that is not what happens. Instead, an interference pattern occurs. This even happens when the experiment involves single particles: through a double slit apparatus, particles are sent one at a time and the interference pattern emerges eventually as well. Although the particles are measured as a single pulse in a single position, a probability wave describes the probability of observing the particle at a specific point \((x,y)\) in the plane of the screen. Born's rule gives us the probability distribution for finding an electron at specific places of the screen. Once an electron hits the screen and is detected, the wave function collapses and the outcome can be described through the eigenvalues of an observable matrix.
Although the following thought experiment is not directly related to the double slit experiment, it still shares some of its (weird) characteristics.
So let \(\mathcal{H}\) be a (generalized) Hilbert space over some field \(k\) in a model of Zermelo-Fraenkel without AC, which allows (Schauder) bases \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) of different cardinalities. This field may not be the complex numbers, but in the context of modal quantum theories, we should still consider this possibility. In any case we know that \(\mathcal{H}\) exists by [12]. Note that \(\mathcal{H}\) necessarily is infinite-dimensional (in the sense that it is not finite-dimensional). Now let \(\tilde{\mathcal{H}}\) be a second Hilbert space over \(k\), and consider \(\widehat{\mathcal{H}}:=\mathcal{H}\oplus\widetilde{\mathcal{H}}\). Let \(\mathbb{P}_{\mathcal{H}}\) be the projector operator of \(\widehat{\mathcal{H}}\) onto \(\mathcal{H}\); this is an observable with eigenvalues \(0\) and \(1\), and acts on \(\mathcal{H}\) as the identity. Now consider a state \(|\Psi\rangle\) in \(\widehat{\mathcal{H}}\). After measuring \(\mathbb{P}_{\mathcal{H}}\), \(|\Psi\rangle\) collapses into a state in \(\mathcal{H}\) which is a superposition.
One question now is:
**Question 6.1**.: _What is the probability that a state is contained in \(\mathcal{H}\)? As \(\mathcal{H}\) has no well-defined dimension, we have no idea how "big" \(\mathcal{H}\) is with respect to \(\widehat{\mathcal{H}}\)._
But we can do better.
**Thought experiment: AC black box measurements.** Suppose a quantum system is prepared in a state \(|\Psi\rangle\) in the (possibly generalized) Hilbert space \(\mathcal{H}\) over the field \(k\). We assume that upon not accepting the Axiom of Choice, \(\mathcal{H}\) admits Schauder bases of different cardinalities. Now we are going to perform a measurement corresponding to the Hermitian operator \(A\). Before making the measurement, we do not know whether the Axiom of Choice holds true or not; this can only -- in principle -- be observed after the measurement has been made. (There is a black box which returns the value "\(0\)" or "\(1\).") After the measurement, we obtain an eigenvalue \(\lambda\) with probability \(p_{\lambda}\). If AC holds in the underlying mathematical theory, this outcome is standard. If AC would not hold, \(\lambda\) could have been measured with a different probability (by the formalism below, for instance, \(p_{\lambda}\neq 0\) could be infinitely small).
### New Born formalism and higher Schauder bases
Let \(S\) be an uncountable set, and suppose \(\mathcal{C}:=\{r_{s}\mid s\in S\}\) is a family of nonnegative real numbers. Suppose \(\sum_{s\in S}r_{s}=R\) as a limit is also real. Then in any model of Zermelo-Fraenkel with AC, it can be shown that only at most a countable number of elements in \(\mathcal{C}\) are different from zero. The proof uses the fact that a countable union of finite sets is also countable -- a fact which fails miserably when AC is not around. So it makes sense to define _higher Schauder bases_ as follows. We say that \(\mathcal{B}\) is a _higher Schauder basis_ of the infinite-dimensional Hilbert space \(\mathcal{H}\) if all vectors of \(\mathcal{H}\) can be represented by a unique \(|\mathcal{B}|\)-tuple in which an uncountable number of nonzero entries is allowed, and so that such vectors also occur. It follows that \(\mathcal{B}\) is not countable. Consider a state \(|\Psi\rangle=(a_{b})_{b\in\mathcal{B}}\). For Born's formalism to work, we want
\[\sum_{b\in\mathcal{B}}|a_{b}|^{2}\;=\;1. \tag{20}\]
Upon accepting the Axiom of Choice it is easy to show that the latter expression implies that only a countable number of entries in \((a_{b})_{b\in\mathcal{B}}\) is nonzero, and so in \(\mathcal{H}\) we can still consider those states which make sense in
the quantum-theoretical setting. If we work in Zermelo-Fraenkel set theory without choice however, there are models in which (20) is true, with an uncountable number of entries nonzero [15]! In this formalism, we only consider state vectors \(|\Psi\rangle=\left(a_{b}\right)_{b\in\mathcal{B}}\) for which \(\sum_{b\in\mathcal{B}}|a_{b}|^{2}\in\mathbb{R}\) (before normalization). (If one would work over an algebraically closed field \(\ell\) of characteristic \(0\) which is different from \(\mathbb{C}\), then we ask that \(\sum_{b\in\mathcal{B}}|a_{b}|^{2}\) is contained in the real-closed subfield which is defined relative to the choice of complex conjugation.)
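For reference, the standard argument behind the countability claim (valid in Zermelo-Fraenkel with AC; the sets \(S_{n}\) below are our own notation) runs as follows. For each \(n\in\mathbb{N}\) put
\[S_{n}\ :=\ \{b\in\mathcal{B}\mid|a_{b}|^{2}>1/n\},\qquad\text{so that}\qquad|S_{n}|\ \leq\ n\sum_{b\in\mathcal{B}}|a_{b}|^{2}\ =\ n,\]
hence each \(S_{n}\) is finite, while
\[\{b\in\mathcal{B}\mid a_{b}\neq 0\}\ =\ \bigcup_{n\in\mathbb{N}}S_{n}.\]
Concluding that this countable union of finite sets is countable is exactly the step that invokes (a weak form of) AC, and it is the step that can fail in Zermelo-Fraenkel set theory alone.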
**Question 6.2**.: _Does this formalism cover quantum-theoretical situations which are not possible over classical Schauder bases?_
We suspect that the answer to this question, and also to the next question, is "yes."
**Question 6.3**.: _Does it make sense to introduce nonstandard probabilities (e. g., infinitely small probabilities) in this context?_
### Blass's Theorem and Ineffective observables
In [7], Blass showed in Zermelo-Fraenkel set theory that if every vector space has a basis, then AC also holds. He starts with a set \(\mathcal{X}\) of disjoint nonempty sets \(X_{i}\) (\(i\in I\)), picks an arbitrary field \(k\), and constructs the field extension \(k(X)\), where \(X=\cup_{X_{i}\in\mathcal{X}}X_{i}\). Then he constructs a particular subfield \(K\) of \(k(X)\), and interprets \(k(X)\) as a vector space over \(K\). In \(k(X)\) he considers the \(K\)-subspace \(V\) spanned by \(X\). Assuming that \(V\) has a basis, he then constructs a choice function on \(\mathcal{X}\). Unfortunately, it does not follow that AC is deducible from the statement that _every_\(\ell\)-vector space has a basis, with \(\ell\) a specified, fixed field. On the other hand, obviously \(k\), \(k(X)\) and \(K\) live in the same characteristic, so we have the following stronger statement.
**Theorem 6.4** (Blass's Theorem, version 2).: _Let \(p\) be any prime, or \(0\). Then in Zermelo-Fraenkel set theory, AC is deducible from the assertion that every vector space over a field of characteristic \(p\) has a basis._
For quantum theory the importance is obvious: in the classical Kobenhavn formalism, observables (Hermitian operators) collapse into one of the vectors of an orthogonal eigenbase, and the corresponding eigenvalue is the resulting observed value. Upon not accepting AC and working in models of ZF-theory without AC, it could very well happen that some Hilbert spaces \(\mathcal{H}\) over \(k=\mathbb{C}\) (or some other field) do not have a base, so that the formalism of Hermitian observables fails, or needs to be adapted at the very least. In any case, by Theorem 6.4, we may suppose that the characteristic of \(k\) is \(0\). For instance, let \(\mathcal{B}\) be the set of orthonormal eigenvectors of some given observable \(B\) (which cannot be maximal by assumption), and let \(\langle\mathcal{B}\rangle\) be the subspace of \(\mathcal{H}\) generated by \(\mathcal{B}\) over \(k\) (either as a Hamel base or as a Schauder base). By taking a state \(|\Psi\rangle\) outside of \(\langle\mathcal{B}\rangle\), we cannot perform a measurement using the state \(|\Psi\rangle\).
**Remark 6.5** (Quantum Lefschetz Principle B).: Is \(\mathbb{C}\) a candidate? If not, in view of first order logic we may switch to another algebraically closed field for which Blass's Theorem does work (as such ending up with ineffective observables).
2301.02080 | Semantic match: Debugging feature attribution methods in XAI for
healthcare | The recent spike in certified Artificial Intelligence (AI) tools for
healthcare has renewed the debate around adoption of this technology. One
thread of such debate concerns Explainable AI (XAI) and its promise to render
AI devices more transparent and trustworthy. A few voices active in the medical
AI space have expressed concerns on the reliability of Explainable AI
techniques and especially feature attribution methods, questioning their use
and inclusion in guidelines and standards. Despite valid concerns, we argue
that existing criticism on the viability of post-hoc local explainability
methods throws away the baby with the bathwater by generalizing a problem that
is specific to image data. We begin by characterizing the problem as a lack of
semantic match between explanations and human understanding. To understand when
feature importance can be used reliably, we introduce a distinction between
feature importance of low- and high-level features. We argue that for data
types where low-level features come endowed with a clear semantics, such as
tabular data like Electronic Health Records (EHRs), semantic match can be
obtained, and thus feature attribution methods can still be employed in a
meaningful and useful way. Finally, we sketch a procedure to test whether
semantic match has been achieved. | Giovanni Cinà, Tabea E. Röber, Rob Goedhart, Ş. İlker Birbil | 2023-01-05T14:26:55 | http://arxiv.org/abs/2301.02080v3 | # Semantic match: Debugging feature attribution methods in XAI for healthcare
###### Abstract
The recent spike in certified Artificial Intelligence (AI) tools for healthcare has renewed the debate around adoption of this technology. One thread of such debate concerns Explainable AI (XAI) and its promise to render AI devices more transparent and trustworthy. A few voices active in the medical AI space have expressed concerns on the reliability of Explainable AI techniques and especially feature attribution methods, questioning their use and inclusion in guidelines and standards. Despite valid concerns, we argue that existing criticism on the viability of post-hoc local explainability methods throws away the baby with the bathwater by generalizing a problem that is specific to image data. We begin by characterizing the problem as a lack of semantic match between explanations and human understanding. To understand when feature importance can be used reliably, we introduce a distinction between feature importance of low- and high-level features. We argue that for data types where low-level features come endowed with a clear semantics, such as tabular data like Electronic Health Records (EHRs), semantic match can be obtained, and thus feature attribution methods can still be employed in a meaningful and useful way. Finally, we sketch a procedure to test whether semantic match has been achieved.
Explainable AI Feature attribution Medical AI
## Introduction
Along with the blooming of Artificial Intelligence (AI) and the accompanying increase in model complexity, there has been a surge of interest in explainable AI (XAI), namely AI that allows humans to understand its inner workings [_e.g._, Doshi-Velez and Kim, 2017, Linardatos et al., 2020, Gilpin et al., 2018, Biran and Cotton, 2017, Doran et al., 2017]. This interest is particularly keen in safety-critical domains such as healthcare, where it is perceived that XAI can engender trust, help monitor bias, and facilitate AI development [_e.g._, Doshi-Velez and Kim, 2017, Lipton, 2018]. XAI has already been shown to improve clinicians' ability to diagnose and assess prognoses of diseases, as well as to assist with planning and resource allocation. For example, Letham et al. [2015] developed a stroke prediction model that matches the performance of the most accurate Machine Learning (ML) algorithms, while remaining as interpretable as conventional scoring methods used in clinical practice.
There is a wide variety of techniques for XAI, and many categorizations have been proposed in the literature. Techniques can roughly be grouped into local vs. global, and model-specific vs. model-agnostic approaches. Local methods aim to explain model outputs for individual samples, while global methods focus on making models more explainable at an aggregate level. Model-specific methods are tailored to explain a specific type of model, while model-agnostic methods
can be applied to a range of different models. Many of the well-known techniques yield post-hoc explanations, meaning that they generate explanations for already trained, so-called 'black-box', models. Alternatively, there exist approaches that are considered inherently explainable, also known as white-box (or glass-box) models, such as decision trees and linear regression models. A detailed taxonomy is beyond the scope of this paper; for an extensive overview we refer the reader to Carvalho et al. (2019); Molnar (2022); and Ras et al. (2022).
A group of XAI techniques that has enjoyed substantial fame is the set of feature attribution methods, namely techniques that assign to each feature a measure of how much it contributes to the calculation of the outcome according to the model. Popular techniques produce such explanations in a local fashion, and among the most famous are SHAP Lundberg and Lee (2017), LIME Ribeiro et al. (2016), saliency maps Selvaraju et al. (2017), and integrated gradients Sundararajan et al. (2017). To give a sense of the success of these techniques, it suffices to say that some of them are now integrated as default explainability tools in widespread cloud machine learning services, while in areas such as natural language processing, researchers are starting to use such methods as the gold standard against which they judge the quality of other explanations Mohankumar et al. (2020).
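To make this concrete for tabular data, the sketch below computes a simple perturbation-based local attribution for a single instance of a synthetic dataset. It only illustrates the kind of output feature attribution methods produce; it is neither the SHAP nor the LIME algorithm, and the dataset, model, and function names are placeholders chosen here.

```python
# Generic perturbation-style local attribution for a tabular model (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def local_attribution(model, X_background, x):
    """Score each feature by how much replacing it with background values
    shifts the predicted probability for the single instance x."""
    base = model.predict_proba(x[None, :])[0, 1]
    scores = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        X_pert = np.tile(x, (X_background.shape[0], 1))
        X_pert[:, j] = X_background[:, j]          # perturb feature j only
        scores[j] = base - model.predict_proba(X_pert)[:, 1].mean()
    return scores

x0 = X[0]
print(local_attribution(model, X, x0))             # one importance score per feature
```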
Despite the enthusiasm and a growing community of researchers devoting energy to XAI, there is currently no consensus on the reliability of XAI techniques, and several researchers have cast serious doubts on whether XAI solutions should be incorporated into guidelines and standards, or even deployed at all (_e.g._, Lipton, 2018; McCoy et al., 2021; Ghassemi et al., 2021; Neely et al., 2022).
Undoubtedly, there is an inherent tension between the desire for machines performing better than humans, and the requirement for machines to provide human-understandable explanations. Together with their super-human capacities -such as the ability to juggle dozens or hundreds of factors- the sub-symbolic character of statistical learning techniques contributes to rendering many ML models opaque to humans. Simply put: humans cannot'read off' what a neural network has learned just by looking at the matrix of weights.
The internal or 'latent' representation of a machine, namely the way in which the machine encoded the patterns found in the data, is what we would like to explain to the human, together with the way in which this internal representation interacts with a single data point to generate an output. The solution presented by feature attribution methods is also sub-symbolic, in the sense that an explanation also takes the form of a vector or matrix of values, and here lies the source of the problem. As different scholars pointed out Ghassemi et al. (2021); Rudin (2019), the assignment of meaning to such explanations can be tricky, sometimes lulling the humans into a false sense of understanding while the explanations are in fact flawed or misleading. This issue is particularly sensitive when such explainability techniques are used in high-stakes environments such as healthcare.
Does this mean that feature attribution methods are altogether unreliable? In this article we argue that existing criticism on the viability of post-hoc local explainability methods throws away the baby with the bathwater by generalizing a problem that is specific to unstructured data such as images. We characterize the issue with feature attribution methods as a lack of semantic match between explanations and human understanding. To understand when semantic match can be obtained reliably, we introduce a distinction between feature importance of low- and high-level features. We argue that in the case of data types for which low-level features come endowed with clear semantics, such as tabular data like EHRs, semantic match is enabled and thus feature attribution methods can still be employed in a meaningful and useful way. As for high-level features, we present a conceptual procedure to test whether semantic match is achieved, paving the way for future operationalization of this test.
Figure 1: An example of visual explanation by heatmap for a medical image. Image courtesy of (Rajpurkar et al., 2017).
## The criticism on local feature attribution methods
In this section we expand on the problem that feature attribution methods are confronted with. Applied to images, explanations generated by feature attribution methods present themselves as heat maps or colored overlays, indicating the contribution of specific pixels to the prediction of the model on the input at hand. Intuitively, highlighted regions comprise pixels which were considered 'important' by the model (see Figure 1). _Prima facie_, one might be led to believe that this allows humans to check that the model is paying attention to the right elements of the image, therefore increasing our trust in the specific prediction and in the model more generally.
However, when a certain area of an image is highlighted we simply do not know if what we recognize, say the shape of a kidney or the beak of a bird, is the same as what the AI recognizes. As several researchers have pointed out, what look like plausible explanations at first may turn out to be ungrounded or spurious explanations when subjected to closer scrutiny. Figure 2 displays an example of this mishap: very similar explanations are offered for the prediction on two very different classes, invalidating our intuition that the model has learned to recognize dogs by their facial features.
This opens the door for different kinds of biases, both on the side of the user, who can project their beliefs erroneously onto the machine, and on the side of the XAI, which might provide feature attribution information that is inconsistent or misleading.
The root of the problem, we maintain, lies in the inability of humans to attribute meaning to a sub-symbolic encoding of information. What is needed is a systematic way to translate sub-symbolic representations to human-understandable ones, in a way that respects how we assign meaning. This is represented schematically in Figure 3. We call such a commuting diagram a **semantic match**. An ideal explanation would provide content of the bottom-left node and a suitable translation in order to obtain a semantic match: the explanation of a certain sub-symbolic representation encoded by the machine should have (i) a clearly defined meaning and (ii) an unambiguous way to translate to human terms with the same (or very similar) meaning. Semantic match, which is a crude simplification of complex cognitive and linguistic phenomena, offers a handy conceptual tool to debug explanations.
Indeed, just as we are unable to understand a latent representation, we are unable to relate to a heatmap unless it comes paired with a well-defined meaning assignment and translation, as illustrated in the diagram. Overlaying the heatmap onto an image encourages us to use our visual intuition as translation, but alas this last step is an ill-advised one, since it gives us the illusion of a semantic match while in fact the explanations do not conform to expectations. Figure 4 exemplifies a failed semantic match. In this scenario a heatmap is generated as an explanation for the behavior of the model on an image. From the image, it may appear that the machine 'recognizes' a certain feature (a dog's head), and therefore classifies the image as a specific class ('dog'). However, another spurious input generates the same heatmap, and hence the translation is invalid and unreliable. To replicate the uncanny observation of Figure 2, one can construct another scenario where states of the world are pairs of input images and predictions. These are cases of semantic mismatch: the states of the world in which the heatmap is produced do not correspond to the states where one would plausibly use the concept of dog's head to classify a dog image.
This criticism seems to undercut the utility of such local feature attribution methods: if they are potentially misleading and bring no clear added value, should they be used at all?
Figure 2: While the explanation of the first classification seems intuitive, this impression is put into question when a similar explanation is offered for an absurd classification. Figure reproduced from Rudin (2019) with author's permission.
## Distinguishing low- and high-level features
In order to understand the utility of such methods, one needs to step back and consider the characteristics of the examples that give rise to the problem, and assess whether they generalize. Unsurprisingly, most of the debate revolves around images, given the astounding success of deep learning in the realm of computer vision. Images are a prime example of what is sometimes called 'unstructured data', namely data that does not come equipped with additional structure and is akin to raw sensory data. Note that this is a misnomer, since such data typically is endowed with a rich structure (in the mathematical sense) deriving from the notion of distance defined between pixels or between parts of an ordered sequence. We will nonetheless adopt this terminology to facilitate readability, since it is ubiquitous.
One of the defining characteristics of images -and unstructured data more generally- is that a single feature has no intrinsic meaning: a certain color value of a pixel means nothing by itself. Only a pattern of values for a group of features can be attributed meaning, _e.g._ a cloud of pixels with the shape and color of a dog's head. In other words, we recognize properties or entities in an image by matching activation patterns with our visual intuitions. Following a widespread habit in the ML community, we will refer to patterns in groups of features as 'high-level features', while single features, i.e. the entries of the vector representing an input, will be dubbed 'low-level features' by contrast. With this terminology, we can succinctly state that in images only high-level features can be attributed meaning while low-level features cannot.
On the contrary, in structured data each feature is usually conferred specific meaning; _e.g._ in EHR data a certain value might contain the information pertaining to the blood pressure of a patient at a certain time. When such low-level features are not defined with a specific protocol - such as the ones for measurements of vital signs in clinical settings - they refer to standard concepts in natural language. We can, therefore, easily interpret what these low-level features mean regardless of the values of the other features. For instance, a value of 180 in the feature corresponding to systolic blood pressure gives us a piece of information that we can understand and process, even without knowing other features of a patient.
To be sure, there are also high-level features in the case of structured data. Continuing with the example of EHRs, a high-level feature could be for example a phenotype which is not encoded explicitly as a feature but instead depends on the combination of existing features, say glucose, BMI, and so on. Hence, the crucial difference between structured and unstructured data is that the former has clear meaning for the low-level features, while on the high-level features the two data types behave similarly.
We maintain that the usage of a post-hoc feature attribution method hinges on the application of the corresponding semantic match diagram. Without a rigorously defined meaning and translation, a heatmap remains a matrix of values without rhyme or reason. We cannot extract information from such a matrix just as we cannot fathom what is encoded in the latent space of a neural network by eyeballing the value of a point in said space. So far the attribution of meaning to a heatmap-explanation has been carried out in an informal and intuitive way, which as previous scholars argued is prone to error and a false source of confidence.
Figure 3: The diagram representing a semantic match. Figure 4: A scenario depicting a semantic mismatch. The images eliciting a certain explanation do not comply with our recollection of the explanation.
## Saving feature importance for low-level features
The distinction between low- and high-level features helps untangle the cases in which semantic match works out of the box from those in which it fails. A post-hoc local feature attribution method can be used appropriately on low-level features when they have a predefined translation, as is the case for medical tabular data. For example, suppose a risk prediction model is trained on EHR data. When a patient is presented to the model, the model might provide a risk score associated with feature importance values for the patient's lab values. Suppose systolic blood pressure is marked as the most important factor increasing the risk of a specific patient. In this case, there is no ambiguity: this level of importance is attributed to systolic blood pressure and nothing else, and any user with sufficient training knows what systolic blood pressure is. Note that this has nothing to do with how the importance is calculated (_e.g._ if it takes into account feature interactions) or if it is a sensible level of importance; all we are stating is that the user can unambiguously understand what the importance is attributed to. In other words, the importance is attributed to something which semantically matches our concept of 'systolic blood pressure'.
The user of such a model can then engage with the feature importance and assess whether it is sensible, while addressing questions like "Is this level of risk reasonable given a high importance of systolic blood pressure?" Such a question may be answered in the positive, if the clinician believes the value of systolic blood pressure is concerning, or in the negative, if for example, the clinician knows that the value of systolic blood pressure is a byproduct of medications she can control. An example of such an approach is displayed in Figure 5, where the top features contributing to clinical risk are displayed along with a color code indicating if they are risk increasing or decreasing.
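A small sketch of how such per-feature attributions might be presented to a clinician is given below; the feature names and attribution values are invented for illustration and do not come from the system shown in Figure 5.

```python
# Sketch: presenting local attributions for named low-level features (values are invented).
# Positive contributions are read as risk-increasing, negative ones as risk-decreasing.
attributions = {
    "systolic_bp": +0.21,
    "creatinine":  +0.08,
    "age":         +0.05,
    "spo2":        -0.12,
    "heart_rate":  -0.03,
}

for name, value in sorted(attributions.items(), key=lambda kv: -abs(kv[1])):
    direction = "risk-increasing" if value > 0 else "risk-decreasing"
    print(f"{name:12s} {value:+.2f}  ({direction})")
```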
Crucially, semantic match allows the user of an explanation to engage with it, and to decide whether it is agreeable or not. In contrast, when the same risk prediction is trained on image data, this reasoning falls apart. What feature attribution can highlight are just high-level features, _e.g._ the portion of an image with a kidney shape. But lacking a semantic match, clinicians cannot trust whether what they see in the image matches the machine's internal representation.
## Debugging feature importance for high-level features
Does this mean that we have to give up feature attribution for high-level features? In fact, what we need is a procedure to test whether semantic match is present or if it is violated. In what follows we sketch such a procedure at a conceptual level.
Suppose an ML model \(f\) has been trained on labeled data of the shape \((x,y)\), where \(x\) represents an input vector and \(y\) an output. We denote a local feature attribution method with \(M\) and say that \(M(f,x,y)=e\) is the explanation furnished by \(M\) for the model \(f\) on such a pair of input and output. We are interested in testing whether we have semantic match with the explanation \(e\), or in other words, whether what we 'see' in the explanation is indeed what the explanation is capturing.
At an intuitive level, what we want to ascertain is that an explanation matches our translation of it. This is encoded in the commutation of the semantic diagram, namely that all the data points giving rise to a certain explanation are also complying to our translation hypothesis, which we will indicate with \(\theta\), and vice versa. We are thus looking at answering two questions:
* Is every input-output pair that generates an explanation similar to \(e\) also a case in which \(\theta\) holds?
* Is every input-output pair in which \(\theta\) is satisfied going to generate an explanation that is similar to \(e\)?
To exemplify the procedure, suppose one has developed an algorithm to classify pictures of animals. Presented with a picture classified as a dog and an explanation \(e\), one formulates the translation hypothesis \(\theta\) that the explanation highlights the tail of the animal. To answer the aforementioned questions, one would need to take images that generate explanations similar to \(e\) and check whether the explanations of those samples highlight tails. For the second question, one would need to take images of animals with tails and consider how similar they are to \(e\).
Note that in this procedure one may also select data points whose label is different from \(y\). This may not be an issue, since the high-level feature used for the prediction of \(x\) as \(y\) may in theory also be used on another data point and another label (and hence be highlighted in the explanation). In the animal classification example, one may collect images of a panther in which a tail is displayed, and rule that the explanation correctly highlights a tail in those images.
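The sketch below translates the two questions into pseudo-procedural form; the similarity measure, the threshold, and the predicate encoding \(\theta\) are placeholders that would still need to be operationalized.

```python
# Schematic sketch of the semantic-match test; the similarity measure, threshold,
# and the predicate theta_holds are placeholders to be operationalized.
import numpy as np

def explanation_similarity(e1, e2):
    # Placeholder: cosine similarity between two flattened attribution maps.
    e1, e2 = np.ravel(e1), np.ravel(e2)
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2) + 1e-12))

def semantic_match_test(M, f, dataset, e_ref, theta_holds, threshold=0.8):
    """Check both directions of the semantic-match diagram for a reference explanation e_ref."""
    q1, q2 = [], []
    for x, y in dataset:
        e = M(f, x, y)
        similar = explanation_similarity(e, e_ref) >= threshold
        if similar:            # Q1: similar explanation => theta should hold
            q1.append(theta_holds(x, y))
        if theta_holds(x, y):  # Q2: theta holds => explanation should be similar
            q2.append(similar)
    return (np.mean(q1) if q1 else None), (np.mean(q2) if q2 else None)
```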
What is interesting about this procedure is that it breaks a theoretical issue down into more concrete steps. A full formalization of the procedure and the demonstration of its implementation in practice are left for future work.
Figure 5: (a) An example of the front-end of Pacmed Critical, a medical device currently used in the Amsterdam University Medical Center for the prediction of adverse outcomes after ICU discharge. (b) A close-up of the right side of the interface, displaying feature attribution for low-level features. Courtesy of Pacmed; for background on the prediction model see Thoral et al. (2021), de Hond et al. (2022). The displayed patient data is synthetic.
## Discussion
In this article we have reviewed the reliability problem of feature attribution methods and have proposed to diagnose the issue by means of the semantic match diagram. We have argued that without clear meaning and translation, semantic match cannot be obtained for high-level feature importance. A corollary of this statement is that current methods for feature attribution may not be appropriate for unstructured data unless semantic match is verified.
In contrast, structured data may still benefit from feature attribution, since for this data type low-level features have an in-built semantic match with human concepts. This allows humans to engage with explanations and exercise that all-important oversight that is required to spot failure modes of ML applications. Recent fairness concerns in the realms of human resources, healthcare, and law enforcement underscore the need for continued human control over semi-automated decisions.
When it comes to the limits of this analysis, it is important to remark that, even in the presence of semantic match, explanations can still fail to deliver on their promise. If an explanation is not faithful to the model, that is, it does not consistently represent the machine's behavior, the user may not leverage the explanation to agree or disagree with the machine's output; recent work on this point shows a concerning divergence in explanations Neely et al. (2022). It should also be added that not all data types neatly fall into the categories of structured and unstructured data. Textual data, for example, contains both tokens that have intrinsic meaning and tokens that have only contextual meaning, and therefore sits somewhat in the middle. Time series data is in a similar spot, exhibiting sequences whose single values may have defined meaning but whose evolution over time is harder to grasp. In these cases explanatory methods should be employed with caution and awareness of potential semantic mismatch. In the same vein, it should be recognized that, before the advent of neural networks that could process raw data, there was a long tradition of image processing by hand-crafting meaningful image features (see for example Street et al. (1993)). In essence, such approaches built features with an intrinsic semantic match by encoding expert knowledge with feature engineering, turning unstructured data into structured data. These approaches may be rediscovered as ways to attribute meaning to explanations in consultation with domain experts. Finally, semantic match is a user-dependent concept: while users with the relevant background may correctly interpret an explanation, others may not, and in healthcare settings it is crucial to clearly identify user groups and provide proper training.
Besides operationalizing the procedure we introduced in the previous section, future work aiming to obviate problems deriving from failed semantic match could direct attention to generating explanations that comply with human categories by design, possibly even with special categories employed by the targeted user. One direction of future research explores the possibility of capturing information on machine behavior in a symbolic manner by means of hybrid models Sarker et al. (2021). Another option might contemplate combining visual explanation with image segmentation, to fix the semantics of entities used in the explanation. Finally, more elaborate explanations may require access to ontologies regulating the relationship between entities (as in _e.g._ Lecue and Wu (2018), Liartis et al. (2021)), which should themselves match ontologies used - more or less implicitly - by humans.
When it comes to medical AI, we want clinicians to be able to interact with machines in a meaningful way, namely with the right tools to adjudicate when the machine's advice is worth following. Framing the problem in terms of semantic match helps shed light on the issue that explanations are still too ambiguous and too far from clinicians' reasoning. We should be building explanations in the clinician's language, rather than asking clinicians to rely on intuition or to learn to think like a computer scientist.
| The recent surge of medical Artificial Intelligence (AI) tools has rekindled the debate about the adoption of this technology. One theme of that debate is explainable AI (XAI) and its promise to make AI devices more transparent and trustworthy. Several researchers active in medical AI have voiced concerns about the reliability of explainable AI, and of feature attribution methods in particular, questioning their use and their inclusion in guidelines and standards. Despite these legitimate concerns, we argue that existing criticism of the viability of post-hoc local explainability methods throws the baby out with the bathwater by generalizing a problem that is specific to image data. We characterize the problem as a lack of semantic match between explanations and human understanding. To understand when feature importance can be interpreted reliably, we introduce a distinction between low- and high-level features
2307.06168 | A comparative study of different approaches for heavy quark energy loss,
based on the latest experimental data | This paper presents a comparative analysis of three distinct methods used to
calculate the collisional energy loss of heavy quarks in Quark-Gluon Plasma.
The study focuses on the calculation of the nuclear suppression factor of charm
quarks in Pb-Pb collisions at $\sqrt{S_{NN}} = 5.02$ TeV. All three models are
examined using the same numerical evolution based on the well-known
Fokker-Planck equation by considering critical phenomena like a non-equilibrium
state at the onset of heavy ion collision. The outcomes of each approach are
compared with the latest data from ALICE and ATLAS experiments spanning from
2018 to 2022. This study aims to compare the degree of agreement between each
approach and recently obtained experimental data, in the intermediate and high
$P_T$ regions. | Marjan Rahimi Nezhad, Fatemeh Taghavi Shahri, Sharareh Mehrabi Pari, Kurosh Javidan | 2023-07-12T13:50:10 | http://arxiv.org/abs/2307.06168v3 | A comparative study of different approaches for heavy quark energy loss, based on the latest experimental data
###### Abstract
This paper presents a comparative analysis of three distinct methods used to calculate the collisional energy loss of heavy quarks in Quark-Gluon Plasma. The study focuses on the calculation of the nuclear suppression factor of charm quarks in Pb-Pb collisions at \(\sqrt{S_{NN}}=5.02\) TeV. All three models are examined using the same numerical evolution based on the well-known Fokker-Planck equation by considering critical phenomena like a non-equilibrium state at the onset of heavy ion collision. The outcomes of each approach are compared with the latest data from ALICE and ATLAS experiments spanning from 2018 to 2022. This study aims to compare the degree of agreement between each approach and recently obtained experimental data, in the intermediate and high \(P_{T}\) regions.
###### Contents
* I Introduction
* II Methods
* II.1 System evolution
* II.2 Energy loss approaches
* II.3 Nuclear modification factor
* III Results and Discussion
* IV Summary and Conclusions
## I Introduction
Interaction between quarks and gluons is described by Quantum Chromo Dynamics (QCD) theory, in which quarks act as constituents of hadrons, and gluons act as quantum bosons [1]. Two prominent features of QCD are asymptotic freedom and confinement. Asymptotic freedom means that the interaction between quarks is weak when they are close to each other, but as the quarks move away from each other, this force becomes stronger and increases, which is called confinement. As a result of the asymptotic freedom, when matter reaches extremely high temperatures and/or densities, the strong interaction weakens, and quarks and gluons are freed from each other. In other words, at high temperatures and/or densities hadrons will melt and the degrees of freedom of matter will be quarks and gluons. In this state, a fluid called Quark-Gluon Plasma (QGP) is formed [2; 3; 4].
Studies indicate that matter was in the plasma phase until a few microseconds after the Big Bang and then a phase transition occurred. Therefore, studying the transition of the quark phase to the hadron phase and investigating the quark-gluon plasma properties are very important in understanding the evolution of the early universe. Furthermore, matter in heavy cosmic bodies like neutron stars may exist in this state due to its extremely dense composition. Hence, this is a significant issue in Astrophysics as well [5].
QGP was first theorized in the 1970s, but experiments at the Relativistic Heavy Ion Collider (RHIC) and later at the Large Hadron Collider (LHC) confirmed the existence of QGP in the late 1990s. These experiments involve colliding heavy ions, such as gold or lead, at very high energies, creating a hot and dense environment that allows quarks and gluons to move freely and interact strongly with each other [6; 7]. This highly excited state of matter whose main constituents are light quarks and gluons displays properties similar to a nearly perfect fluid and can be successfully described by hydrodynamic models [8; 9; 10]. Two different conditions are needed to describe the QGP by hydrodynamics; the first one is that the system should have a local thermal equilibrium for a sufficient period of time, and the second one is that the scale of interactions (or mean free distance of particles) should be much smaller than the dimensions of the system. Hydrodynamics
could be considered a macroscopic effective field theory that has the ability to investigate the evolution of non-equilibrium systems.
Heavy quarks such as b and c quarks play an essential role in studying the properties of Quark-Gluon Plasma created in heavy-ion collisions [11; 12]. These quarks are formed in the early stages of the collision. Due to their large mass, they reach equilibrium with the environment later and may even leave the plasma without reaching equilibrium. So they are good witnesses of the whole space-time history of the deconfined medium. In order to study the evolution of heavy quarks in QGP, a possible approach is to examine the time evolution of their distribution functions in the transverse momentum plane. It is reasonable to assume that these heavy particles in a non-equilibrium state undergo Brownian motion within a heat bath that is in thermodynamic equilibrium. The Fokker-Planck equation can be used to obtain the temporal evolution of the transverse momentum spectrum of heavy quarks. When heavy quarks pass through the plasma, they interact with the QGP constituents and lose energy through radiation and elastic collisions. The energy loss of these quarks provides information about the properties of the QGP, such as its temperature and viscosity. In addition, the study of heavy quark energy loss is important for understanding the mechanism of jet quenching, which is the suppression of high-energy partons in the QGP.
In this article, we are going to investigate the evolution of the charm quark distribution function in Pb-Pb collision at \(\sqrt{S_{NN}}=5.02\) TeV. In our evolution process, we consider different approaches for collisional energy loss, along with radiation energy loss. Eventually, by calculating the nuclear modification factor, \(R_{AA}\), we are able to compare theoretical results with the most recent experimental data from LHC, in order to determine which method of energy dissipation is most compatible with new experimental data. This paper is organized as follows: In section (II), we review the Fokker-Planck equation and the evolution of the QGP system. We also introduce different methods of energy loss that we have examined. Section (III) is focused on calculating the nuclear suppression factor and presenting our theoretical results. Our \(R_{AA}\) results for each energy loss model are compared with new data from ATLAS and ALICE. Finally, the conclusion is given in Section (IV).
## II Methods
In this section, we describe the details of our modeling framework. To begin with, it would be helpful to review the stages of quark-gluon plasma formation. Heavy-ion collisions pass through various stages from collision to hadronization. The collision initially produces a fireball of
quarks and gluons known as quark-gluon plasma. After a while, the system quickly reaches local thermodynamic equilibrium, and high-energy partons lose energy through passing the plasma. As the system continues to expand and cool, all interactions stop and the system reaches the freeze-out temperature (\(T_{f}\)). In this state, dynamic information remains constant and hadrons are formed.
### System evolution
To study the QGP system, various models could be used to calculate the time evolution of dynamic parameters such as temperature and viscosity. Here we consider the time dependence of temperature as follows [13; 14]:
\[T(\tau)=T_{0}\left(\frac{\tau_{0}}{\tau}\right)^{\frac{1}{3}}\left[1+\frac{2}{ 3\tau_{0}T_{0}}\frac{\eta}{s}\left(1-\left(\frac{\tau_{0}}{\tau}\right)^{\frac {2}{3}}\right)\right] \tag{1}\]
where \(T_{0}\) and \(\tau_{0}\) are the initial temperature and proper time and \(\frac{\eta}{s}\) is the viscosity to entropy ratio which has been taken from [15; 16; 17; 18].
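A direct transcription of Eq. (1) is sketched below; the numerical values are placeholders (not the initial conditions used in this work), and consistent natural units are assumed for \(T_{0}\) and \(\tau_{0}\).

```python
# Sketch of Eq. (1): Bjorken-like cooling with a first-order viscous correction.
# T0, tau0, and eta/s below are placeholders; consistent (natural) units are assumed.
def temperature(tau, T0=0.5, tau0=0.33, eta_over_s=0.08):
    ideal = T0 * (tau0 / tau) ** (1.0 / 3.0)
    correction = 1.0 + (2.0 / (3.0 * tau0 * T0)) * eta_over_s * (1.0 - (tau0 / tau) ** (2.0 / 3.0))
    return ideal * correction
```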
Also using a temperature-dependent function for the running coupling \(\alpha_{s}(T)\) is essential because the temperature is a critical scale that controls the QCD coupling in the QGP system [19]:
\[\alpha_{s}(T)=\frac{6\pi}{(33-2N_{f})\ln\left(\frac{19T}{\Lambda_{MS}}\right)} \tag{2}\]
where we assume \(N_{f}=3\) as the number of active flavors in the QGP and the QCD cut-off parameter has been taken as \(\Lambda_{MS}\)= 80 MeV.
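Eq. (2) translates directly into a one-line function, sketched here with the parameter values quoted in the text (\(N_{f}=3\), \(\Lambda_{MS}=80\) MeV); the temperature argument is assumed to be given in GeV.

```python
# Sketch of Eq. (2): temperature-dependent running coupling; T and lambda_ms in GeV.
import math

def alpha_s(T, n_f=3, lambda_ms=0.080):
    return 6.0 * math.pi / ((33.0 - 2.0 * n_f) * math.log(19.0 * T / lambda_ms))
```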
The heavy quark evolution in the QGP system can be described by two different approaches: the Langevin transport equation and the Fokker-Planck equation. In this article, we will employ the Fokker-Planck equation which is a simplified form of the Boltzmann equation. The Fokker-Planck equation provides a suitable framework for investigating the temporal evolution of heavy quarks [20]. This equation was first introduced by Fokker and Planck to explain the Brownian motion of particles in a fluid. According to this equation, the temporal evolution of the distribution function of heavy quarks is given by:
\[\frac{\partial}{\partial t}f(p,t)=-\frac{\partial}{\partial p}\left[A(p)f(p,t) \right]+\frac{\partial^{2}}{\partial p^{2}}\left[D(p)f(p,t)\right]. \tag{3}\]
To solve this equation, we require three input parameters:
The initial distribution function of heavy quarks (\(f_{in}(p,t)\)), drag coefficient (\(A(p)\)), and diffusion coefficient (\(D(p)\)).
To derive an analytical solution for the Fokker-Planck equation, we assume that the drag and diffusion coefficients are momentum-independent. This assumption is reasonable since the time dependence of these coefficients arises from temperature fluctuations, which are time-dependent. The drag and diffusion coefficients are determined by the non-equilibrium energy dissipation of particles in a thermal environment. The energy dissipation of heavy quarks in a plasma environment occurs through two main processes: (1) collisions with other particles and (2) gluon bremsstrahlung or radiation due to interactions of heavy quarks with other quarks, anti-quarks, and gluons present in the thermal bath.
Therefore, the drag coefficient can be obtained using the following relation [21]:
\[A(p)=-\frac{1}{p}\frac{dE}{dL} \tag{4}\]
while we consider energy loss in both cases:
\[\frac{dE}{dL}=(\frac{dE}{dL})_{coll}+k(\frac{dE}{dL})_{rad} \tag{5}\]
The value of \(k\) is uncertain and needs to be determined through an optimization process. This value shows the impact of the radiation term on the results.
The diffusion coefficient can be determined using Einstein's relation when there is a weak coupling between the heavy quark and the thermal bath [21, 22, 23]:
\[D(p)=TA(p)E \tag{6}\]
\(T\) represents the temperature of the thermal bath and \(E\) represents the energy of heavy quarks.
Note that the drag coefficient carries information about the dynamics of heavy quarks collisions with the medium and is expected to be determined by the properties of the thermal bath. Therefore, the most critical point for finding the time evolution of the HQ distribution function is calculating the drag force acting on the HQ or the corresponding rate of energy loss per unit distance of the HQ path in QGP. One should use a gauge invariant field theory that does not have any infrared divergence to properly account for thermal effects and obtain accurate outcomes for these values. By calculating energy loss, we have all parameters to solve the FP equation and study HQ evolution from the onset of plasma formation to reaching the critical temperature and hadronization.
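With momentum-independent coefficients, Eq. (3) can be advanced with a simple explicit finite-difference scheme; the sketch below is one such minimal discretization, with grid, time step, coefficient values, and initial spectrum chosen purely for illustration.

```python
# Minimal explicit finite-difference update for Eq. (3) with momentum-independent A and D.
# Grid, time step, coefficients, and initial spectrum are placeholders for illustration only.
import numpy as np

def fp_step(f, p, A, D, dt):
    """One explicit Euler step of df/dt = -d/dp [A f] + d^2/dp^2 [D f]."""
    dp = p[1] - p[0]
    drift = -np.gradient(A * f, dp)                       # -d/dp [A f]
    diffusion = np.gradient(np.gradient(D * f, dp), dp)   # d^2/dp^2 [D f]
    f_new = f + dt * (drift + diffusion)
    f_new[f_new < 0.0] = 0.0                              # crude positivity guard
    return f_new

p = np.linspace(0.1, 20.0, 400)    # transverse momentum grid
f = np.exp(-p)                     # placeholder initial spectrum
for _ in range(1000):
    f = fp_step(f, p, A=0.05, D=0.02, dt=1e-3)
```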
### Energy loss approaches
The asymptotic freedom of QCD implies that, for a quark-gluon plasma at a sufficiently high temperature, the rate of energy loss \(dE/dx\) can be calculated using perturbation theory based on the running coupling constant \(\alpha_{s}(T)\). Unfortunately, it is not possible to compute \(dE/dx\) directly by evaluating the tree-level Feynman diagrams for scattering off of thermal quarks and gluons in the plasma. There are different divergences due to the long-range interactions mediated by the gluon. Indeed, gluon exchange diagrams give rise to logarithmically infrared divergent integrals over the momentum transfer \(q\) of the gluon.
In this study, we compare three different approaches to calculate collisional energy loss for heavy quarks in the QGP. Each of these approaches has addressed the divergence problem in its own way. The Fokker-Planck equation is employed to investigate each approach to obtain the evolution of HQ distribution functions from the time of plasma equilibration to the hadronization. The \(R_{AA}\) plot is utilized to compare the degree of agreement of each approach with the latest experimental data.
The first approach for collisional energy loss (Model A) has been calculated by Bjorken [24]. It is indeed the first calculation of the heavy quark energy loss due to QGP-HQ interaction. He calculated the energy loss of a massless quark due to elastic scattering off of the QGP constituents by averaging the cross section multiplied by the mean energy transfer over the thermal distribution. Infrared divergences were cut off by hand at a reasonable scale.
The second approach (Model B), proposed by Thoma and Gyulassy [25], combines techniques of plasma physics with high-temperature QCD [26] in order to calculate collisional energy loss. Through this method, \(dE/dx\) is computed using the induced chromoelectric field in the wake of a high-energy quark. That induced field is related to the longitudinal and transverse dielectric functions, which, in turn, can be expressed in terms of the gluon self-energy. An advantage of this approach is its ability to automatically regulate infrared singularities through the Debye mass.
The last approach studied in this research (Model C) is proposed by Braaten and Thoma [27], which includes calculating the energy loss of a quark with energy E in two different limits: \(E\ll\frac{M^{2}}{T}\:\text{and}\:E\gg\frac{M^{2}}{T}\). In this method, soft and hard contributions to the energy loss are calculated separately and added together. This approach utilizes the hard-thermal loop (HTL) framework [19; 27].
The radiative energy loss of a quark has been calculated using the proposed model in Ref. [28]. This formalism has been constructed by considering the reaction operator formalism (DGLV) and
employing the generalized dead cone approach [29; 30]. The DGLV approach [31; 32; 33] relies on expanding the quark energy loss based on the number of scatterings encountered by the quark as it moves through the medium. The single hard scattering limit considers only the leading order term. See the appendix for more details.
### Nuclear modification factor
Quark-gluon plasma formation cannot be directly observed in the laboratory because the formed matter quickly cools down and has a very short lifetime (on the order of \(10^{-23}\) seconds). What is observed and recorded by detectors are only photons, leptons, and stable final hadrons. Therefore, we need measurable quantities that are dependent on the characteristics of the initial stages of the system to obtain information about the early stages of plasma formation.
Here, we introduce one of the most important signals of plasma formation which is the nuclear suppression factor (\(R_{AA}\)). This quantity represents the ratio of the number of electrons produced from semi-leptonic decay of mesons per unit rapidity and transverse momentum in nuclear-nuclear collisions to the same value in the proton-proton collisions [34; 35; 36]:
\[R_{AA}(p_{T})=\frac{\left(\frac{dN^{e}}{dP_{T}^{2}dy}\right)^{A-A}}{N_{coll} \times\left(\frac{dN^{e}}{dP_{T}^{2}dy}\right)^{p-p}} \tag{7}\]
The term "\(N_{coll}\)" in the denominator represents the number of nucleon-nucleon collisions in nucleus-nucleus collisions and can be estimated via Glauber model calculations [37].
The nuclear modification factor quantifies the amount of energy lost during nucleus collisions due to heavy quarks' transportation in the partonic medium. When there is no creation of the quark-gluon plasma, the nuclear modification factor is equal to one, which signifies the non-existence of a novel medium. However, if the value of this factor is less than one, it reveals the interaction between high-energy jets and the thermal environment formed during the collision of energetic nuclei.
It should be noted that what is observed in detectors are electrons produced from the decay of D and B mesons. Therefore, to obtain more precise results, the corresponding hadron distribution functions can be derived by utilizing a suitable fragmentation function on the output of the FP equation. However, the application of the fragmentation function to the final result has a negligible impact and can be ignored [38; 39]. In this work, the partonic distribution functions are directly divided by each other to calculate the nuclear suppression factor. Introducing a scale factor \(N\), this becomes: \(R_{AA}=\frac{1}{N}\frac{f_{T}(P_{T})^{A-A}}{f_{i}(P_{T})^{P-P}}\)
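In practice this amounts to a point-wise ratio of the evolved charm spectrum to the initial one (standing in for the p-p baseline), scaled by \(N\); a trivial sketch with placeholder spectra is:

```python
# Sketch of the practical R_AA estimate: ratio of final to initial p_T spectra with scale factor N.
import numpy as np

def nuclear_modification_factor(f_final, f_initial, N=1.0):
    return f_final / (N * f_initial)

p_t = np.linspace(1.0, 20.0, 50)
f_initial = np.exp(-0.5 * p_t)   # placeholder initial charm spectrum
f_final = np.exp(-0.6 * p_t)     # placeholder spectrum after the FP evolution
r_aa = nuclear_modification_factor(f_final, f_initial)
```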
## III Results and discussion
In this section, we calculate the nuclear suppression factor of the charm quark in a Pb-Pb collision at a center-of-mass energy of 5.02 TeV. The parton distribution functions required to perform this calculation are evaluated using the Fokker-Planck (FP) equation. The FP equation is solved numerically within third-order relativistic hydrodynamics [40] until the fireball cools down to its freeze-out temperature. The evolution of HQ distributions is calculated from the initial proper time \(\tau_{0}=0.33~{}fm/c\) to the thermal freeze-out \(T_{c}=155\) MeV for the LHC [41; 42]. The initial transverse momentum distribution of the c quark is obtained from [43; 18; 44].
To compute the drag and diffusion coefficients, besides considering radiation energy loss, we employ the three distinct approaches which were previously introduced for evaluating collisional energy loss. The outcomes of each approach are compared with the most recent data from ALICE and ATLAS in 2018, 2021 and 2022 [45; 46; 47; 48]. Our purpose is to determine which approach is most consistent with experimental results.
The final result is optimized to fit on experimental data by adjusting initial parameters such as k and N, and minimizing the unweighted Chi-squared value:
\[\chi^{2}=\sum_{i}\frac{\left(R_{AA}^{exp}(P_{T}(i))-R_{AA}^{th}(P_{T}(i))\right)^{2}}{\sigma_{i}^{2}} \tag{8}\]
\(R_{AA}^{exp}\) and \(R_{AA}^{th}\) are experimental and theoretical predictions for suppression factors, respectively, and \(\sigma\) is related to experimental error.
We use the Minuit package [49] for our parameter optimization process, which is a powerful tool that enables us to achieve high accuracy in minimizing chi-squared values.
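A minimal sketch of such a fit with the iminuit Python interface to Minuit is shown below; the data arrays and the stand-in model function r_theory are placeholders, since the actual prediction comes from the FP evolution.

```python
# Sketch of the chi-squared minimization over the free parameters N and k with iminuit.
# Data arrays and the stand-in model r_theory are placeholders.
import numpy as np
from iminuit import Minuit

p_t = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
r_exp = np.array([0.45, 0.40, 0.42, 0.50, 0.55])   # placeholder measurements
sigma = np.array([0.05, 0.04, 0.04, 0.06, 0.07])   # placeholder uncertainties

def r_theory(pt, N, k):
    # Placeholder for the model prediction obtained from the FP evolution.
    return (1.0 / N) * (0.4 + 0.01 * k * pt)

def chi2(N, k):
    return np.sum((r_exp - r_theory(p_t, N, k)) ** 2 / sigma ** 2)

chi2.errordef = 1.0   # least-squares convention
m = Minuit(chi2, N=1.0, k=1.0)
m.migrad()
print(m.values["N"], m.values["k"], m.fval)
```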
We present our results for the nuclear modification factor in Fig.(1) to Fig.(4) over a wide range of \(P_{T}\) for all the three energy loss approaches. These results are compared with ATLAS 2022 data and ALICE data in 2018, 2021, and 2022. In addition, we repeat our fitting procedure for the momentum interval \(2<P_{T}<12\), as shown in Fig.(5). This range, known as the intermediate \(P_{T}\) range, is of particular interest in the study of heavy-ion collisions as it encompasses a transition region between low \(P_{T}\) and high \(P_{T}\). This range of \(P_{T}\) enables us to explore the interaction between high-energy scattering events and the collective properties of the QGP.
In Tables (1) to (4), we summarize the obtained results for the free parameters, \(N\) and \(k\), as well as the \(\chi^{2}\) values for each model. Note that the charm quark distribution has been computed for \(P_{T}>1~{}GeV\). Therefore, our results in the \(P_{T}<1~{}GeV\) region are invalid. To avoid errors resulting from unreliable regions, we calculate the \(\chi^{2}\) value for data with \(P_{T}>1.5~{}GeV\).
As can be seen from the charts, in general, all three energy loss approaches agree well with experimental data from ALICE and ATLAS. Although they describe intermediate \(P_{T}\) values better than large \(P_{T}\) and there is a slight deviation from the experimental data observed for high \(P_{T}\). That's because all models face divergence at the limits of integration and different approaches have used different methods to resolve this issue. It is anticipated that the introduction of novel methods capable of effectively resolving the divergence associated with the upper and lower limits of interaction would yield improved outcomes for both small and large \(P_{T}\) values.
Figure (1) corresponds to the ALICE 2018 dataset, and Table (1) presents the fitting outcomes of these data for the three collisional energy loss models. As the \(\chi^{2}\) values show, for the 2018 dataset, there is no significant difference observed among the three approaches. This lack of differentiation can be attributed to the limited number of data points available in 2018, coupled with their relatively high error margin (averaging 0.2). Hence, these data lack sufficient resolution to show the difference between the models and are not a good criterion for drawing conclusions.

Figure 1: Nuclear modification factor of charm quark at 5.02 TeV compared with ALICE 2018 data [45].
In the ALICE 2021 and 2022 experiments, the number of data points has increased, and their errors have decreased. Consequently, these datasets provide a more reliable reference for investigating and comparing different energy loss models. In the ALICE 2021 experiment, the measured data is related to muons decaying from D mesons (\(Pb+Pb\to Muon+X\)), while in the ALICE 2022 experiment, the interaction is related to D meson production (\(Pb+Pb\to D0+X\)). Consequently, the ALICE 2021 data has a smaller error compared to the ALICE 2022 data (the average error in the ALICE 2022 data is approximately 0.1, whereas the average error in the ALICE 2021 data is roughly half of that value). However, the ALICE 2022 data cover a broader range of \(P_{T}\) values. Therefore, they are more suitable for globally comparing energy loss models. Conversely, for the intermediate range of \(P_{T}\) values, the 2021 data are a better reference due to their smaller error.
Figure (2) presents the results obtained for the ALICE 2021 data. As it is evident from Table (2) and the \(\chi^{2}\) values, model C performs better than other models in the range of intermediate \(P_{T}\) values. This indicates that the HTL mechanism effectively describes the intermediate \(P_{T}\) range, suggesting that model C is a more suitable choice for this region. Furthermore, model B outperforms model A as expected.
Figure 2: Nuclear modification factor of charm quark at 5.02 TeV compared with ALICE 2021 data [46].
Figure (3) illustrates the obtained results for the ALICE 2022 data. The ALICE 2022 data, covering a wider range of \(P_{T}\) values, are more suitable for global analysis and comparison of different approaches. According to Table (3), for all \(P_{T}\) ranges, Model B performs slightly better than other models. However, in general, there is not a significant difference between the three energy loss models in the global analysis. We need more data points with higher \(P_{T}\) values to make a definite conclusion. The fitting outcomes obtained from the ALICE 2022 dataset do not indicate a global advantage of model C. Note that in our analysis, we have considered the range of \(2<P_{T}<12\) as intermediate \(P_{T}\)s, while in model C [27] the boundary between soft momentum and hard momentum occurs around \(P_{T}=20\) GeV. Consequently, we should note that in the review of model C, most of the available data are evaluated using the HTL mechanism. To obtain better results for a global investigation of model C, the boundary for these two regions may need to be modified at higher center-of-mass energies.
Figure 3: Nuclear modification factor of charm quark at 5.02 TeV compared with ALICE 2022 data [47].

Finally, the ATLAS 2022 data were analyzed for comparison with the ALICE data; see Figure (4). However, the limited number of ATLAS data points and their higher error compared to the ALICE 2021 data make it difficult to differentiate between the three methods of energy loss. Nonetheless, as seen in Table (4), in this case as well, all the models provide better descriptions of the data in the intermediate \(P_{T}\) range than over the whole \(P_{T}\) region.
| **Approach** | Range of \(P_{T}\) | \(\chi^{2}\) | **N** | **k** |
| --- | --- | --- | --- | --- |
| Model A | All \(P_{T}\) | 0.86 | 0.74 | 1.2 |
| Model A | \(2<P_{T}<12\) | 0.32 | 0.75 | 1.01 |
| Model B | All \(P_{T}\) | 0.81 | 0.75 | 1 |
| Model B | \(2<P_{T}<12\) | 0.35 | 0.72 | 1 |
| Model C | All \(P_{T}\) | 0.82 | 0.72 | 1.4 |
| Model C | \(2<P_{T}<12\) | 0.40 | 0.50 | 2.3 |

Table 1: The best fitting results for different approaches for ALICE 2018 data
Figure 4: Nuclear modification factor of charm quark at 5.02 TeV compared with ATLAS 2022 data [48].
| **Approach** | Range of \(P_{T}\) | \(\chi^{2}\) | **N** | **k** |
| --- | --- | --- | --- | --- |
| Model A | All \(P_{T}\) | 2.2 | 1.15 | 1 |
| Model A | \(2<P_{T}<12\) | 2.5 | 1.08 | 1.1 |
| Model B | All \(P_{T}\) | 1.7 | 1.06 | 1 |
| Model B | \(2<P_{T}<12\) | 1.9 | 1.01 | 1.1 |
| Model C | All \(P_{T}\) | 0.67 | 1.16 | 1 |
| Model C | \(2<P_{T}<12\) | 0.76 | 1.15 | 1 |

Table 2: The best fitting results for different approaches for ALICE 2021 data
Figure 5: Nuclear modification factor of charm quark at 5.02 TeV fitted on experimental data for the intermediate range of \(P_{T}\).
| **Approach** | Range of \(P_{T}\) | \(\chi^{2}\) | **N** | **k** |
| --- | --- | --- | --- | --- |
| Model A | All \(P_{T}\) | 0.25 | 0.85 | 1 |
| Model A | \(2<P_{T}<12\) | 0.11 | 0.74 | 1.22 |
| Model B | All \(P_{T}\) | 0.24 | 0.78 | 1 |
| Model B | \(2<P_{T}<12\) | 0.12 | 0.70 | 1.19 |
| Model C | All \(P_{T}\) | 0.19 | 0.8 | 1.07 |
| Model C | \(2<P_{T}<12\) | 0.17 | 0.66 | 1.56 |

Table 4: The best fitting results for different approaches for ATLAS 2022 data
| **Approach** | Range of \(P_{T}\) | \(\chi^{2}\) | **N** | **k** |
| --- | --- | --- | --- | --- |
| Model A | All \(P_{T}\) | 1.06 | 1.18 | 1 |
| Model A | \(2<P_{T}<12\) | 0.73 | 0.55 | 3.3 |
| Model B | All \(P_{T}\) | 0.88 | 1.11 | 1 |
| Model B | \(2<P_{T}<12\) | 0.52 | 1 | 1 |
| Model C | All \(P_{T}\) | 0.97 | 0.98 | 1.63 |
| Model C | \(2<P_{T}<12\) | 0.24 | 1.2 | 1.08 |

Table 3: The best fitting results for different approaches for ALICE 2022 data
## IV Summary and Conclusions
Our study employed the Fokker-Planck equation to investigate the evolution of transverse momentum distribution functions of charm quarks produced in lead-lead collisions at 5.02 TeV. During the evolution, we considered three different approaches for collisional energy loss in order to compare these approaches with each other. Our purpose was to assess their compatibility with the latest experimental data. We have found that the recent data have sufficient precision to distinguish among these different models, particularly in the region of intermediate transverse momentum. Data published before 2018, in contrast, do not exhibit significant differences between the various energy loss models, mainly due to their limited quantity and large errors. In general, all three energy loss approaches describe the range of intermediate \(P_{T}\) better than the small or large \(P_{T}\) regions. Among these models, the model proposed by Braaten and Thoma [27] provides a better description of the intermediate \(P_{T}\) range than the other energy dissipation methods, indicating that the HTL mechanism is appropriate for intermediate \(P_{T}\) values. For the global analysis, we considered the ALICE 2022 data as a benchmark because they cover a wider region of \(P_{T}\). There was no significant difference between the \(\chi^{2}\) values for the different energy loss models. In fact, the major difference between these models lies in managing the convergence at high and low momenta and the method of field regularization. Therefore, by increasing the number of data points for small and large \(P_{T}\)s, and decreasing their error, we should be able to distinguish between these energy loss models more effectively in a global analysis. It is also expected that such an evolution will be employed in the near future for the b quark distribution function. At present, the available data on the b quark distribution function are insufficient to determine the validity of a proper model.
###### Acknowledgements.
Special thanks go to Dr. Samira Shoeibi for providing guidance in using the Minuit package. This work is supported by the Ferdowsi University of Mashhad under grant numbers 3/58322 (1401/07/23).
## Appendix
As mentioned before, in order to calculate the drag and diffusion coefficients in the Fokker-Planck equation, we must calculate the energy loss of heavy quarks while passing through the plasma and consider both modes of energy loss, through collisions and radiation. Here, we introduce three common approaches for calculating collisional energy loss, as well as one of the most common approaches for calculating radiative energy loss.
The first calculation of collisional energy loss (Model A in our article) was proposed by Bjorken [24] and reads:
\[-\frac{dE}{dx}=\frac{16\pi}{9}\alpha_{s}^{2}T^{2}\ln\left(\frac{4 pT}{k_{D}^{2}}\right)\left[\exp\left(-\frac{k_{D}}{T}\right)\left(1+\frac{k_{D} }{T}\right)\right] \tag{9}\]
Here \(p\) is the momentum of the particle, \(T\) is the temperature of the plasma, and \(k_{D}=\sqrt{3}m_{g}\). We also have:
\[m_{g}^{2}=\frac{4\pi\alpha_{s}T^{2}}{3}(1+\frac{n_{f}}{6}) \tag{10}\]
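A direct transcription of Eqs. (9)-(10) is sketched below; natural units with momenta and temperature in GeV are assumed, and the function returns the (positive) loss rate \(-dE/dx\).

```python
# Sketch of Model A (Bjorken), Eqs. (9)-(10); natural units, p and T in GeV.
import math

def gluon_mass_sq(alpha_s, T, n_f=3):
    return 4.0 * math.pi * alpha_s * T ** 2 / 3.0 * (1.0 + n_f / 6.0)

def dEdx_bjorken(p, T, alpha_s, n_f=3):
    """Returns -dE/dx for Model A."""
    k_D = math.sqrt(3.0 * gluon_mass_sq(alpha_s, T, n_f))
    return (16.0 * math.pi / 9.0) * alpha_s ** 2 * T ** 2 \
        * math.log(4.0 * p * T / k_D ** 2) \
        * math.exp(-k_D / T) * (1.0 + k_D / T)
```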
Another approach for calculating collision energy loss is presented by Thoma and Gyulassy [25]. Through this approach which is our second model, we have:
\[-\frac{dE}{dx}=\frac{16\pi}{9}\alpha_{s}^{2}T^{2}\ln\left(\frac{k_{\max}}{k_{ D}}\right)\frac{1}{\nu^{2}}\left[\nu+\frac{\left(\nu^{2}-1\right)}{2}\ln\left( \frac{1+\nu}{1-\nu}\right)\right] \tag{11}\]
In which:
\[k_{\max}\approx\frac{4pT}{\sqrt{p^{2}+M^{2}}-p+4T} \tag{12}\]
Model C for collisional energy loss [27] involved calculating the energy loss of a quark with energy E in two different limits: \(E\ll\frac{M^{2}}{T}\) and \(E\gg\frac{M^{2}}{T}\)
A QED calculation has been used to determine contributions to the energy loss for some parts of the calculation. In order to achieve this, "e" in the QED calculations will be replaced by the \(g_{s}=\frac{4}{3}\sqrt{4\pi\alpha_{s}}\) in the QCD calculations. The thermal photon mass \(m=eT/3\) is also replaced by the thermal gluon mass which is \(m_{g}=g_{s}T\sqrt{\frac{1+n_{f}/6}{3}}\)
So for the \(E\ll\frac{M^{2}}{T}\) limit we will have:
\[-\frac{dE}{dx}=\frac{8\pi\alpha_{s}^{2}T^{2}}{3}(1+\frac{n_{f}}{6})\left[ \frac{1}{v}-\frac{1-v^{2}}{2v^{2}}\ln\left(\frac{1+v}{1-v}\right)\right]\ln \left(\frac{2^{n_{f}/(6+n_{f})}B(v)ET}{m_{g}M}\right) \tag{13}\]
\(B(v)\) is a smooth function that starts at \(B(0)=0.604\), increases to \(B(0.88)=0.731\), and then decreases to \(B(1)=0.629\).
And in the \(E\gg\frac{M^{2}}{T}\) limit, we have:
\[-\frac{dE}{dx}=\frac{8\pi\alpha_{s}^{2}T^{2}}{3}(1+\frac{n_{f}}{6})\ln\left(2^ {\frac{n_{f}}{12+2n_{f}}}\,0.920\frac{\sqrt{ET}}{m_{g}}\right) \tag{14}\]
A smooth connection between the two limits is required in the intermediate region, \(E\approx M^{2}/T\). Calculations indicate that we can use the first equation up to \(E_{cross}=1.8M^{2}/T\) and then switch to the second one.
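The switching logic can be sketched as follows; low_energy_dEdx and high_energy_dEdx are assumed to implement Eqs. (13) and (14), respectively.

```python
# Sketch of the Model C evaluation with the crossover at E_cross = 1.8 M^2 / T.
# low_energy_dEdx and high_energy_dEdx are assumed to implement Eqs. (13) and (14).
def dEdx_braaten_thoma(E, M, T, low_energy_dEdx, high_energy_dEdx):
    E_cross = 1.8 * M ** 2 / T
    if E <= E_cross:
        return low_energy_dEdx(E, M, T)
    return high_energy_dEdx(E, M, T)
```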
Also, the radiative energy loss of a heavy quark in a QGP is calculated as follows:
\[-\frac{dE}{dx}=24\alpha_{s}^{3}\rho_{\rm QGP}\frac{1}{\mu_{g}}(1-\beta_{1}) \left(\sqrt{\frac{1}{1-\beta_{1}}\ln\frac{1}{\beta_{1}}}-1\right)F(\delta) \tag{15}\]
\[F(\delta)=2\delta-\frac{1}{2}\ln\left(\frac{1+\frac{M^{2}}{s}e^{2\delta}}{1+ \frac{M^{2}}{s}e^{-2\delta}}\right)-\left(\frac{\frac{M^{2}}{s}\sinh(2\delta) }{1+2\frac{M^{2}}{s}\cosh(2\delta)+\frac{M^{4}}{s^{2}}}\right) \tag{16}\]
\[\delta=\frac{1}{2}\ln\left[\frac{1}{1-\beta_{1}}\ln\left(\frac{1}{\beta_{1}} \right)\left(1+\sqrt{1-\frac{1-\beta_{1}}{\ln(1/\beta_{1})}}\right)^{2}\right] \tag{17}\]
\[C=\frac{3}{2}-\frac{M^{2}}{48E^{2}T^{2}\beta_{0}}\ln\left[\frac{M^{2}+6ET(1+ \beta_{0})}{M^{2}+6ET(1-\beta_{0})}\right] \tag{18}\]
for more details see [28]. | This paper presents a comparative analysis of three distinct methods used to calculate the collisional energy loss of heavy quarks. The study focuses on calculating the nuclear suppression factor of charm quarks in Pb-Pb collisions at $\sqrt{S_{NN}} = 5.02$ TeV. All three models are examined on the basis of the same numerical evolution, built on the Fokker-Planck equation, while taking into account critical phenomena such as the non-equilibrium state at the onset of the heavy-ion collision. The results are compared with the latest data from the ALICE and ATLAS experiments (2018 to 2022). The aim of this study is to compare the degree of agreement of each approach with the recently obtained experimental data in the intermediate and high $P_T$ regions.
2306.09172 | Action Sensitivity Learning for the Ego4D Episodic Memory Challenge 2023 | This report presents ReLER submission to two tracks in the Ego4D Episodic
Memory Benchmark in CVPR 2023, including Natural Language Queries and Moment
Queries. This solution inherits from our proposed Action Sensitivity Learning
framework (ASL) to better capture discrepant information of frames. Further, we
incorporate a series of stronger video features and fusion strategies. Our
method achieves an average mAP of 29.34, ranking 1st in Moment Queries
Challenge, and garners 19.79 mean R1, ranking 2nd in Natural Language Queries
Challenge. Our code will be released. | Jiayi Shao, Xiaohan Wang, Ruijie Quan, Yi Yang | 2023-06-15T14:50:17 | http://arxiv.org/abs/2306.09172v2 | # Action Sensitivity Learning for the Ego4D Episodic Memory Challenge 2023
###### Abstract
In this report, we present ReLER's submission to two tracks in the Ego4D Episodic Memory Benchmark@CVPR 2023, including Natural Language Queries and Moment Queries. This solution inherits from our proposed Action Sensitivity Learning framework (ASL) [15] to better capture discrepant information of frames. Further, we incorporate a series of stronger video features and fusion strategies. Our method achieves an average mAP of 29.34, ranking **1st** in the Moment Queries Challenge, and garners 19.79 mean R@1, ranking **2nd** in the Natural Language Queries Challenge. Our code will be released at [https://github.com/JonnyS1226/ego4d_asl](https://github.com/JonnyS1226/ego4d_asl).
## 1 Introduction
Given an untrimmed egocentric video, the Ego4D [5] Moment Queries (MQ) Task aims to directly recognize actions from pre-defined categories and locate these action instances temporally, while the Ego4D Natural Language Queries (NLQ) task additionally incorporates a text query and seeks to locate the video segment revealing the answer to the query. The two tasks are similar in that both require locating the start frame and end frame that form the time segment within the video. However, frames inside the segment are not equally valuable. We introduce action sensitivity to measure the importance of each frame. Some frames depicting the intrinsic content of actions are more sensitive to recognizing actions, while others depicting the onset and offset are more sensitive to locating instances. Moreover, some transitional or blurred frames should be paid less attention to. In short, precise MQ and NLQ solutions require capturing fine-grained information better.
To this end, we leverage an Action Sensitivity Learning framework [15] based on a multi-scale transformer localization model to assess the action sensitivity of each frame. Then we utilize the generated action sensitivity to recalibrate the training process, guiding the model to focus more on sensitive frames. Combined with stronger pre-extracted video features and fusion strategies, our method outperforms all entries in Moment Queries Challenge and achieves the best performance on the test sets. Meanwhile, our method also ranked 2-nd on the public leaderboard of Natural Language Queries Challenge.
## 2 Related Work
Ego4D Episodic Memory [5] is a large-scale egocentric video benchmark. The goal of Moment Queries Challenge is similar to the Temporal Action Localization task. Unlike egocentric action recognition [17], in this field, many previous works [9, 19] follow the two-stage paradigm, generating action proposals first and then classifying and localizing these proposals, while others [20, 7] are one-stage methods that detect actions directly. Our method falls into the latter category. Natural Language Queries Challenge can be regarded as a Moment Retrieval or Video Grounding task, where many methods [21, 22] explore elaborate video-language interactions inspired by methods [18, 23] for other multi-modal downstream tasks, or retrieve moments in a query-based manner [25, 6]. Besides, strong pre-trained video representations or features are of great benefit for action understanding, e.g., CLIP [13], VideoMAE [2], EgoVLP [8], and we explore some of these in the final submission.
## 3 Methodology
### Action Sensitivity Learning
Moment Queries Challenge requires directly outputting a set of predicted action instances with categories and temporal boundaries, given a video clip. Natural Language Queries Challenge requires, given a video clip and a language query, outputting a temporally contiguous set of frames that answer this query. Our solutions to these two tracks both involve a novel action sensitivity learning framework [15]. The motivation behind it is that different frames have different importance to localization and classification. We introduce action sensitivity to measure the importance of each frame to the sub-tasks (i.e. localization and classification). We utilize learnable class-aware Gaussian weights to model action sensitivity (\(p^{cls}\) for classification and \(p^{loc}\) for localization). Then we apply it as loss weights of each frame
to recalibrate the training process. More details can be found in [15]. For Natural Language Queries Challenge, we extend our proposed ASL framework to a multimodal version. The frameworks for the two tracks are shown in Figure 1. We now introduce our specific pipelines for Moment Queries Challenge and Natural Language Queries Challenge.
### Track1: Moment Queries
**Input Representations:** In this track, we use Slow-fast [3] and Omnivore [4] features provided by Ego4D. Inspired by the success of video masked autoencoders pretraining [16, 2] and egocentric video-language pretraining [8] methods, we additionally include EgoVLP [8] and InternVideo [2] features. Our method learns an MLP on each feature to project features and reduce dimension. Then these dimension-reduced features are concatenated and fed into the video encoder.
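As a minimal illustrative sketch of this projection-and-concatenation step (assuming PyTorch; the two-layer MLP design and tensor shapes are assumptions made here for concreteness, not a description of the released code):

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Project each pre-extracted feature stream with its own MLP, then concatenate.

    in_dims/out_dims follow the paper's setting (e.g. EgoVLP/InternVideo/SlowFast/
    Omnivore projected to 256/128/128/512); the MLP architecture is an assumption.
    """
    def __init__(self, in_dims, out_dims):
        super().__init__()
        self.mlps = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU(), nn.Linear(d_out, d_out))
             for d_in, d_out in zip(in_dims, out_dims)]
        )

    def forward(self, feats):
        # feats: list of tensors, each of shape (batch, time, d_in_k)
        projected = [mlp(f) for mlp, f in zip(self.mlps, feats)]
        return torch.cat(projected, dim=-1)  # (batch, time, sum(out_dims))
```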
**Feature Encoder and Sub-task Heads:** For the fused features, inherited from ASL [15], we employ a Transformer encoder and a pyramid network to encode the feature sequences into a multiscale representation. To enhance the representation, in the Transformer encoder we operate temporal attention and channel attention in parallel and then fuse the two outputs. Then we model the action sensitivity for classification and localization of each frame in action instances. Our solution is a dense method, so the feature sequences are then processed by two sub-task heads (i.e. a classification head and a localization head, composed of 1D temporal convolutions) to generate dense final predictions. Meanwhile, via our proposed ASL [15], we obtain the action sensitivity \(h(\bar{c})\in\mathbb{R}^{N_{f}}\) (disentangled into the classification and localization sub-tasks: \(h(\bar{c})\rightarrow\{h^{cls}(\bar{c}),h^{loc}(\bar{c})\}\)). \(h\) is further used in training.
**Loss Function:** For classification sub-task, we employ a focal loss [10] to classify each frame, along with action sensitivity for classification \(h^{cls}\) :
\[\mathcal{L}_{cls}=\frac{1}{N_{pos}}\sum_{i}(\mathbb{1}_{in_{i}}h_{i}^{cls}( \bar{c}_{i})\mathcal{L}_{\text{focal}_{i}}+\mathbb{1}_{bg_{i}}\mathcal{L}_{ \text{focal}_{i}}) \tag{1}\]
the indicators \(\mathbb{1}_{in_{i}},\mathbb{1}_{bg_{i}}\) denote if the \(i\)-th frame is within one ground-truth action or if belongs to the background, \(N_{pos}\) represents the number of frames within action segments, while \(\bar{c}_{i}\) indicates the action category of the \(i\)-th frame.
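A minimal sketch of how this weighting enters the classification loss (assuming PyTorch and a sigmoid focal loss; the tensor shapes and the alpha/gamma values are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, targets, h_cls, is_fg, gamma=2.0, alpha=0.25):
    # logits, targets: (T, C); targets are one-hot float rows (all zeros for background)
    # h_cls: (T,) classification sensitivity of each frame for its ground-truth class
    # is_fg: (T,) boolean mask, True for frames inside a ground-truth action
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    focal = (alpha_t * (1 - p_t) ** gamma * ce).sum(dim=-1)  # (T,)

    # Eq. (1): foreground frames are weighted by h_cls, background frames by 1
    weight = torch.where(is_fg, h_cls, torch.ones_like(h_cls))
    n_pos = is_fg.sum().clamp(min=1)
    return (weight * focal).sum() / n_pos
```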
For localization sub-task, we adopt a DIoU loss [24] applied on frames within ground-truth action instance, to regress offsets from current frames to boundaries, combined with action sensitivity for localization \(h^{loc}\):
Figure 1: The overall framework of our approach. (a) depicts our approach to Moment Queries Challenge. (b) shows our approach to Natural Language Queries Challenge. Both are based on Action Sensitivity Learning framework [15].
\[\mathcal{L}_{loc}=\frac{1}{N_{pos}}\sum_{i}(\mathbb{1}_{in_{i}}h_{i}^{loc}(\bar{c _{i}})\mathcal{L}_{\text{DIoU}_{i}}) \tag{2}\]
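Analogously, a sketch of the sensitivity-weighted DIoU term of Eq. (2) on 1-D temporal segments (assuming the regressed offsets have already been converted to absolute (start, end) pairs; shapes are assumptions):

```python
import torch

def weighted_diou_loss(pred, gt, h_loc, is_fg, eps=1e-6):
    # pred, gt: (T, 2) segments as (start, end); h_loc, is_fg: (T,)
    inter = (torch.min(pred[:, 1], gt[:, 1]) - torch.max(pred[:, 0], gt[:, 0])).clamp(min=0)
    union = (pred[:, 1] - pred[:, 0]) + (gt[:, 1] - gt[:, 0]) - inter
    iou = inter / union.clamp(min=eps)

    # DIoU penalty: squared distance between segment centers, normalised by the
    # length of the smallest enclosing segment.
    enclose = (torch.max(pred[:, 1], gt[:, 1]) - torch.min(pred[:, 0], gt[:, 0])).clamp(min=eps)
    center_dist = 0.5 * ((pred[:, 0] + pred[:, 1]) - (gt[:, 0] + gt[:, 1])).abs()
    diou = iou - (center_dist / enclose) ** 2

    n_pos = is_fg.sum().clamp(min=1)
    return (is_fg.float() * h_loc * (1.0 - diou)).sum() / n_pos
```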
**Ensemble strategy:** We trained our model with different hyperparameters and scales. We ensemble these models for better performance. The ensemble strategy is: for \(i\)-th model, we get the output logits for classification \(O_{cls}^{i}\in\mathbb{R}^{T\times C}\) and logits for localization \(O_{loc}^{i}\in\mathbb{R}^{T\times 2}\). Then we do mean pooling on these logits to get the final ensembled outputs \(O_{cls}=meanpooling(\{O_{cls}^{i}\}_{i=1}^{E}),O_{loc}=meanpooling(\{O_{loc}^{i }\}_{i=1}^{E})\), \(E\) is the number of models ensembled.
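The mean-pooling ensemble itself reduces to averaging the stacked per-model outputs; a sketch (output shapes assumed to be (T, C) and (T, 2)):

```python
import torch

def ensemble(cls_logits_list, loc_logits_list):
    # Stack the E per-model outputs and average them element-wise
    o_cls = torch.stack(cls_logits_list).mean(dim=0)  # (T, C)
    o_loc = torch.stack(loc_logits_list).mean(dim=0)  # (T, 2)
    return o_cls, o_loc
```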
### Track2: Natural Language Queries:
**Input Representations:** In this track, we also use EgoVLP [8] and InternVideo [2] features for videos. Following [11], we use CLIP (ViT-B/16) [13] to obtain an additional CLIP visual feature. All these features are projected by an MLP and concatenated in the channel dimension. For text queries, we use CLIP (ViT-L/14) [13] to extract token-wise text embeddings.
**Text and Video Encoder:** We use two unimodal encoders for video features and text features respectively. For visual features, our encoder adopts a similar design as the encoder in MQ track in 3.2. To encode text features, we utilize multiple Transformer layers consisting of a linear projection layer and self-attention layers.
**Multimodal Fusion and Sub-task Heads:** For multi-modal feature fusion, our solution is built on a stack of cross-attention layers. For each cross-attention layer, the query is obtained from the video features, while the key and value are obtained from the same text features. Natural Language Queries Challenge can also be decoupled into two sub-tasks: the first is to predict if a frame is in the final answer, and the second is to localize the answer's temporal boundary. Therefore we also adopt two sub-task heads: i) a classification head outputting a binary score for each frame; ii) a localization head outputting two distances from this frame to the start time and end time. The design of the heads is the same as in 3.2. Since ground truths in this track do not involve action categories, we only use a one-class Gaussian to model the action sensitivity of frames (also decoupled into classification and localization sub-tasks) and apply this to the classification and localization losses.
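A minimal sketch of one such fusion layer (assuming PyTorch's `nn.MultiheadAttention`; the layer count, residual connection and normalisation are assumptions, not the released design):

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=512, num_heads=16):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video, text, text_padding_mask=None):
        # video: (B, T, dim) queries; text: (B, L, dim) keys/values
        fused, _ = self.attn(query=video, key=text, value=text,
                             key_padding_mask=text_padding_mask)
        return self.norm(video + fused)
```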
**Loss Function:** Our loss function is similar to 3.2, using Focal loss to classify and DIoU loss to localize combined with learned action sensitivity. Besides, we also utilize a NCE loss [11, 12], where for a text query, we consider a frame positive if it is inside a ground-truth answer and consider a frame negative if it is background.
**Ensemble strategy:** To further enhance the performance, we ensemble our model with [14]. We re-trained NaQ with EgoVLP [8] and InternVideo [2] features, which output topK results with respective confidence scores. Our ASL-based solution also outputs topK results. We sort these predictions according to confidence scores and take the top-5 as the final results. The experimental result shows the efficacy of ensembling.
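A sketch of the merging step (each prediction is assumed here to be a (start, end, score) tuple, sorted by the score field):

```python
def merge_topk(preds_a, preds_b, k=5):
    # Concatenate both candidate lists, sort by confidence, keep the k best
    merged = sorted(preds_a + preds_b, key=lambda p: p[2], reverse=True)
    return merged[:k]
```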
## 4 Experiments
### Implementation Details
For our solution to Moment Queries Challenge, we upsample the input video length to 1024. EgoVLP, InternVideo, Slowfast, and Omnivore features are respectively projected to 256, 128, 128, and 512 dimensions. The model embedding dimension is set to 1024. The number of attention heads is set to 16. The number of FPN layers is set to 8. We use a mini-batch size of 2, 10 epochs and a learning rate of \(1e^{-4}\) with cosine weight decay and 5-epoch warm-up training strategies. During inference, the initial dense predictions are compressed with SoftNMS [1], retaining 2000 final predictions for submission. When ensembling, we use 3 models (i.e. \(E=3\)).
For our solution to Natural Language Queries Challenge, EgoVLP, InternVideo, and CLIP are respectively projected to 256, 512, and 256 dimensions. In the backbone, we use
\begin{table}
\begin{tabular}{l l|c c} \hline \hline Method & Feature & Average mAP & Recall@1x, tIoU=0.5 \\ \hline Actionformer [20] & EgoVLP [8] & 20.60 & 37.12 \\ Actionformer [20] & SF + OV + EgoVLP [8] & 21.40 & 38.73 \\ InternVideo [2] & InternVideo [2] & 23.59 & 41.13 \\ \hline Ours base & EgoVLP [8] & 20.79 & 39.26 \\ Ours base & SF + OV + EgoVLP [8] & 22.02 & 40.12 \\ Ours base + ASL & EgoVLP [8] & 22.83 & 40.67 \\ Ours base + ASL & SF + OV + EgoVLP [8] & 24.15 & 41.49 \\ Ours base + ASL & InternVideo [2] + EgoVLP [8] & **27.85** & **46.98** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Results on the val set of Moment Queries Challenge:** SF and OV denote Slowfast [3] and Omnivore [4] features. The best results are in **bold**.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Entry & Average mAP & Recall@1x, tIoU=0.5 \\ \hline InternVideo [2] & 23.99 & 44.24 \\ mzs & 26.62 & 45.69 \\ asl-ego4d(ours) & 29.34 & 48.50 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Results on the test set of Moment Queries Challenge.**
16 attention heads along with 512 dimensions. In training, we use a mini-batch of 16 and a learning rate of \(1e^{-3}\) also with warm-up and cosine weight decay strategies. During inference, we sort the predictions according to confidence scores after processing with SoftNMS [1], leaving 5 predictions for final outputs and for ensembling.
All experiments are conducted on an NVIDIA Tesla V100 GPU. For Moment Queries Challenge, we report mean average precision (mAP) at tIoU thresholds [0.1:0.1:0.5] and report their average, i.e., the average mAP. Following official rules, we also report Recall@1x at tIoU=0.5, where x stands for the number of ground-truth instances of an action category. For the Natural Language Queries Challenge, we report Recall@1 (R@1) and Recall@5 (R@5) at tIoU thresholds of 0.3 and 0.5. For both tracks, we train our model on the training set when reporting results on the validation set (Table 1 and 3) and train our model on the combination of the training and validation sets for the final submission to the test server (Table 2 and 4).
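For reference, a sketch of the temporal IoU and of Recall@1x at a single threshold, as we read the metric (per category and per video; predictions assumed sorted by confidence):

```python
def t_iou(a, b):
    # a, b: (start, end) segments
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_1x(preds, gts, thr=0.5):
    # Keep the top-x predictions, x = number of ground-truth instances,
    # and count how many ground truths are covered at tIoU >= thr.
    x = len(gts)
    kept = preds[:x]
    hits = sum(any(t_iou(p, g) >= thr for p in kept) for g in gts)
    return hits / x if x else 0.0
```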
### Results: Moment Queries
The comparison results on the validation set are shown in Table 1. Ours base means utilizing an Actionformer-like [20] model, which forwards videos into multi-scale Transformers and directly predicts categories and boundaries. Ours base + ASL means utilizing Action Sensitivity Learning to recalibrate the training process. Using the same features of Slowfast [3], Omnivore [4] and EgoVLP [8], our method outperforms Actionformer [20] by 2.23 average mAP. Combining the strong features of InternVideo [2] and EgoVLP [8], our method gains a total of 4.26 average mAP and 5.85 Recall@1x (tIoU=0.5) compared to the champion [2] in Moment Queries Challenge@ECCV 2022. Besides, our proposed Action Sensitivity Learning [15] contributes a 2.13 improvement in average mAP, indicating its efficacy. As shown in Table 2, our ensembled method finally obtains 27.85 average mAP, surpassing last year's top-ranked method by 4.26 on average mAP and taking 1-st place on the leaderboard of this challenge. However, the improvement of Recall@1x (tIoU=0.5) is not as significant as that of average mAP.
### Results: Natural Language Queries
In the validation set of Natural Language Queries Challenge, the results are shown in Table 3. Compared to Actionformer [20] (the runner-up in last year's challenge), our method improves mean R@1 by 1.22 under a fair comparison. Among all these features (Slowfast [3], Omnivore [4], EgoVLP [8], InternVideo [2]), a combination of EgoVLP and InternVideo yields the best results, as these two features are extracted using large-scale in-domain pretraining models. Besides, Action Sensitivity Learning, though aimed at the task of Temporal Action Localization, also brings a gain of 0.73 in mean R@1. On the leaderboard of this challenge (Table 4), our ensembled method finally arrives at 19.79 mean R@1, 2.12 higher than the baseline (NaQ [14]), and ranks second.
## 5 Limitation and Discussion
Our key contribution in this solution is to capture the discrepant action sensitivity of different frames and apply the action sensitivity as a weighting for the losses to recalibrate training. Combined with stronger features and ensemble strategies, our method achieves good results on both Moment Queries and Natural Language Queries. As for limitations and future directions, especially for the NLQ challenge, our solution does not consider the egocentric context. Meanwhile, more elaborate multimodal fusion designs may be explored, while our method just uses simple cross-attention layers.
\begin{table}
\begin{tabular}{c|c c c|c c} \hline \multirow{2}{*}{Entry} & \multicolumn{3}{c}{R@1(\%)} & \multicolumn{3}{c}{R@5 (\%)} \\ & IoU=0.3 & IoU=0.5 & mean & IoU=0.3 & IoU=0.5 \\ \hline NaQ [14] & 21.70 & 13.64 & 17.67 & 25.12 & 16.33 \\ ego-env & 23.28 & 14.36 & 18.82 & 27.25 & 17.58 \\ asl-nlo(ours) & 24.13 & 15.46 & 19.79 & 34.37 & 23.18 \\ GroundNLQ & 25.67 & 18.18 & 21.93 & 42.06 & 29.80 \\ \hline \end{tabular}
\end{table}
Table 4: **Results on the test set of Natural Language Queries Challenge.** The prime metric is mean R@1.
\begin{table}
\begin{tabular}{l l|c c c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Feature} & \multicolumn{3}{c}{R@1(\%)} & \multicolumn{3}{c}{R@5 (\%)} \\ & & IoU=0.3 & IoU=0.5 & mean & IoU=0.3 & IoU=0.5 \\ \hline
2D-TAN [22] & SF & 5.04 & 2.02 & 3.53 & 12.89 & 5.88 \\ VSLNet [21] & SF & 5.45 & 3.12 & 4.28 & 10.74 & 6.63 \\ ReLER [11] & SF + OV + CLIP [13] & 11.33 & 7.05 & 9.19 & 14.77 & 8.98 \\ Actionformer [12] & SF + OV + EgoVLP [8]+CLIP [13] & 15.72 & 10.12 & 12.92 & 34.64 & 23.64 \\ InternVideo [2] & InternVideo [16] & 15.64 & 10.17 & 12.91 & 24.78 & 18.30 \\ \hline Ours base & SF + OV + CLIP [13] & 13.80 & 9.51 & 11.66 & 35.28 & 23.13 \\ Ours base + ASL & SF + OV + CLIP [13] & 14.79 & 9.98 & 12.39 & 35.13 & 23.55 \\ Ours base + ASL & SF + OV + EgoVLP [8]+CLIP [13] & 16.93 & 11.36 & 14.14 & 35.77 & 23.49 \\ Ours base + ASL & InternVideo [16] + EgoVLP [8] & **22.62** & **15.64** & **19.13** & **46.86** & **32.16** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Results on the val set of Natural Language Queries Challenge:** SF and OV denote Slowfast [3] and Omnivore [4] features. The best results are in **bold**. The prime metric is mean R@1. | In this report, we present ReLER's submission to two tracks of the Ego4D Episodic Memory Benchmark, namely Natural Language Queries and Moment Queries. The solution inherits from our proposed Action Sensitivity Learning framework (ASL) to better capture the discrepant information of frames. In addition, we incorporate stronger video features and fusion strategies. Our method achieves an average mAP of 29.34, ranking 1st in the Moment Queries Challenge, and 19.79 mean R@1, ranking 2nd in the Natural Language Queries Challenge. Our code will be released. |
2304.08415 | Exciton band structure of V$_2$O$_5$ | Excitonic effects due to the correlation of electrons and holes in excited
states of matter dominate the optical spectra of many interesting materials.
They are usually studied in the long-wavelength limit. Here we investigate
excitons at non-vanishing momentum transfer, corresponding to shorter
wavelengths. We calculate the exciton dispersion in the prototypical layered
oxide V$_2$O$_5$ by solving the Bethe-Salpeter equation of many-body
perturbation theory. We discuss the change of excitation energy and intensity
as a function of wavevector for bright and dark excitons, respectively, and we
analyze the origin of the excitons along their dispersion. We highlight the
important role of the electron-hole exchange with its impact on the exciton
dispersion, the singlet-triplet splitting and the difference between the
imaginary part of the macroscopic dielectric function and the loss function. | Vitaly Gorelov, Lucia Reining, Matteo Gatti | 2023-04-17T16:30:20 | http://arxiv.org/abs/2304.08415v1 | # Exciton band structure of V\({}_{2}\)O\({}_{5}\)
###### Abstract
Excitonic effects due to the correlation of electrons and holes in excited states of matter dominate the optical spectra of many interesting materials. They are usually studied in the long-wavelength limit. Here we investigate excitons at non-vanishing momentum transfer, corresponding to shorter wavelengths. We calculate the exciton dispersion in the prototypical layered oxide V\({}_{2}\)O\({}_{5}\) by solving the Bethe-Salpeter equation of many-body perturbation theory. We discuss the change of excitation energy and intensity as a function of wavevector for bright and dark excitons, respectively, and we analyze the origin of the excitons along their dispersion. We highlight the important role of the electron-hole exchange with its impact on the exciton dispersion, the singlet-triplet splitting and the difference between the imaginary part of the macroscopic dielectric function and the loss function.
## 1 Introduction
Electronic excitations in materials happen as a response to an external perturbation [1]. In the course of this response, charge is re-arranged and the interaction plays an important role. Therefore, excitations encode much information about fundamental correlation effects, and they are crucial for many technological applications. Spectroscopies probing excitations are of great importance to understand the properties of materials, they are used as characterization tools, and they may be directly linked to questions of technological impact, such as, for example, optical absorption that is a key step in photovoltaic devices. Because of the interaction between electrons, thinking of electronic excitations as a sum of the excitation of individual electrons rapidly meets its limits [2, 3]. In particular, in optical excitations, where in a single particle picture electrons are promoted from valence to conduction states, the "hole" created in the valence band cannot be neglected. This missing electron can be described as an effective positive charge that attracts the electron in the conduction band, which leads to the so-called excitonic effects. In this way, even bound states can form that may be detected in experiments as absorption structures that lie within the fundamental electron addition-removal gap. These bound excitons, as well as strong changes in the oscillator strength of the continuum, dominate the absorption spectra of many semiconductors and | Excited state effects due to the correlation of electrons and holes in the excited states of matter dominate the optical spectra of many interesting materials. They are usually studied in the long-wavelength limit. Here we investigate excitons at non-vanishing momentum transfer, corresponding to shorter wavelengths. We calculate the exciton dispersion in the prototypical layered oxide V$_2$O$_5$ by solving the Bethe-Salpeter equation of many-body perturbation theory. We discuss the change of excitation energy and intensity as a function of wavevector for bright and dark excitons, respectively, and we analyze the origin of the excitons along their dispersion. We highlight the important role of the electron-hole exchange with its impact on the excitonic dispersion, singlet-triplet splitting and the difference between the imaginary part of the macroscopic dielectric function and the loss function. |
2307.11414 | The Derived Deligne Conjecture | Derived $A_\infty$-algebras have a wealth of theoretical advantages over
regular $A_\infty$-algebras. However, due to their bigraded nature, in practice
they are often unwieldy to work with. We develop a framework involving brace
algebras on operads which allows us to study derived $A_\infty$ algebras in a
new conceptual context. One particular advantage is that this construction
allows us to generalize the Lie algebra structure on the Hochschild complex of
an $A_\infty$-algebra, obtaining new and rigorous versions of the Deligne
conjecture. | Javier Aguilar Martín, Constanze Roitzheim | 2023-07-21T08:16:23 | http://arxiv.org/abs/2307.11414v3 | # The derived Deligne conjecture
###### Abstract.
This paper is based on the author's PhD Thesis under the supervision of Constanze Roitzheim. This research was funded by a GTA scholarship at the University of Kent. The author would also like to thank Sarah Whitehouse for her contributions.
Key words and phrases: Operads, Derived \(A_{\infty}\)-algebras, Enriched categories, Deligne conjecture. 2020 Mathematics Subject Classification: 18M70, 18M60, 18N70.
###### Contents
* 1 Introduction
* 2 Background and conventions
* 2.1 \(A_{\infty}\)-algebras
* 2.2 For the study of derived \(A_{\infty}\)-algebras
* 3 Operadic suspension
* 3.1 Functorial properties of operadic suspension
* 4 Brace algebras
* 4.1 Brace algebra structure on an operad
* 4.2 Reinterpretation of \(\infty\)-morphisms
* 5 \(A_{\infty}\)-algebra structures on operads
* 5.1 Iterating the process
* Commutativity of the right blue square
* Commutativity of the left blue square
* 5.2 Explicit \(A_{\infty}\)-algebra structure and Deligne conjecture
* 6 Derived \(A_{\infty}\)-algebras and filtered \(A_{\infty}\)-algebras
* 6.1 Derived \(A_{\infty}\)-algebras
* 6.2 Filtered \(A_{\infty}\)-algebras
* 7 Operadic totalization and vertical operadic suspension
* 8 The derived Deligne conjecture
* 9 Future research
* 9.1 Boundedness conditions
* 9.2 Hochschild Cohomology
* A Combinatorics
* B Koszul sign on operadic suspension
* C Sign of the braces
## 1. Introduction
There are a number of mathematical fields in which \(A_{\infty}\)-structures arise, ranging from topology to mathematical physics. To study these structures, different interpretations of \(A_{\infty}\)-algebras have been given. From the original definition in 1963 [10], to alternative definitions in terms of tensor coalgebras [11], [12], many approaches use the machinery of operads [13], [14] or certain Lie brackets [15] to obtain these objects.
Another technique to describe \(A_{\infty}\)-structures comes from brace algebras [10],[17], which often involves unwieldy sign calculations that are difficult to describe in a conceptual way.
Here we use an operadic approach to obtain these signs in a more conceptual and consistent way. As a consequence, we generalize the Lie bracket used in [15] and give a very simple interpretation of \(A_{\infty}\)-algebras. The difference between our operadic approach and the others mentioned before is that ours uses much more elementary tools and can be used to talk about \(A_{\infty}\)-structures on any operad. We hope that this provides a useful way of thinking about \(A_{\infty}\)-structures. A first application of this simple formulation is the generalization of the Deligne conjecture. The classical Deligne conjecture states that the Hochschild complex of an associative algebra has the structure of a homotopy \(G\)-algebra [11]. This result has its roots in the theory of topological operads [12]. Since \(A_{\infty}\)-algebras generalize associative algebras, it is natural to ask what sort of algebraic structure arises
on their Hochschild complex. Thanks to the tools we develop, we are able to answer this question.
Later in 2009, derived \(A_{\infty}\)-algebras were introduced by Sagave [14] as a bigraded generalization of \(A_{\infty}\)-algebras in order to bypass the projectivity assumptions that are often required when working with classical \(A_{\infty}\)-algebras. We generalize the operadic description of classical \(A_{\infty}\)-algebras to the derived case by means of an operadic totalization inspired by the totalization functor described in [10]. This way we obtain an operation similar to the star operation in [11] and generalize the construction that has been done for \(A_{\infty}\)-algebras to more general derived \(A_{\infty}\)-algebras.
The text is organized as follows. In Section 2 we recall some basic definitions and establish some conventions for both the classical and the derived cases. In Section 3 we define a device called _operadic suspension_ that will help us obtain the signs that we want and link this device to the classical operadic approach to \(A_{\infty}\)-algebras. We also take this construction to the level of the underlying collections of the operads to obtain a nice description of \(\infty\)-morphisms of \(A_{\infty}\)-algebras. We then explore the functorial properties of operadic suspension, with monoidality (Proposition 3.14) being the most remarkable of them. In Section 4 we study the brace algebra induced by operadic suspension and obtain a relevant result, Proposition 4.3, which establishes a relation between the canonical brace structure on an operad and the one induced by its operadic suspension. We show that as a particular case of this result we obtain the Lie bracket from [11].
Following the terminology of [10], if \(\mathcal{O}\) is an operad with an \(A_{\infty}\)-multiplication \(m\in\mathcal{O}\), it is natural to ask whether there are linear maps \(M_{j}:\mathcal{O}^{\otimes j}\to\mathcal{O}\) satisfying the \(A_{\infty}\)-algebra axioms. In Section 5 we use the aforementioned brace structure to define such linear maps on a shifted version of the operadic suspension. We then iterate this process in Section 5.1 to define an \(A_{\infty}\)-structure on the Hochschild complex of an operad with \(A_{\infty}\)-multiplication. This iteration process was inspired by the work of Getzler in [12].
Next, we prove our first main result, Theorem 5.7, which relates the \(A_{\infty}\)-structure on an operad with the one induced on its Hochschild complex. More precisely, we have the following.
**Theorem A**.: _There is a morphism of \(A_{\infty}\)-algebras \(\Phi:S\mathfrak{s}\mathcal{O}\to S\mathfrak{s}\operatorname{End}_{S \mathfrak{s}\mathcal{O}}\)._
This result was hinted at by Gerstenhaber and Voronov in [10], but here we introduce a suitable context and prove it as Theorem 5.7. We also draw a connection between our framework and the one from Gerstenhaber and Voronov. As a consequence of this theorem, if \(A\) is an \(A_{\infty}\)-algebra and \(\mathcal{O}=\operatorname{End}_{A}\) its endomorphism operad, we obtain the following \(A_{\infty}\)-version of the Deligne conjecture in Corollary 5.12.
**Theorem B**.: _The Hochschild complex \(S\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\) of an operad with an \(A_{\infty}\)-multiplication has a structure of a \(J\)-algebra._
In the above theorem, \(J\)-algebras play the role of homotopy \(G\)-algebras in the classical case [10]. After this, we move to the bigraded case. The goal for the bigraded section is showing that an operad \(\mathcal{O}\) with a derived \(A_{\infty}\)-multiplication \(m\in\mathcal{O}\) can be endowed with the structure of a derived \(A_{\infty}\)-algebra, just like in the classical case. We start recalling some definitions of derived \(A_{\infty}\)-algebras and filtered \(A_{\infty}\)-algebras in Section 6. In Section 7, we define the totalization functor for operads and then the bigraded version of operadic suspension. We combine these two constructions to define an operation that allows us to understand a derived \(A_{\infty}\)-multiplication as a Maurer-Cartan element. As a consequence we obtain the star operation that was introduced in [11], which also defines a Lie Bracket. From this, we obtain in Section 7.3 a brace structure from which we can obtain a classical \(A_{\infty}\)-algebra on the graded operad \(S\mathrm{Tot}(\mathfrak{s}\mathcal{O})\). Finally, in Section 8, we prove our main results about derived \(A_{\infty}\)-algebras. The first one is Theorem 8.3, which shows that, under mild boundedness assumptions, the \(A_{\infty}\)-structure on totalization is equivalent to a derived \(A_{\infty}\)-algebra on \(S\mathfrak{s}\mathcal{O}\).
**Theorem C**.: _For any operad \(\mathcal{O}\) with a derived \(A_{\infty}\)-multiplication there are linear maps \(M_{ij}:(S\mathfrak{s}\mathcal{O})^{\otimes j}\to S\mathfrak{s}\mathcal{O}\), satisfying the derived \(A_{\infty}\)-algebra axioms._
The next result is Theorem 8.8, which generalizes Theorem 5.7 to the derived setting. More precisely,
**Theorem D**.: _There is a morphism of derived \(A_{\infty}\)-algebras \(\Phi:S\mathfrak{s}\mathcal{O}\to S\mathfrak{s}\operatorname{End}_{S \mathfrak{s}\mathcal{O}}\)._
As a consequence of this theorem we obtain a new version of the Deligne conjecture, Corollary 8.10, this time in the setting of derived \(A_{\infty}\)-algebras. For this we also introduce a derived version of \(J\)-algebras.
**Theorem E**.: _The Hochschild complex \(S\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\) of an operad with a derived \(A_{\infty}\)-multiplication has a structure of derived \(J\)-algebra._
We finish the article in Section 9 by outlining some open questions about derived \(A_{\infty}\)-algebras and their Hochschild cohomology that arise from our research.
## 2. Background and conventions
This section includes all necessary background and the conventions we use throughout the article. It is divided in two parts, one corresponding to classical \(A_{\infty}\)-algebras and another one for derived \(A_{\infty}\)-algebras. We will also discuss these topics in the language of operads, so we assume the reader to be familiar with them. For a full survey of operads we refer the reader to [13].
### \(A_{\infty}\)-algebras
Let us start by recalling some background definitions and results that we will need to study \(A_{\infty}\)-algebras, as well as stating some conventions.
We assume that the reader is familiar with the basic definitions regarding \(A_{\infty}\)-algebras and operads, but we are going to briefly recall some of them in this section to establish notation and assumptions. For more details and intuition, the reader is referred to [10] and [11, §9.2].
Our base category is the category of graded \(R\)-modules and linear maps, where \(R\) is a fixed commutative ring with unit of characteristic not equal to \(2\). All tensor products are taken over \(R\). We denote the \(i\)-th degree component of \(A\) as \(A^{i}\). If \(x\in A^{i}\) we write \(\deg(x)=i\) and we use cohomological grading. The symmetry isomorphism is given by the following Koszul sign convention.
\[\tau_{A,B}:A\otimes B\to B\otimes A,\,x\otimes y\mapsto(-1)^{\deg(x)\deg(y)}y \otimes x\]
A map \(f:A\to B\) of degree \(i\) satisfies \(f(A^{n})\subseteq B^{n+i}\) for all \(n\). The \(R\)-modules \(\operatorname{Hom}_{R}(A,B)\) are naturally graded by
\[\operatorname{Hom}_{R}(A,B)^{i}=\prod_{k}\operatorname{Hom}_{R}(A^{k},B^{k+i}).\]
We also adopt the following Koszul sign convention: for \(x\in A\), \(y\in B\), \(f\in\operatorname{Hom}_{R}(A,C)\) and \(g\in\operatorname{Hom}_{R}(B,D)\)
\[(f\otimes g)(x\otimes y)=(-1)^{\deg(x)\deg(g)}f(x)\otimes g(y).\]
In particular, \(\operatorname{Hom}_{R}(A,B)^{i}\) consists of the homomorphisms that raise the degree by \(i\).
**Definition 2.1**.: _For a graded \(R\)-module \(A\) the shift or suspension of \(A\) is the graded \(R\)-module \(SA\) with \(SA^{i}=A^{i-1}\)._
**Definition 2.2**.: _An \(A_{\infty}\)-algebra is a graded \(R\)-module \(A\) together with a family of maps \(m_{n}:A^{\otimes n}\to A\) of degree \(2-n\) such that for all \(n\geq 1\)_
\[\sum_{r+s+t=n}(-1)^{rs+t}m_{r+t+1}(1^{\otimes r}\otimes m_{s}\otimes 1^{ \otimes t})=0. \tag{1}\]
The above equation will sometimes be referred to as the \(A_{\infty}\)_-equation_.
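For orientation, expanding Equation (1) for small \(n\) (with the sign \((-1)^{rs+t}\) used above) gives
\[m_{1}m_{1}=0,\qquad m_{1}m_{2}=m_{2}(m_{1}\otimes 1)+m_{2}(1\otimes m_{1}),\]
\[m_{2}(m_{2}\otimes 1)-m_{2}(1\otimes m_{2})=m_{1}m_{3}+m_{3}(m_{1}\otimes 1\otimes 1+1\otimes m_{1}\otimes 1+1\otimes 1\otimes m_{1})\]
for \(n=1,2,3\) respectively: \(m_{1}\) is a differential, it is a derivation with respect to \(m_{2}\), and \(m_{2}\) is associative up to the homotopy \(m_{3}\).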
**Definition 2.3**.: _An \(\infty\)-morphism of \(A_{\infty}\)-algebras \(A\to B\) is a family of maps_
\[f_{n}:A^{\otimes n}\to B\]
_of degree \(1-n\) such that for all \(n\geq 1\)_
\[\sum_{r+s+t=n}(-1)^{rs+t}f_{r+1+t}(1^{\otimes r}\otimes m_{s}^{A}\otimes 1^{ \otimes t})=\sum_{i_{1}+\cdots+i_{k}=n}(-1)^{s}m_{k}^{B}(f_{i_{1}}\otimes \cdots\otimes f_{i_{k}}),\]
_where \(s=\sum_{\alpha<\beta}i_{\alpha}(1-i_{\beta})\). The composition of \(\infty\)-morphisms \(f:A\to B\) and \(g:B\to C\) is given by_
\[(gf)_{n}=\sum_{r}\sum_{i_{1}+\cdots+i_{r}=n}(-1)^{s}g_{r}(f_{i_{1}}\otimes\cdots \otimes f_{i_{r}}).\]
**Definition 2.4**.: _A morphism of \(A_{\infty}\)-algebras is a map of \(R\)-modules \(f:A\to B\) such that_
\[f(m_{j}^{A})=m_{j}^{B}\circ f^{\otimes j}.\]
### For the study of derived \(A_{\infty}\)-algebras
Now we move to the categories and conventions that we need in order to study derived \(A_{\infty}\)-algebras. The idea is that we would like to apply what we obtain in the setting of \(A_{\infty}\)-algebras to the derived setting. In order to do that, we need a way to connect a singly graded category with a bigraded category. This is usually done through totalization. But in order to properly translate \(A_{\infty}\)-algebras into totalized derived \(A_{\infty}\)-algebras we need to go through several suitably enriched categories that are defined in this section. Most of the definitions come from [1, §2] but we adapt them here to our conventions.
Let \(\mathcal{C}\) be a category and let \(A\), \(B\) be arbitrary objects in \(\mathcal{C}\). We denote by \(\operatorname{Hom}_{\mathcal{C}}(A,B)\) the set of morphisms from \(A\) to \(B\) in \(\mathcal{C}\). If \((\mathcal{C},\otimes,1)\) is a closed symmetric monoidal category, then we denote its internal hom-object by \([A,B]\in\mathcal{C}\).
#### 2.2.1. Filtered Modules and complexes
First, we collect some definitions about filtered modules and filtered complexes. Filtrations will allow to add an extra degree to single-graded objects that will be necessary in order to connect them with bigraded objects.
**Definition 2.5**.: _A filtered \(R\)-module\((A,F)\) is given by a family of \(R\)-modules\(\{F_{p}A\}_{p\in\mathbb{Z}}\) indexed by the integers such that \(F_{p}A\subseteq F_{p-1}A\) for all \(p\in\mathbb{Z}\) and \(A=\bigcup_{p}F_{p}A\). A morphism of filtered modules is a morphism \(f:A\to B\) of \(R\)-modules which is compatible with filtrations: \(f(F_{p}A)\subset F_{p}B\) for all \(p\in\mathbb{Z}\)._
We denote by \(\mathrm{C}_{R}\) the category of cochain complexes of \(R\)-modules.
**Definition 2.6**.: _A filtered complex\((K,d,F)\) is a complex \((K,d)\in\mathrm{C}_{R}\) together with a filtration \(F\) of each \(R\)-module \(K^{n}\) such that \(d(F_{p}K^{n})\subset F_{p}K^{n+1}\) for all \(p,n\in\mathbb{Z}\). Its morphisms are given by morphisms of complexes \(f:K\to L\) compatible with filtrations._
We denote by \(\operatorname{fMod}_{R}\) and \(\operatorname{fC}_{R}\) the categories of filtered modules and filtered complexes of \(R\)-modules, respectively.
**Definition 2.7**.: _The tensor product of two filtered \(R\)-modules\((A,F)\) and \((B,F)\) is the filtered \(R\)-module with_
\[F_{p}(A\otimes B):=\sum_{i+j=p}\operatorname{Im}(F_{i}A\otimes F_{j}B\to A \otimes B).\]
_This makes the category of filtered \(R\)-modules into a symmetric monoidal category, where the unit is given by \(R\) with the trivial filtration \(0=F_{1}R\subset F_{0}R=R\)._
**Definition 2.8**.: _Let \(K\) and \(L\) be filtered complexes. We define \(\underline{\mathrm{Hom}}(K,L)\) to be the filtered complex whose underlying cochain complex is \(\mathrm{Hom}_{\mathrm{C}_{R}}(K,L)\) and the filtration \(F\) given by_
\[F_{p}\underline{\mathrm{Hom}}(K,L)=\{f:K\to L\mid f(F_{q}K)\subset F_{q+p}L\text { for all }q\in\mathbb{Z}\}.\]
_In particular, \(\mathrm{Hom}_{\mathrm{Mod}_{R}}(K,L)=F_{0}\underline{\mathrm{Hom}}(K,L)\)._
#### 2.2.2. Bigraded modules, vertical bicomplexes, twisted complexes and sign conventions
We collect some basic definitions of bigraded categories that we need to use and we establish some conventions.
**Definition 2.9**.: _We consider \((\mathbb{Z},\mathbb{Z})\)-bigraded \(R\)-modules \(A=\{A_{i}^{j}\}\), where elements of \(A_{i}^{j}\) are said to have bidegree \((i,j)\). We sometimes refer to \(i\) as the horizontal degree and \(j\) the vertical degree. The total degree of an element \(x\in A_{i}^{j}\) is \(i+j\) and is denoted \(|x|\)._
**Definition 2.10**.: _A morphism of bidegree \((p,q)\) maps \(A_{i}^{j}\) to \(A_{i+p}^{j+q}\). The tensor product of two bigraded \(R\)-modules \(A\) and \(B\) is the bigraded \(R\)-module \(A\otimes B\) given by_
\[(A\otimes B)_{i}^{j}\coloneqq\bigoplus_{p,q}A_{p}^{q}\otimes B_{i-p}^{j-q}.\]
We denote by \(\mathrm{bgMod}_{R}\) the category whose objects are bigraded \(R\)-modules and whose morphisms are morphisms of bigraded \(R\)-modules of bidegree \((0,0)\). It is symmetric monoidal with the above tensor product.
We introduce the following scalar product notation for bidegrees: for \(x\), \(y\) of bidegree \((x_{1},x_{2})\), \((y_{1},y_{2})\) respectively, we let \(\langle x,y\rangle=x_{1}y_{1}+x_{2}y_{2}\).
The symmetry isomorphism is given by
\[\tau_{A\otimes B}:A\otimes B\to B\otimes A,\ x\otimes y\mapsto(-1)^{\langle x,y\rangle}y\otimes x.\]
We follow the Koszul sign rule: if \(f:A\to B\) and \(g:C\to D\) are bigraded morphisms, then the morphism \(f\otimes g:A\otimes C\to B\otimes D\) is defined by
\[(f\otimes g)(x\otimes z)\coloneqq(-1)^{\langle g,x\rangle}f(x)\otimes g(z).\]
**Definition 2.11**.: _A vertical bicomplex is a bigraded \(R\)-module \(A\) equipped with a vertical differential \(d^{A}:A\to A\) of bidegree \((0,1)\). A morphism of vertical bicomplexes is a morphism of bigraded modules of bidegree \((0,0)\) commuting with the vertical differential._
We denote by \(\mathrm{vbC}_{R}\) the category of vertical bicomplexes. The tensor product of two vertical bicomplexes \(A\) and \(B\) is given by endowing the tensor product of underlying bigraded modules with vertical differential
\[d^{A\otimes B}:=d^{A}\otimes 1+1\otimes d^{B}:(A\otimes B)_{u}^{v}\to(A \otimes B)_{u}^{v+1}.\]
This makes \(\mathrm{vbC}_{R}\) into a symmetric monoidal category.
The symmetric monoidal categories \((\mathrm{C}_{R},\otimes,R)\), \((\mathrm{bgMod}_{R},\otimes,R)\) and \((\mathrm{vbC}_{R},\otimes,R)\) are related by embeddings \(\mathrm{C}_{R}\to\mathrm{vbC}_{R}\) and \(\mathrm{bgMod}_{R}\to\mathrm{vbC}_{R}\) which are monoidal and full.
**Definition 2.12**.: _Let \(A,B\) be bigraded modules. We define \([A,B]_{*}^{*}\) to be the bigraded module of morphisms of bigraded modules \(A\to B\). Furthermore, if \(A,B\) are vertical bicomplexes, and \(f\in[A,B]_{u}^{v}\), we define_
\[\delta(f):=d_{B}f-(-1)^{v}fd_{A}.\]
**Lemma 2.13**.: _If \(A\), \(B\) are vertical bicomplexes, then \(([A,B]_{*}^{*},\delta)\) is a vertical bicomplex._
Proof.: Direct computation shows \(\delta^{2}=0\).
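Explicitly, for \(f\in[A,B]_{u}^{v}\),
\[\delta^{2}(f)=d_{B}^{2}f-(-1)^{v}d_{B}fd_{A}+(-1)^{v}d_{B}fd_{A}-fd_{A}^{2}=0,\]
using \(d_{A}^{2}=d_{B}^{2}=0\) and the fact that \(\delta(f)\) has vertical degree \(v+1\).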
**Definition 2.14**.: _A twisted complex \((A,d_{m})\) is a bigraded \(R\)-module \(A=\{A_{i}^{j}\}\) together with a family of morphisms \(\{d_{m}:A\to A\}_{m\geq 0}\) of bidegree \((m,1-m)\) such that for all \(m\geq 0\),_
\[\sum_{i+j=m}(-1)^{i}d_{i}d_{j}=0.\]
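Explicitly, the cases \(m=0,1,2\) read
\[d_{0}d_{0}=0,\qquad d_{0}d_{1}=d_{1}d_{0},\qquad d_{1}d_{1}=d_{0}d_{2}+d_{2}d_{0},\]
so \(d_{0}\) is a vertical differential, \(d_{1}\) commutes with it, and \(d_{1}\) squares to zero up to the homotopy given by \(d_{2}\).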
**Definition 2.15**.: _A morphism of twisted complexes \(f:(A,d_{m}^{A})\to(B,d_{m}^{B})\) is given by a family of morphisms of \(R\)-modules \(\{f_{m}:A\to B\}_{m\geq 0}\) of bidegree \((m,-m)\) such that for all \(m\geq 0\),_
\[\sum_{i+j=m}d_{i}^{B}f_{j}=\sum_{i+j=m}(-1)^{i}f_{i}d_{j}^{A}.\]
_The composition of morphisms is given by \((g\circ f)_{m}:=\sum_{i+j=m}g_{i}f_{j}\)._
_A morphism \(f=\{f_{m}\}_{m\geq 0}\) is said to be strict if \(f_{i}=0\) for all \(i>0\). The identity morphism \(1_{A}:A\to A\) is the strict morphism given by \((1_{A})_{0}(x)=x.\) A morphism \(f=\{f_{i}\}\) is an isomorphism if and only if \(f_{0}\) is an isomorphism of bigraded \(R\)-modules._
Note that if \(f\) is an isomorphism, then an inverse of \(f\) is obtained from an inverse of \(f_{0}\) by solving a triangular system of linear equations.
Denote by \(\mathrm{tC}_{R}\) the category of twisted complexes. The following construction endows \(\mathrm{tC}_{R}\) with a symmetric monoidal structure, see [1, Lemma 3.3] for a proof.
**Lemma 2.16**.: _The category \((\mathrm{tC}_{R},\otimes,R)\) is symmetric monoidal, where the monoidal structure is given by the bifunctor_
\[\otimes:\mathrm{tC}_{R}\times\mathrm{tC}_{R}\to\mathrm{tC}_{R}.\]
_On objects it is given by \(((A,d_{m}^{A}),(B,d_{m}^{B}))\to(A\otimes B,d_{m}^{A}\otimes 1+1\otimes d_{m}^{ B})\) and on morphisms it is given by \((f,g)\to f\otimes g\), where \((f\otimes g)_{m}:=\sum_{i+j=m}f_{i}\otimes g_{j}\). In particular, by the Koszul sign rule we have that_
\[(f_{i}\otimes g_{j})(x\otimes z)=(-1)^{\langle g_{j},x\rangle}f_{i}(x)\otimes g _{j}(z).\]
_The symmetry isomorphism is given by the strict morphism of twisted complexes_
\[\tau_{A\otimes B}\colon A\otimes B\to B\otimes A,\ x\otimes y\mapsto(-1)^{ \langle x,y\rangle}y\otimes x.\]
The internal hom on bigraded modules can be extended to twisted complexes via the following lemma whose proof is in [1, Lemma 3.4].
**Lemma 2.17**.: _Let \(A,B\) be twisted complexes. For \(f\in[A,B]_{u}^{v}\), setting_
\[(d_{i}f):=(-1)^{i(u+v)}d_{i}^{B}f-(-1)^{v}fd_{i}^{A}\]
_for \(i\geq 0\) endows \([A,B]_{*}^{*}\) with the structure of a twisted complex._
#### 2.2.3. Totalization
Here we recall the definition of the totalization functor from [1] and some of the structure that it comes with. This functor and its enriched versions are key to establish a correspondence between \(A_{\infty}\)-algebras and derived \(A_{\infty}\)-algebras.
**Definition 2.18**.: _The totalization \(\operatorname{Tot}(A)\) of a bigraded \(R\)-module \(A=\{A_{i}^{j}\}\) is the graded \(R\)-module given by_
\[\operatorname{Tot}(A)^{n}\coloneqq\bigoplus_{i<0}A_{i}^{n-i}\oplus\prod_{i \geq 0}A_{i}^{n-i}.\]
_The column filtration of \(\operatorname{Tot}(A)\) is the filtration given by_
\[F_{p}\operatorname{Tot}(A)^{n}\coloneqq\prod_{i\geq p}A_{i}^{n-i}.\]
Given a twisted complex \((A,d_{m})\), define a map \(d:\operatorname{Tot}(A)\to\operatorname{Tot}(A)\) of degree \(1\) by letting
\[d(x)_{j}\coloneqq\sum_{m\geq 0}(-1)^{mn}d_{m}(x_{j-m})\]
for \(x=(x_{i})_{i\in\mathbb{Z}}\in\operatorname{Tot}(A)^{n}\). Here \(x_{i}\in A_{i}^{n-i}\) denotes the \(i\)-th component of \(x\), and \(d(x)_{j}\) denotes the \(j\)-th component of \(d(x)\). Note that, for a given \(j\in\mathbb{Z}\) there is a sufficiently large \(m\geq 0\) such that \(x_{j-m^{\prime}}=0\) for all \(m^{\prime}\geq m\). Hence \(d(x)_{j}\) is given by a finite sum. Also, for negative \(j\) sufficiently large, one has \(x_{j-m}=0\) for all \(m\geq 0\), which implies \(d(x)_{j}=0\).
Given a morphism \(f:(A,d_{m})\to(B,d_{m})\) of twisted complexes, let the _totalization of \(f\)_ be the map \(\operatorname{Tot}(f):\operatorname{Tot}(A)\to\operatorname{Tot}(B)\) of degree \(0\) defined by
\[(\operatorname{Tot}(f)(x))_{j}\coloneqq\sum_{m\geq 0}(-1)^{mn}f_{m}(x_{j-m})\]
for \(x=(x_{i})_{i\in\mathbb{Z}}\in\operatorname{Tot}(A)^{n}\).
The following is [1, Theorem 3.8].
**Theorem 2.19**.: _The assignments \((A,d_{m})\mapsto(\operatorname{Tot}(A),d,F)\), where \(F\) is the column filtration of \(\operatorname{Tot}(A)\), and \(f\mapsto\operatorname{Tot}(f)\) define a functor \(\operatorname{Tot}:\operatorname{tC}_{R}\to\operatorname{fC}_{R}\) which is an isomorphism of categories when restricted to its image._
For a filtered complex of the form \((\operatorname{Tot}(A),d,F)\) where \(A=\{A_{i}^{j}\}\) is a bigraded \(R\)-module, we can recover the twisted complex structure on \(A\) as follows. For all \(m\geq 0\), let \(d_{m}:A\to A\) be the morphism of bidegree \((m,1-m)\) defined by
\[d_{m}(x)=(-1)^{nm}d(x)_{i+m},\]
where \(x\in A_{i}^{n-i}\) and \(d(x)_{k}\) denotes the \(k\)-th component of \(d(x)\). Note that \(d(x)_{k}\) lies in \(A_{k}^{n+1-k}\).
We will consider the following bounded categories since the totalization functor has better properties when restricted to them.
**Definition 2.20**.: _We let \(\operatorname{tC}_{R}^{b}\), \(\operatorname{vbC}_{R}^{b}\) and \(\operatorname{bgMod}_{R}^{b}\) be the full subcategories of twisted complexes, vertical bicomplexes and bigraded modules, respectively, that are horizontally bounded on the right. This means that if \(A=\{A_{i}^{j}\}\) is an object of any of these categories, then there exists \(i\) such that \(A_{i^{\prime}}^{j}=0\) for \(i^{\prime}>i\)._
_We let \(\operatorname{fMod}_{R}^{b}\) and \(\operatorname{fC}_{R}^{b}\) be the full subcategories of bounded filtered modules, respectively complexes, i.e. the full subcategories of objects \((K,F)\) such that there exists some \(p\) with the property that \(F_{p^{\prime}}K^{n}=0\) for all \(p^{\prime}>p\). We refer to all of these as the bounded subcategories of \(\operatorname{tC}_{R}\), \(\operatorname{vbC}_{R}\), \(\operatorname{bgMod}_{R}\), \(\operatorname{fMod}_{R}\) and \(\operatorname{fC}_{R}\) respectively._
The following is [1, Proposition 3.11].
**Proposition 2.21**.: _The functors \(\operatorname{Tot}:\operatorname{bgMod}_{R}\to\operatorname{fMod}_{R}\) and \(\operatorname{Tot}:\operatorname{tC}_{R}\to\operatorname{fC}_{R}\) are lax symmetric monoidal with structure maps_
\[\epsilon:R\to\operatorname{Tot}(R)\text{ and }\mu=\mu_{A,B}:\operatorname{ Tot}(A)\otimes\operatorname{Tot}(B)\to\operatorname{Tot}(A\otimes B)\]
_given by \(\epsilon=1_{R}\). For \(x=(x_{i})_{i}\in\operatorname{Tot}(A)^{n_{1}}\) and \(y=(y_{j})_{j}\in\operatorname{Tot}(B)^{n_{2}}\),_
\[\mu(x\otimes y)_{k}\coloneqq\sum_{k_{1}+k_{2}=k}(-1)^{k_{1}n_{2}}x_{k_{1}} \otimes y_{k_{2}}. \tag{2}\]
_When restricted to the bounded case, \(\operatorname{Tot}:\operatorname{bgMod}_{R}^{b}\to\operatorname{fMod}_{R}^{b}\) and \(\operatorname{Tot}:\operatorname{tC}_{R}^{b}\to\operatorname{fC}_{R}^{b}\) are strong symmetric monoidal functors._
_Remark 2.22_.: There is a certain heuristic to obtain the sign appearing in the definition of \(\mu\) in Proposition 2.21. In the bounded case, we can write
\[\operatorname{Tot}(A)=\bigoplus_{i}A_{i}^{n-i}.\]
As direct sums commute with tensor products, we have
\[\operatorname{Tot}(A)\otimes\operatorname{Tot}(B)=(\bigoplus A_{i}^{n-i})\otimes \operatorname{Tot}(B)\cong\bigoplus_{i}(A_{i}^{n-i}\otimes\operatorname{Tot}(B)).\]
In the isomorphism we can think of each \(A_{i}^{n-i}\) as moving past \(\operatorname{Tot}(B)\). Since \(\operatorname{Tot}(B)\) uses the total grading, we can regard this degree as a horizontal degree with vanishing vertical degree. Thus, using the Koszul sign rule we would get precisely the sign from Proposition 2.21. This explanation is just an intuition, and it opens the door for other possible sign choices: what if we decide to distribute \(\operatorname{Tot}(A)\) over \(\bigoplus_{i}B_{i}^{n-i}\) instead, or if we consider the total degree as the vertical degree? These alternatives lead to other valid definitions of \(\mu\), and we will explore the consequences of some of them in Remark 7.7.
**Lemma 2.23**.: _In the conditions of Proposition 2.21 for the bounded case, the inverse_
\[\mu^{-1}:\operatorname{Tot}(A_{(1)}\otimes\cdots\otimes A_{(m)})\to \operatorname{Tot}(A_{(1)})\otimes\cdots\otimes\operatorname{Tot}(A_{(m)})\]
_is given on pure tensors (for notational convenience) as_
\[\mu^{-1}(x_{(1)}\otimes\cdots\otimes x_{(m)})=(-1)^{\sum_{j=2}^{m}n_{j}\sum_{ i=1}^{j-1}k_{i}}x_{(1)}\otimes\cdots\otimes x_{(m)}, \tag{3}\]
_where \(x_{(l)}\in(A_{(m)})_{k_{l}}^{n_{l}-k_{l}}\)._
Proof.: For the case \(m=2\),
\[\mu^{-1}:\operatorname{Tot}(A\otimes B)\to\operatorname{Tot}(A)\otimes \operatorname{Tot}(B)\]
is computed explicitly as follows. Let \(c\in\operatorname{Tot}(A\otimes B)^{n}\). By definition, we have
\[\operatorname{Tot}(A\otimes B)^{n}=\bigoplus_{k}(A\otimes B)_{k}^{n-k}= \bigoplus_{k}\bigoplus_{\begin{subarray}{c}k_{1}+k_{2}=k\\ n_{1}+n_{2}=n\end{subarray}}A_{k_{1}}^{n_{1}-k_{1}}\otimes B_{k_{2}}^{n_{2}- k_{2}}.\]
And thus, \(c=(c_{k})_{k}\) may be written as a finite sum \(c=\sum_{k}c_{k}\), where
\[c_{k}=\sum_{\begin{subarray}{c}k_{1}+k_{2}=k\\ n_{1}+n_{2}=n\end{subarray}}x_{k_{1}}^{n_{1}-k_{1}}\otimes y_{k_{2}}^{n_{2}- k_{2}}.\]
Here, we introduced superscripts to indicate the vertical degree, which, unlike in the definition of \(\mu\) (Equation (2)), is not solely determined by the horizontal degree since the total degree also varies. However we are going to omit them in what follows for simplicity of notation. Distributivity allows us to rewrite \(c\) as
\[c=\sum_{k}\bigoplus_{\begin{subarray}{c}k_{1}+k_{2}=k\\ n_{1}+n_{2}=n\end{subarray}}x_{k_{1}}\otimes y_{k_{2}}=\sum_{n_{1}+n_{2}=n} \sum_{k_{1}}\sum_{k_{2}}(x_{k_{1}}\otimes y_{k_{2}})=\sum_{n_{1}+n_{2}=n} \left(\sum_{k_{1}}x_{k_{1}}\right)\otimes\left(\sum_{k_{2}}y_{k_{2}}\right).\]
Therefore, \(\mu^{-1}\) can be defined as
\[\mu^{-1}(c)=\sum_{n_{1}+n_{2}=n}\left(\sum_{k_{1}}(-1)^{k_{1}n_{2}}x_{k_{1}} \right)\otimes\left(\sum_{k_{2}}y_{k_{2}}\right).\]
The general case follows inductively.
#### 2.2.4. Enriched categories and enriched totalization
Monoidal categories over a base. We collect some notions and results about enriched categories from [11] and [12, §4.2] that we will need as a categorical setting for our results on derived \(A_{\infty}\)-algebras.
**Definition 2.24**.: _Let \((\mathscr{V},\otimes,1)\) be a symmetric monoidal category and let \((\mathcal{C},\otimes,1)\) be a monoidal category. We say that \(\mathcal{C}\) is a monoidal category over \(\mathscr{V}\) if we have an external tensor product \(*:\mathscr{V}\times\mathcal{C}\to\mathcal{C}\) such that we have the following natural isomorphisms._
* \(1*X\cong X\) _for all_ \(X\in\mathcal{C}\)_,_
* \((C\otimes D)*X\cong C*(D*X)\) _for all_ \(C,D\in\mathscr{V}\) _and_ \(X\in\mathcal{C}\)_,_
* \(C*(X\otimes Y)\cong(C*X)\otimes Y\cong X\otimes(C*Y)\) _for all_ \(C\in\mathscr{V}\) _and_ \(X,Y\in\mathcal{C}\)_._
_Remark 2.25_.: We will also assume that there is a bifunctor \(\underline{\mathscr{C}}(-,-):\mathcal{C}^{op}\times\mathcal{C}\to\mathscr{V}\) such that we have natural bijections
\[\operatorname{Hom}_{\mathcal{C}}(C*X,Y)\cong\operatorname{Hom}_{\mathscr{V}}( C,\underline{\mathscr{C}}(X,Y)).\]
Under this assumption we get a \(\mathscr{V}\)-enriched category \(\underline{\mathscr{C}}\) with the same objects as \(\mathcal{C}\) and with hom-objects given by \(\underline{\mathscr{C}}(-,-)\). The unit morphism \(u_{A}:1\to\underline{\mathscr{C}}(A,A)\) corresponds to the identity map in \(\mathcal{C}\) under the adjunction, and the composition morphism is given by the adjoint of the composite
\[(\underline{\mathscr{C}}(B,C)\otimes\underline{\mathscr{C}}(A,B))*A\cong \underline{\mathscr{C}}(B,C)*(\underline{\mathscr{C}}(A,B)*A)\xrightarrow{ id*ev_{AB}}\underline{\mathscr{C}}(B,C)*B\xrightarrow{ev_{BC}}C,\]
where \(ev_{AB}\) is the adjoint of the identity \(\underline{\mathscr{C}}(A,B)\to\underline{\mathscr{C}}(A,B)\). Furthermore, \(\underline{\mathscr{C}}\) is a monoidal \(\mathscr{V}\)-enriched category, namely we have an enriched functor
\[\underline{\otimes}:\underline{\mathscr{C}}\times\underline{\mathscr{C}}\to \underline{\mathscr{C}}\]
where \(\underline{\mathscr{C}}\times\underline{\mathscr{C}}\) is the enriched category with objects \(\operatorname{Ob}(\mathcal{C})\times\operatorname{Ob}(\mathcal{C})\) and hom-objects
\[\underline{\mathscr{C}}\times\underline{\mathscr{C}}((X,Y),(W,Z))\coloneqq \underline{\mathscr{C}}(X,W)\otimes\underline{\mathscr{C}}(Y,Z).\]
In particular we get maps in \(\mathscr{V}\)
\[\underline{\mathscr{C}}(X,W)\otimes\underline{\mathscr{C}}(Y,Z)\to\underline {\mathscr{C}}(X\otimes Y,W\otimes Z),\]
given by the adjoint of the composite
\[(\underline{\mathscr{C}}(X,W)\otimes\underline{\mathscr{C}}(Y,Z))*(X\otimes Y )\cong(\underline{\mathscr{C}}(X,W)*X)\otimes(\underline{\mathscr{C}}(Y,Z)* Y)\xrightarrow{ev_{XW}\otimes ev_{YZ}}W\otimes Z.\]
**Definition 2.26**.: _Let \(\mathcal{C}\) and \(\mathcal{D}\) be monoidal categories over \(\mathscr{V}\). A lax functor over \(\mathscr{V}\) consists of a functor \(F:\mathcal{C}\to\mathcal{D}\) together with a natural transformation_
\[\nu_{F}:-\ast_{\mathcal{D}}F(-)\Rightarrow F(-\ast_{\mathcal{C}}-)\]
_which is associative and unital with respect to the monoidal structures over \(\mathscr{V}\) of \(\mathcal{C}\) and \(\mathcal{D}\). See [14, Proposition 10.1.5] for explicit diagrams stating the coherence axioms. If \(\nu_{F}\) is a natural isomorphism we say \(F\) is a functor over \(\mathscr{V}\). Let \(F,G:\mathcal{C}\to\mathcal{D}\) be lax functors over \(\mathscr{V}\). A natural transformation over \(\mathscr{V}\) is a natural transformation \(\mu:F\Rightarrow G\) such that for any \(C\in\mathscr{V}\) and for any \(X\in\mathcal{C}\) we have_
\[\nu_{G}\circ(1\ast_{\mathcal{D}}\mu_{X})=\mu_{C\ast_{\mathcal{C}}X}\circ\nu_{F}.\]
\(A\) lax monoidal functor over \(\mathscr{V}\) is a triple \((F,\epsilon,\mu)\), where \(F:\mathcal{C}\to\mathcal{D}\) is a lax functor over \(\mathscr{V}\), \(\epsilon:1_{\mathcal{D}}\to F(1_{\mathcal{C}})\) is a morphism in \(\mathcal{D}\) and
\[\mu:F(-)\otimes F(-)\Rightarrow F(-\otimes-)\]
is a natural transformation over \(\mathscr{V}\) satisfying the standard unit and associativity conditions. If \(\nu_{F}\) and \(\mu\) are natural isomorphisms then we say that \(F\) is monoidal over \(\mathscr{V}\).
The following is [18, Proposition 4.11].
**Proposition 2.27**.: _Let \(F,G:\mathcal{C}\to\mathcal{D}\) be lax functors over \(\mathscr{V}\). Then \(F\) and \(G\) extend to \(\mathscr{V}\)-enriched functors_
\[\underline{F},\underline{G}:\underline{\mathscr{C}}\to\underline{\mathscr{Q}}\]
_where \(\underline{\mathscr{C}}\) and \(\underline{\mathscr{Q}}\) denote the \(\mathscr{V}\)-enriched categories corresponding to \(\mathcal{C}\) and \(\mathcal{D}\) as described in Remark 2.25. Moreover, any natural transformation \(\mu:F\Rightarrow G\) over \(\mathscr{V}\) also extends to a \(\mathscr{V}\)-enriched natural transformation_
\[\underline{\mu}:\underline{F}\Rightarrow\underline{G}.\]
_In particular, if \(F\) is lax monoidal over \(\mathscr{V}\), then \(\underline{F}\) is lax monoidal in the enriched sense, where the monoidal structure on \(\underline{\mathscr{C}}\times\underline{\mathscr{C}}\) is described in Remark 2.25._
**Lemma 2.28**.: _Let \(F,G:\mathcal{C}\to\mathcal{D}\) lax functors over \(\mathscr{V}\) and let \(\mu:F\Rightarrow G\) a natural transformation over \(\mathscr{V}\). For every \(X\in\mathcal{C}\) and \(Y\in\mathcal{D}\) there is a map_
\[\underline{\mathscr{Q}}(GX,Y)\to\underline{\mathscr{Q}}(FX,Y)\]
_that is an isomorphism if \(\mu\) is an isomorphism._
Proof.: By Proposition 2.27 there is a \(\mathscr{V}\)-enriched natural transformation
\[\underline{\mu}:\underline{F}\to\underline{G}\]
that at each object \(X\) evaluates to
\[\underline{\mu}_{X}:1\to\underline{\mathscr{Q}}(FX,GX)\]
defined to be the adjoint of \(\mu_{X}:FX\to GX\). The map \(\underline{\mathscr{Q}}(GX,Y)\to\underline{\mathscr{Q}}(FX,Y)\) is defined as the composite
\[\underline{\mathscr{Q}}(GX,Y)\cong\underline{\mathscr{Q}}(GX,Y)\otimes 1 \xrightarrow{\ 1\otimes\underline{\mu}_{X}\ }\underline{\mathscr{Q}}(GX,Y)\otimes\underline{\mathscr{Q}}(FX,GX) \xrightarrow{c}\underline{\mathscr{Q}}(FX,Y), \tag{4}\]
where \(c\) is the composition map in the enriched setting.
When \(\mu\) is an isomorphism we may analogously define the following map
\[\underline{\mathscr{Q}}(FX,Y)\cong\underline{\mathscr{Q}}(FX,Y)\otimes 1 \xrightarrow{\ 1\otimes\underline{\mu}_{X}^{-1}\ }\underline{\mathscr{Q}}(FX,Y)\otimes\underline{\mathscr{Q}}(GX,FX) \xrightarrow{c}\underline{\mathscr{Q}}(GX,Y). \tag{5}\]
We show that this map is the inverse of the map in Equation (4).
(6)
In the above diagram (6), \(\alpha_{X}\) is adjoint to \(1_{GX}:GX\to GX\). Diagrams (1) and (2) clearly commute. Diagram (3) commutes by associativity of \(c\). Diagram (4) commutes because \(\underline{\mu}_{X}^{-1}\) and \(\underline{\mu}_{X}\) are adjoint to mutual inverses, so their composition results in the adjoint of the identity. Finally, diagram (5) commutes because we are composing with an isomorphism. In particular, diagram (5) is a decomposition of the identity map on \(\underline{\mathscr{Q}}(GX,Y)\). By commutativity, this means that the overall diagram composes to the identity, showing that the maps (4) and (5) are mutually inverse.
The following is [1, Lemma 4.15].
**Lemma 2.29**.: _The category \(\operatorname{fC}_{R}\) is monoidal over \(\operatorname{vbC}_{R}\). By restriction, \(\operatorname{fMod}_{R}\) is monoidal over \(\operatorname{bgMod}_{R}\)._
Enriched categories and totalization.: Here, we recall some useful enriched categories and results from [1, §§4.3 and 4.4]. Some of them have been slightly modified to fit our conventions.
**Definition 2.30**.: _Let \(A,B,C\) be bigraded modules. We denote by \(\underline{\operatorname{bgMod}}_{R}(A,B)\) the bigraded module of \(R\)-linear maps from \(A\) to \(B\), graded by bidegree, and by_
\[c:\underline{\operatorname{bgMod}}_{R}(B,C)\otimes\underline{\operatorname{bgMod}}_{R}(A,B)\to\underline{\operatorname{bgMod}}_{R}(A,C)\]
_the corresponding composition morphism of hom-objects._
3. _The composition morphism_ \(c:\underline{\operatorname{bgMod}}_{R}(B,C)\otimes\underline{\operatorname{bgMod}}_{R}(A,B)\rightarrow\underline{\operatorname{bgMod}}_{R}(A,C)\) _is given by Definition_ 2.30_._
4. _The unit morphism_ \(R\rightarrow\underline{\operatorname{bgMod}}_{R}(A,A)\) _is given by the morphism of bigraded modules that sends_ \(1\in R\) _to_ \(1_{A}:A\to A\)_, the strict morphism given by the identity of_ \(A\)_._
**Definition 2.35**.: _The \(\mathrm{vbC}_{R}\)-enriched category of twisted complexes \(\underline{tC}_{R}\) is the enriched category given by the following data._
1. _The objects of_ \(\underline{tC}_{R}\) _are twisted complexes._
2. _For_ \(A,B\) _twisted complexes the hom-object is the vertical bicomplex_ \(\underline{tC}_{R}(A,B)\)_._
3. _The composition morphism_ \(c:\underline{tC}_{R}(B,C)\otimes\underline{tC}_{R}(A,B)\rightarrow\underline{ tC}_{R}(A,C)\) _is given by Definition_ 2.30_._
4. _The unit morphism_ \(R\rightarrow\underline{tC}_{R}(A,A)\) _is given by the morphism of vertical bicomplexes sending_ \(1\in R\) _to_ \(1_{A}:A\to A\)_, the strict morphism of twisted complexes given by the identity of_ \(A\)_._
The following lemma describes the tensor product corresponding to \(\underline{\otimes}\) in the categorical setting of Remark 2.25; see [1, Lemma 4.27].
**Lemma 2.36**.: _The monoidal structure of \(\underline{tC}_{R}\) is given by the following map of vertical bicomplexes._
\[\begin{split}\underline{\otimes}:\underline{tC}_{R}(A,B)\otimes \underline{tC}_{R}(A^{\prime},B^{\prime})\rightarrow\underline{tC}_{R}(A \otimes A^{\prime},B\otimes B^{\prime})\\ (f,g)\rightarrow(f\underline{\otimes}g)_{m}:=\sum_{i+j=m}(-1)^{ ij}f_{i}\otimes g_{j}\end{split}\]
_The monoidal structure of \(\underline{\operatorname{bgMod}}_{R}\) is given by the restriction of this map._
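For instance, the first few components of \(f\underline{\otimes}g\) are
\[(f\underline{\otimes}g)_{0}=f_{0}\otimes g_{0},\qquad(f\underline{\otimes}g)_{1}=f_{1}\otimes g_{0}+f_{0}\otimes g_{1},\qquad(f\underline{\otimes}g)_{2}=f_{2}\otimes g_{0}-f_{1}\otimes g_{1}+f_{0}\otimes g_{2},\]
the sign \((-1)^{ij}\) only becoming visible from the component \(f_{1}\otimes g_{1}\) onwards.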
**Definition 2.37**.: _The \(\mathrm{bgMod}_{R}\)-enriched category of filtered modules \(\underline{\operatorname{fMod}}_{R}\) is the enriched category given by the following data._
1. _The objects of_ \(\underline{\mbox{\sf fMod}}_{R}\) _are filtered modules._
2. _For filtered modules_ \((K,F)\) _and_ \((L,F)\)_, the bigraded module_ \(\underline{\mbox{\sf fMod}}_{R}(K,L)\) _is given by_ \[\underline{\mbox{\sf fMod}}_{R}(K,L)_{u}^{v}:=\{f:K\to L\mid f(F_{q}K^{m}) \subset F_{q+u}L^{m+u+v},\forall m,q\in\mathbb{Z}\}.\]
3. _The composition morphism is given by_ \(c(f,g)=(-1)^{u|g|}fg\)_, where_ \(f\) _has bidegree_ \((u,v)\)_._
4. _The unit morphism is given by the map_ \(R\rightarrow\underline{\mbox{\sf fMod}}_{R}(K,K)\) _given by_ \(1\to 1_{K}\)_._
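For example, an element of \(\underline{\operatorname{fMod}}_{R}(K,L)_{0}^{0}\) is precisely a filtration-preserving map \(K\to L\) of degree \(0\), i.e. an ordinary morphism of filtered modules, while an element of bidegree \((u,v)\) shifts the filtration by \(u\) and the total degree by \(u+v\).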
**Definition 2.38**.: _Let \((K,d^{K},F)\) and \((L,d^{L},F)\) be filtered complexes. We define \(\underline{\mbox{\sf fC}}_{R}(K,L)\) to be the vertical bicomplex whose underlying bigraded module is \(\underline{\mbox{\sf fMod}}_{R}(K,L)\) with vertical differential_
\[\delta(f):=c(d^{L},f)-(-1)^{\langle f,d^{K}\rangle}c(f,d^{K})=d^{L}f-(-1)^{v+u }fd^{K}=d^{L}f-(-1)^{|f|}fd^{K}\]
_for \(f\in\underline{\operatorname{fMod}}_{R}(K,L)_{u}^{v}\), where \(c\) is the composition map from Definition 2.37._
**Definition 2.39**.: _The \(\operatorname{vbC}_{R}\)-enriched category of filtered complexes \(\underline{\text{fC}}_{\underline{R}}\) is the enriched category given by the following data._
1. _The objects of_ \(\underline{\text{fC}}_{\underline{R}}\) _are filtered complexes._
2. _For_ \(K,L\) _filtered complexes the hom-object is the vertical bicomplex_ \(\underline{\text{fC}}_{\underline{R}}(K,L)\)_._
3. _The composition morphism is given as in_ \(\underline{\text{fMod}}_{\underline{R}}\) _in Definition_ 2.37_._
4. _The unit morphism is given by the map_ \(R\to\underline{\text{fC}}_{\underline{R}}(K,K)\) _given by_ \(1\to 1_{K}\)_. We denote by_ \(\underline{\text{sfC}}_{\underline{R}}\) _the full subcategory of_ \(\underline{\text{fC}}_{\underline{R}}\) _whose objects are split filtered complexes._
The enriched monoidal structure is given as follows and can be found in [11, Lemma 4.36].
**Definition 2.40**.: _The monoidal structure of \(\underline{\text{fC}}_{\underline{R}}\) is given by the following map of vertical bicomplexes._
\[\underline{\otimes}:\underline{\text{fC}}_{\underline{R}}(K,L) \otimes\underline{\text{fC}}_{\underline{R}}(K^{\prime},L^{\prime})\to \underline{\text{fC}}_{\underline{R}}(K\otimes K^{\prime},L\otimes L^{\prime}),\] \[(f,g)\mapsto f\underline{\otimes}g:=(-1)^{u|g|}f\otimes g\]
_Here \(u\) is the horizontal degree of \(f\)._
The proof of the following lemma is included in the proof of [11, Lemma 4.35].
**Lemma 2.41**.: _Let \(A\) be a vertical bicomplex that is horizontally bounded on the right and let \(K\) and \(L\) be filtered complexes. There is a natural bijection_
\[\operatorname{Hom}_{\text{fC}_{R}}(\operatorname{Tot}(A)\otimes K,L)\cong \operatorname{Hom}_{\operatorname{vbC}_{R}}(A,\underline{\text{fC}}_{ \underline{R}}(K,L))\]
_given by \(f\mapsto\tilde{f}:a\mapsto(k\mapsto f(a\otimes k))\)._
We now define an enriched version of the totalization functor.
**Definition 2.42**.: _Let \(A,B\) be bigraded modules and let \(f\in\underline{\text{bgMod}}_{R}(A,B)_{u}^{v}\). We define_
\[\operatorname{Tot}(f)\in\underline{\text{fMod}}_{\underline{R}}(\operatorname {Tot}(A),\operatorname{Tot}(B))_{u}^{v}\]
_to be given on any \(x\in\operatorname{Tot}(A)^{n}\) by_
\[(\operatorname{Tot}(f)(x))_{j+u}:=\sum_{m\geq 0}(-1)^{(m+u)n}f_{m}(x_{j-m}) \in B_{j+u}^{n-j+v}\subset\operatorname{Tot}(B)^{n+u+v}.\]
_Let \(K=\operatorname{Tot}(A)\), \(L=\operatorname{Tot}(B)\) and \(g\in\underline{\text{fMod}}_{\underline{R}}(K,L)_{u}^{v}\). We define_
\[f:=\operatorname{Tot}^{-1}(g)\in\underline{\text{bgMod}}_{\underline{R}}(A,B )_{u}^{v}\]
_to be \(f:=(f_{0},f_{1},\dots)\) where \(f_{i}\) is given on each \(A_{j}^{m-j}\) by the composite_
\[f_{i}:A_{j}^{m-j}\hookrightarrow\prod_{k\geq j}A_{k}^{m-k} =F_{j}(\operatorname{Tot}(A)^{m})\xrightarrow{g}F_{j+u}( \operatorname{Tot}(B)^{m+u+v})\] \[=\prod_{l\geq j+u}B_{l}^{m+u+v-l}\xrightarrow{\times(-1)^{(i+u)m} }B_{j+u+i}^{m-j+v-i},\]
_where the last map is a projection and multiplication with the indicated sign._
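For example, if \(f=(f_{0},0,0,\dots)\) is a strict morphism of bidegree \((0,0)\), the first formula reduces to \((\operatorname{Tot}(f)(x))_{j}=f_{0}(x_{j})\), the usual map induced componentwise on totalizations, and the second formula recovers \(f\) from \(\operatorname{Tot}(f)\).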
The following is [1, Theorem 4.39].
**Theorem 2.43**.: _Let \(A\), \(B\) be twisted complexes. The assignments \(\mathfrak{Ind}(A):=\operatorname{Tot}(A)\) and_
\[\mathfrak{Ind}_{A,B}:\underline{t}\mathcal{C}_{R}(A,B)\to\underline{f} \mathcal{C}_{R}(\operatorname{Tot}(A),\operatorname{Tot}(B)),\,f\mapsto \operatorname{Tot}(f)\]
_define a \(\operatorname{vbC}_{R}\)-enriched functor \(\mathfrak{Ind}:\underline{t}\mathcal{C}_{R}\to\underline{f}\mathcal{C}_{R}\) which restricts to an isomorphism onto its image. Furthermore, this functor restricts to a \(\operatorname{bgMod}_{R}\)-enriched functor_
\[\mathfrak{Ind}:\underline{\operatorname{bgMod}_{R}}\to\underline{f} \underline{\operatorname{Mod}_{R}}\]
_which also restricts to an isomorphism onto its image._
We now define an enriched endomorphism operad.
**Definition 2.44**.: _Let \(\underline{\mathscr{C}}\) be a monoidal \(\mathscr{V}\)-enriched category and \(A\) an object of \(\underline{\mathscr{C}}\). We define \(\underline{\operatorname{End}}_{A}\) to be the collection in \(\mathscr{V}\) given by_
\[\underline{\operatorname{End}}_{A}(n):=\underline{\mathscr{C}}(A^{\otimes n},A) \text{ for }n\geq 1.\]
The following contains several results from [1].
**Proposition 2.45**.:
* _The enriched functors_ \[\mathfrak{Ind}:\underline{\operatorname{bgMod}_{R}}\to\underline{f} \underline{\operatorname{Mod}_{R}},\hskip 28.452756pt\mathfrak{Ind}:\underline{t} \mathcal{C}_{R}\to\underline{f}\mathcal{C}_{R}\] _are lax symmetric monoidal in the enriched sense and when restricted to the bounded case they are strong symmetric monoidal in the enriched sense._
* _For_ \(A\in\underline{\mathscr{C}}\)_, the collection_ \(\underline{\operatorname{End}}_{A}\) _defines an operad in_ \(\mathscr{V}\)_._
* _Let_ \(\mathcal{C}\) _and_ \(\mathcal{D}\) _be monoidal categories over_ \(\mathscr{V}\)_. Let_ \(F:\mathcal{C}\to\mathcal{D}\) _be a lax monoidal functor over_ \(\mathscr{V}\)_. Then for any_ \(X\in\mathcal{C}\) _there is an operad morphism_ \[\underline{\operatorname{End}}_{X}\to\underline{\operatorname{End}}_{F(X)}.\]
**Lemma 2.46**.: _Let \(A\) be a twisted complex. Consider \(\underline{\operatorname{End}}_{A}(n)=\underline{t}\mathcal{C}_{R}(A^{\otimes n },A)\) and \(\underline{\operatorname{End}}_{\operatorname{Tot}(A)}(n)=\underline{f} \mathcal{C}_{R}(\operatorname{Tot}(A)^{\otimes n},\operatorname{Tot}(A))\). There is a morphism of operads_
\[\underline{\operatorname{End}}_{A}\to\underline{\operatorname{End}}_{ \operatorname{Tot}(A)},\]
which is an isomorphism of operads if \(A\) is bounded. The same holds true if \(A\) is just a bigraded module; in that case we use the enriched operads built from the hom-objects of \(\underline{\operatorname{bgMod}}_{R}\) and \(\underline{\operatorname{fMod}}_{R}\) instead.
Putting all this together, we get the map
\[\underline{\mathcal{E}nd}_{\operatorname{Tot}(A)}\to\underline{\mathcal{E}nd}_{A} \text{, }f\mapsto\operatorname{Tot}^{-1}(c(f,\mu^{-1}))\text{.}\]
Since the total degree of \(\mu^{-1}\) is \(0\), composition reduces to \(c(f,\mu^{-1})=f\circ\mu^{-1}\) and we get the desired map.
## 3. Operadic suspension
In this section we define an operadic suspension, which is a slight modification of the one found in [10]. This construction will help us define \(A_{\infty}\)-multiplications in a simple way. The motivation to introduce operadic suspension is that signs in \(A_{\infty}\)-algebras and related Lie structures are known to arise from a sequence of shifts. In order to discuss derived structures later, we need to pin this down more generally and rigorously. We are going to work only with non-symmetric operads, although most of what we do is also valid in the symmetric case.
Let \(\Lambda(n)=S^{n-1}R\), where \(S\) is the shift of graded modules, so that \(\Lambda(n)\) is the ring \(R\) concentrated in degree \(n-1\). This module can be realized as the free \(R\)-module of rank one spanned by the exterior power \(e^{n}=e_{1}\wedge\dots\wedge e_{n}\) of degree \(n-1\), where \(e_{i}\) is the \(i\)-th element of the canonical basis of \(R^{n}\). By convention, \(\Lambda(0)\) is one-dimensional concentrated in degree \(-1\) and generated by \(e^{0}\).
Let us define an operad structure on \(\Lambda=\{\Lambda(n)\}_{n\geq 0}\) via the following insertion maps
\[e^{n}\circ_{i}e^{m}:=e_{1}\wedge\dots\wedge e_{i-1}\wedge e^{m}\wedge e_{i+1}\wedge\dots\wedge e_{n}\in\Lambda(n+m-1). \tag{7}\]
We are inserting the second factor onto the first one, so the sign can be explained by moving the power \(e^{m}\) of degree \(m-1\) to the \(i\)-th position of \(e^{n}\) passing by \(e_{n}\) through \(e_{i+1}\). More compactly,
\[e^{n}\circ_{i}e^{m}=(-1)^{(n-i)(m-1)}e^{n+m-1}\text{.}\]
The unit of this operad is \(e^{1}\in\Lambda(1)\). It can be checked by direct computation that \(\Lambda\) satisfies the axioms of an operad of graded modules.
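For instance, \(e^{2}\circ_{1}e^{2}=(-1)^{(2-1)(2-1)}e^{3}=-e^{3}\), while \(e^{2}\circ_{2}e^{2}=(-1)^{(2-2)(2-1)}e^{3}=e^{3}\), illustrating how the sign depends on the position of the insertion.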
In a similar way we can define \(\Lambda^{-}(n)=S^{1-n}R\), with the same insertion maps.
**Definition 3.1**.: _Let \(\mathcal{O}\) be an operad. The operadic suspension \(\mathfrak{s}\mathcal{O}\) of \(\mathcal{O}\) is given arity-wise by \(\mathfrak{s}\mathcal{O}(n)=\mathcal{O}(n)\otimes\Lambda(n)\) with diagonal composition. Similarly, we define the operadic desuspension arity-wise as \(\mathfrak{s}^{-1}\mathcal{O}(n)=\mathcal{O}(n)\otimes\Lambda^{-}(n)\)._
Even though the elements of \(\mathfrak{s}\mathcal{O}\) are tensor products of the form \(x\otimes e^{n}\), we may identify the elements of \(\mathcal{O}\) with the elements of \(\mathfrak{s}\mathcal{O}\) and simply write \(x\) as an abuse of notation.
**Definition 3.2**.: _For \(x\in\mathcal{O}(n)\) of degree \(\deg(x)\), its natural degree \(|x|\) is the degree of \(x\otimes e^{n}\) as an element of \(\mathfrak{s}\mathcal{O}\), namely, \(|x|=\deg(x)+n-1\). To distinguish both degrees we call \(\deg(x)\) the internal degree of \(x\), since this is the degree that \(x\) inherits from the grading of \(\mathcal{O}\)._
If we write \(\circ_{i}\) for the operadic insertion on \(\mathcal{O}\) and \(\tilde{\circ}_{i}\) for the operadic insertion on \(\mathfrak{s}\mathcal{O}\), we may find a relation between the two insertion maps in the following way.
**Lemma 3.3**.: _For \(x\in\mathcal{O}(n)\) and \(y\in\mathcal{O}(m)\) we have_
\[x\tilde{\circ_{i}}y=(-1)^{(n-1)(m-1)+(n-1)\deg(y)+(i-1)(m-1)}x \circ_{i}y. \tag{8}\]
Proof.: Let \(x\in\mathcal{O}(n)\) and \(y\in\mathcal{O}(m)\), and let us compute \(x\tilde{\circ}_{i}y\).
\[\mathfrak{s}\mathcal{O}(n)\otimes\mathfrak{s}\mathcal{O}(m) =(\mathcal{O}(n)\otimes\Lambda(n))\otimes(\mathcal{O}(m)\otimes \Lambda(m))\cong(\mathcal{O}(n)\otimes\mathcal{O}(m))\otimes(\Lambda(n) \otimes\Lambda(m))\] \[\xrightarrow{\circ_{i}\otimes\circ_{i}}\mathcal{O}(m+n-1)\otimes \Lambda(n+m-1)=\mathfrak{s}\mathcal{O}(n+m-1).\]
The symmetric monoidal structure produces the sign \((-1)^{(n-1)\deg(y)}\) in the isomorphism \(\Lambda(n)\otimes\mathcal{O}(m)\cong\mathcal{O}(m)\otimes\Lambda(n)\), and the operadic structure of \(\Lambda\) produces the sign \((-1)^{(n-i)(m-1)}\), so
\[x\tilde{\circ_{i}}y=(-1)^{(n-1)\deg(y)+(n-i)(m-1)}x\circ_{i}y.\]
Now we can rewrite the exponent using that, modulo \(2\),
\[(n-i)(m-1)=((n-1)-(i-1))(m-1)\equiv(n-1)(m-1)+(i-1)(m-1),\]
so we conclude
\[x\tilde{\circ_{i}}y=(-1)^{(n-1)(m-1)+(n-1)\deg(y)+(i-1)(m-1)}x \circ_{i}y.\]
_Remark 3.4_.: The sign from Lemma 3.3 is exactly the sign in [11] from which the sign in the equation defining \(A_{\infty}\)-algebras (eq. (1)) is derived. This means that if \(m_{s}\in\mathcal{O}(s)\) has degree \(2-s\) and \(m_{r+1+t}\in\mathcal{O}(r+1+t)\) has degree \(1-r-t\), abusing notation we get
\[m_{r+1+t}\tilde{\circ}_{r+1}m_{s}=(-1)^{rs+t}m_{r+1+t}\circ_{r+1}m_{s}.\]
Next, we are going to use the above fact to obtain a way to describe \(A_{\infty}\)-algebras in simplified operadic terms. We are also going to compare this description with a classical approach that is more general but requires heavier operadic machinery.
**Definition 3.5**.: _An operad \(\mathcal{O}\) has an \(A_{\infty}\)-multiplication if there is a map \(\mathcal{A}_{\infty}\to\mathcal{O}\) from the operad of \(A_{\infty}\)-algebras._
Therefore, we have the following.
**Lemma 3.6**.: _An \(A_{\infty}\)-multiplication on an operad \(\mathcal{O}\) is equivalent to an element \(m\in\mathfrak{s}\mathcal{O}\) of degree 1 concentrated in positive arity such that \(m\tilde{\circ}m=0\), where \(x\tilde{\circ}y=\sum_{i}x\tilde{\circ}_{i}y\)._
Proof.: By definition, an \(A_{\infty}\)-multiplication on \(\mathcal{O}\) corresponds to a map of operads
\[f:\mathcal{A}_{\infty}\to\mathcal{O}.\]
Such a map is determined by the images of the generators \(\mu_{i}\in\mathcal{A}_{\infty}(i)\) of degree \(2-i\). Whence, \(f\) is determined by \(m_{i}=f(\mu_{i})\in\mathcal{O}(i)\). Let \(m=m_{1}+m_{2}+\cdots\). Since
\[\deg(m_{i})=\deg(\mu_{i})=2-i,\]
we have that the image of \(m_{i}\) in \(\mathfrak{s}\mathcal{O}\) is of degree \(2-i+i-1=1\). Therefore, \(m\in\mathfrak{s}\mathcal{O}\) is homogeneous of degree 1. The fact that \(m\tilde{\circ}m=0\) follows from Remark 3.4 and \(f\) being a map of operads.
Conversely, if \(m\in\mathfrak{s}\mathcal{O}\) of degree 1 such that \(m\tilde{\circ}m=0\), let \(m_{i}\) be the component of \(m\) lying in arity \(i\). We have \(m=m_{1}+m_{2}+\cdots\). By the usual identification, \(m_{i}\) has degree \(1-i+1=2-i\) in \(\mathcal{O}\). Now we can use Equation (8) to conclude that \(m\tilde{\circ}m=0\) implies
\[\sum_{\begin{subarray}{c}n=r+s+t\\ r,t\geq 0,\ s\geq 1\end{subarray}}\,(-1)^{rs+t}m_{r+1+t}\circ_{r+1}m_{s}=0\]
for every \(n\geq 1\).
This shows that the elements \(m_{i}\) determine a map \(f:\mathcal{A}_{\infty}\to\mathcal{O}\) defined on generators by \(f(\mu_{i})=m_{i}\), as desired.
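In low arities, the relation \(m\tilde{\circ}m=0\) unpacks to the familiar identities \(m_{1}\circ_{1}m_{1}=0\) and \(m_{1}\circ_{1}m_{2}=m_{2}\circ_{1}m_{1}+m_{2}\circ_{2}m_{1}\); for \(\mathcal{O}=\operatorname{End}_{A}\) these say that \(m_{1}\) is a differential and that \(m_{2}\) is compatible with it, with the usual Koszul signs appearing once the maps are evaluated on elements.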
_Remark 3.7_.: An \(A_{\infty}\)-multiplication on the operad \(\operatorname{End}_{A}\) is equivalent to an \(A_{\infty}\)-algebra structure on \(A\).
Recall that the Koszul dual cooperad \(\mathcal{A}s^{\mathrm{i}}\) of the associative operad \(\mathcal{A}s\) is \(k\mu_{n}\) in arity \(n\), where \(\mu_{n}\) has degree \(n-1\) for \(n\geq 1\). Thus, for a graded module \(A\), we have the following operad isomorphisms, where the notation \((\geq 1)\) means that we are taking the reduced sub-operad with trivial arity 0 component.
\[\operatorname{Hom}(\mathcal{A}s^{\mathrm{i}},\operatorname{End}_{A})\cong \operatorname{End}_{S^{-1}A}(\geq 1)\cong\mathfrak{s}\operatorname{End}_{A}(\geq 1).\]
The first operad is the convolution operad, see [12, SS6.4.1], for which
\[\operatorname{Hom}(\mathcal{A}s^{\mathrm{i}},\operatorname{End}_{A})(n)= \operatorname{Hom}_{R}(\mathcal{A}s^{\mathrm{i}}(n),\operatorname{End}_{A}(n)).\]
Explicitly, for \(f\in\operatorname{End}_{A}(n)\) and \(g\in\operatorname{End}_{A}(m)\), the convolution product is given by
\[f\star g=\sum_{i=1}^{n}(-1)^{(n-1)(m-1)+(n-1)\deg(g)+(i-1)(m-1)}f\circ_{i}g=\sum_{i=1}^{n}f\tilde{\circ}_{i}g=f\tilde{\circ}g.\]
It is known that \(A_{\infty}\)-structures on \(A\) are determined by elements \(\varphi\in\operatorname{Hom}(\mathcal{A}\mathfrak{s}^{\mathrm{i}},\operatorname{End}_{A})\) of degree \(1\) such that \(\varphi\star\varphi=0\)[12, Proposition 10.1.3]. Since the convolution product coincides with the operation \(\tilde{\circ}\), such an element \(\varphi\) is sent via the above isomorphism to an element \(m\in\mathfrak{s}\operatorname{End}_{A}(\geq 1)\) of degree \(1\) satisfying \(m\tilde{\circ}m=0\). Therefore, we see that this classical interpretation of \(A_{\infty}\)-algebras is equivalent to the one that Lemma 3.6 provides in the case of the operad \(\operatorname{End}_{A}\). See [12, Proposition 10.1.11] for more details about convolution operads and the more classical operadic interpretation of \(A_{\infty}\)-algebras, taking into account that in the dg-setting the definition has to be modified slightly (also, the difference in sign conventions arises from the choice of the isomorphism \(\operatorname{End}_{SA}\cong\mathfrak{s}^{-1}\operatorname{End}_{A}\), see Theorem 3.9).
What is more, replacing \(\operatorname{End}_{A}\) by any operad \(\mathcal{O}\) and doing similar calculations to [12, Proposition 10.1.11], we retrieve the notion of \(A_{\infty}\)-multiplication on \(\mathcal{O}\) given by Definition 3.5.
_Remark 3.8_.: Above we needed to specify that only positive arity was considered. This is the case in many situations in literature, but for our purposes, we cannot assume that operads have trivial component in arity \(0\) in general, and this is what forces us to specify that \(A_{\infty}\)-multiplications are concentrated in positive arity.
Let us now describe the relation between operadic suspension and the usual suspension or shift of graded modules.
**Theorem 3.9**.: _([13, Chapter 3, Lemma 3.16]) Given a graded \(R\)-module \(A\), there is an isomorphism of operads \(\sigma^{-1}:\operatorname{End}_{SA}\cong\mathfrak{s}^{-1}\operatorname{End}_{A}\)._
The original statement is about vector spaces, but it is still true when \(R\) is not a field. In the case of the operadic suspension defined above, the isomorphism is given by \(\sigma^{-1}(F)=(-1)^{\binom{n}{2}}S^{-1}\circ F\circ S^{\otimes n}\) for \(F\in\operatorname{End}_{SA}(n)\). The symbol \(\circ\) here is just composition of maps. Note that we are using the identification of elements of \(\operatorname{End}_{A}\) with those in \(\mathfrak{s}^{-1}\operatorname{End}_{A}\). The notation \(\sigma^{-1}\) comes from [11], where a twisted version of this map is the inverse of a map \(\sigma\). Here, we define \(\sigma:\operatorname{End}_{A}(n)\to\operatorname{End}_{SA}(n)\) as the map of graded modules given by \(\sigma(f)=S\circ f\circ(S^{-1})^{\otimes n}\).
In [11] the sign for the insertion maps was obtained by computing \(\sigma^{-1}(\sigma(x)\circ_{i}\sigma(y))\). This can be interpreted as sending \(x\) and \(y\) from \(\operatorname{End}_{A}\) to \(\operatorname{End}_{SA}\) via \(\sigma\) (which is a map of graded modules, not of operads), and then applying the isomorphism induced by \(\sigma^{-1}\). In the end this is the same as simply sending \(x\) and \(y\) to their images in \(\mathfrak{s}^{-1}\operatorname{End}_{A}\).
Even though \(\sigma\) is only a map of graded modules, it can be shown in a completely analogous way to Theorem 3.9 that \(\overline{\sigma}=(-1)^{\binom{n}{2}}\sigma\) induces an isomorphism of operads
\[\overline{\sigma}:\operatorname{End}_{A}\cong\mathfrak{s}\operatorname{End}_{ SA}. \tag{9}\]
This isomorphism can also be proved using the isomorphism \(\mathfrak{s}\mathfrak{s}^{-1}\mathcal{O}\cong\mathcal{O}\) from Lemma 3.10, namely, since \(\operatorname{End}_{SA}\cong\mathfrak{s}^{-1}\operatorname{End}_{A}\), we have \(\mathfrak{s}\operatorname{End}_{SA}\cong\mathfrak{s}\mathfrak{s}^{-1} \operatorname{End}_{A}\cong\operatorname{End}_{A}.\) In this case, the isomorphism map that we obtain goes in the opposite direction to \(\overline{\sigma}\), and it is precisely its inverse.
**Lemma 3.10**.: _There are isomorphisms of operads \(\mathfrak{s}^{-1}\mathfrak{s}\mathcal{O}\cong\mathcal{O}\cong\mathfrak{s} \mathfrak{s}^{-1}\mathcal{O}\)._
Proof.: We are only showing the first isomorphism since the other one is analogous. Note that as graded \(R\)-modules,
\[\mathfrak{s}^{-1}\mathfrak{s}\mathcal{O}(n)=\mathcal{O}(n)\otimes S^{1-n}R \otimes S^{n-1}R\cong\mathcal{O}(n),\]
and any automorphism of \(\mathcal{O}(n)\) determines such an isomorphism. Therefore, we are going to find an automorphism \(f\) of \(\mathcal{O}(n)\) such that the above isomorphism induces a map of operads. Observe that the insertion in \(\mathfrak{s}^{-1}\mathfrak{s}\mathcal{O}\) differs from that of \(\mathcal{O}\) in just a sign. The insertion on \(\mathfrak{s}^{-1}\mathfrak{s}\mathcal{O}\) is defined as the composition of the isomorphism
\[(\mathcal{O}(n)\otimes\Lambda(n)\otimes\Lambda^{-}(n))\otimes( \mathcal{O}(m)\otimes\Lambda(m)\otimes\Lambda^{-}(m))\cong\] \[(\mathcal{O}(n)\otimes\mathcal{O}(m))\otimes(\Lambda(n)\otimes \Lambda(m))\otimes(\Lambda^{-}(n)\otimes\Lambda^{-}(m))\]
with the tensor product of the insertions corresponding to each operad. After cancellations, the only sign left is \((-1)^{(n-1)(m-1)}\). So we need to find an automorphism \(f\) of \(\mathcal{O}\) such that, for \(x\in\mathcal{O}(n)\) and \(y\in\mathcal{O}(m)\),
\[f(x\circ_{i}y)=(-1)^{(n-1)(m-1)}f(x)\circ_{i}f(y).\]
By Lemma A.1, \(f(x)=(-1)^{\binom{n}{2}}x\) is such an automorphism.
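Explicitly, the required relation for \(f(x)=(-1)^{\binom{n}{2}}x\) amounts to the congruence \(\binom{n+m-1}{2}\equiv\binom{n}{2}+\binom{m}{2}+(n-1)(m-1)\pmod{2}\), which in fact holds as an equality of integers.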
### Functorial properties of operadic suspension
Here we study operadic suspension at the level of the underlying collections as an endofunctor. Recall that a collection is a family \(\mathcal{O}=\{\mathcal{O}(n)\}_{n\geq 0}\) of graded \(R\)-modules.
We define the suspension of a collection \(\mathcal{O}\) as \(\mathfrak{s}\mathcal{O}(n)=\mathcal{O}(n)\otimes S^{n-1}R\), where \(S^{n-1}R\) is the ground ring concentrated in degree \(n-1\). We first show that \(\mathfrak{s}\) is a functor both on collections and on operads. Given a morphism of collections \(f:\mathcal{O}\to\mathcal{P}\), there is an obvious induced morphism
\[\mathfrak{s}f:\mathfrak{s}\mathcal{O}\to\mathfrak{s}\mathcal{P},\ \mathfrak{s}f(x\otimes e^{n})=f(x)\otimes e^{n}. \tag{10}\]
Since morphisms of collections preserve arity, this map is well defined because \(e^{n}\) is the same for \(x\) and \(f(x)\). Note that if \(f\) is homogeneous, \(\deg(\mathfrak{s}f)=\deg(f)\).
**Lemma 3.11**.: _The assignment \(\mathcal{O}\mapsto\mathfrak{s}\mathcal{O}\) and \(f\mapsto\mathfrak{s}f\) is a functor on both the category \(\operatorname{Col}\) of collections and the category \(\operatorname{Op}\) of operads._
Proof.: The assignment preserves composition of maps. Indeed, given \(g:\mathcal{P}\to\mathcal{C}\), by definition \(\mathfrak{s}(g\circ f)(x\otimes e^{n})=g(f(x))\otimes e^{n}\), and also
\[(\mathfrak{s}g\circ\mathfrak{s}f)(x\otimes e^{n})=\mathfrak{s}g(f(x)\otimes e^{ n})=g(f(x))\otimes e^{n}.\]
This means that \(\mathfrak{s}\) defines an endofunctor on the category \(\operatorname{Col}\) of collections.
We know that when \(\mathcal{O}\) is an operad, \(\mathfrak{s}\mathcal{O}\) is again an operad. What is more, if \(f\) is a map of operads, then the map \(\mathfrak{s}f\) is again a map of operads, since for \(x\in\mathcal{O}(n)\) and \(y\in\mathcal{O}(m)\) we have
\[\mathfrak{s}f(x\tilde{\circ}_{i}y) =\mathfrak{s}f((x\otimes e^{n})\tilde{\circ}_{i}(y\otimes e^{m}))\] \[=(-1)^{(n-1)\deg(y)+(n-i)(m-1)}\mathfrak{s}f((x\circ_{i}y) \otimes e^{n+m-1})\] \[=(-1)^{(n-1)\deg(y)+(n-i)(m-1)}f(x\circ_{i}y)\otimes e^{n+m-1}\] \[=(-1)^{(n-1)\deg(y)+(n-i)(m-1)+\deg(f)\deg(x)}(f(x)\circ_{i}f(y) )\otimes e^{n+m-1}\] \[=(-1)^{(n-1)\deg(y)+(n-1)(\deg(y)+\deg(f))+\deg(f)\deg(x)}(f(x) \otimes e^{n})\tilde{\circ}_{i}(f(y)\otimes e^{m})\] \[=(-1)^{\deg(f)(\deg(x)+n-1)}\mathfrak{s}f(x)\tilde{\circ}_{i} \mathfrak{s}f(y).\]
Note that \(\deg(x)+n-1\) is the degree of \(x\otimes e^{n}\) and as we said \(\deg(\mathfrak{s}f)=\deg(f)\), so the above relation is consistent with the Koszul sign rule. In any case, recall that a morphism of operads is necessarily of degree \(0\), but the above calculation hints at some monoidality properties of \(\mathfrak{s}\) that we will study afterwards. Clearly \(\mathfrak{s}f\) preserves the unit, so \(\mathfrak{s}f\) is a morphism of operads.
The fact that \(\mathfrak{s}\) is a functor allows to describe algebras over operads using operadic suspension. For instance, an \(A_{\infty}\)-algebra is a map of operads \(\mathcal{O}\to\mathcal{P}\) where \(\mathcal{O}\) is an operad with \(A_{\infty}\)-multiplication. Since \(\mathfrak{s}\) is a functor, this map corresponds to a map \(\mathfrak{s}\mathcal{O}\to\mathfrak{s}\mathcal{P}\). Since in addition the map \(\mathfrak{s}\mathcal{O}\to\mathfrak{s}\mathcal{P}\) is fully determined by the original map \(\mathcal{O}\to\mathcal{P}\), this correspondence is bijective, and algebras over \(\mathcal{O}\) are equivalent to algebras over \(\mathfrak{s}\mathcal{O}\). In fact, using Lemma 3.10, it is not hard to show the following.
**Proposition 3.12**.: _The functor \(\mathfrak{s}\) is an equivalence of categories both at the level of collections and at the level of operads. _
In particular, for \(A_{\infty}\)-algebras it is more convenient to work with \(\mathfrak{s}\mathcal{O}\) since the formulation of an \(A_{\infty}\)-multiplication on this operad is much simpler but we do not lose any information.
#### 3.1.1. Monoidal properties of operadic suspension
Now we are going to explore the monoidal properties of operadic suspension. Since operads are precisely monoids on the category \(\operatorname{Col}\) of collections, we have the following.
**Proposition 3.13**.: _The endofunctor \(\mathfrak{s}:\operatorname{Col}\to\operatorname{Col}\) induces a well-defined endofunctor on the category of monoids of collections \(\operatorname{Mon}(\operatorname{Col})\). _
In fact, we can show a stronger result.
**Proposition 3.14**.: _The functor \(\mathfrak{s}:\operatorname{Col}\to\operatorname{Col}\) defines a lax monoidal functor. When restricted to the subcategory of reduced operads, it is strong monoidal._
Proof.: First, we need to define the structure maps of a lax monoidal functor. Namely, we define the unit morphism \(\varepsilon:I\to\mathfrak{s}I\) to be the map \(\varepsilon(n):I(n)\to I(n)\otimes S^{n-1}R\) as the identity for \(n\neq 1\) and the isomorphism \(R\cong R\otimes R\) for \(n=1\). We also need to define a natural transformation \(\mu:\mathfrak{s}\mathcal{O}\circ\mathfrak{s}\mathcal{P}\to\mathfrak{s}( \mathcal{O}\circ\mathcal{P})\). To define it, observe that for \(\mathcal{P}=\mathcal{O}\) we would want the map
\[\mathfrak{s}\mathcal{O}\circ\mathfrak{s}\mathcal{O}\xrightarrow{\mu} \mathfrak{s}(\mathcal{O}\circ\mathcal{O})\xrightarrow{\mathfrak{s}\gamma} \mathfrak{s}\mathcal{O}\]
to coincide with the operadic composition \(\tilde{\gamma}\) on \(\mathfrak{s}\mathcal{O}\), where \(\gamma\) is the composition on \(\mathcal{O}\).
We know that \(\mathfrak{s}\gamma\) does not add any signs. Therefore, if \(\tilde{\gamma}=(-1)^{\eta}\gamma\), with \(\eta\) explicitly computed in Proposition 4.3, the sign must come entirely from the map \(\mathfrak{s}\mathcal{O}\circ\mathfrak{s}\mathcal{O}\to\mathfrak{s}( \mathcal{O}\circ\mathcal{O})\). Thus, we define the map
\[\mu:\mathfrak{s}\mathcal{O}\circ\mathfrak{s}\mathcal{P}\to\mathfrak{s}( \mathcal{O}\circ\mathcal{P})\]
as the map given by
\[x\otimes e^{N}\otimes x_{1}\otimes e^{a_{1}}\otimes\cdots\otimes x_{N} \otimes e^{a_{N}}\mapsto(-1)^{\eta}x\otimes x_{1}\otimes\cdots\otimes x_{N} \otimes e^{n},\]
where \(a_{1}+\cdots+a_{N}=n\) and
\[\eta=\sum_{j<l}a_{j}\deg(x_{l})+\sum_{j=1}^{N}(a_{j}+\deg(x_{j})-1)(N-j),\]
which is the case \(k_{0}=\cdots=k_{n}=0\) in Proposition 4.3. Note that \((-1)^{\eta}\) only depends on degrees and arities, so the map is well defined. Another way to obtain this map is using the associativity isomorphisms and operadic composition on \(\Lambda\) to obtain a map \(\mathfrak{s}\mathcal{O}\circ\mathfrak{s}\mathcal{P}\to\mathfrak{s}( \mathcal{O}\circ\mathcal{P})\).
We now show that \(\mu\) is natural, or in other words, for \(f:\mathcal{O}\to\mathcal{O}^{\prime}\) and \(g:\mathcal{P}\to\mathcal{P}^{\prime}\), we show that the following diagram commutes.
Let \(c=x\otimes e^{N}\otimes x_{1}\otimes e^{a_{1}}\otimes\cdots\otimes x_{N} \otimes e^{a_{N}}\in\mathfrak{s}\mathcal{O}\circ\mathfrak{s}\mathcal{P}\) and let us compute \(\mathfrak{s}(f\circ g)(\mu(c))\). One has
\[\mathfrak{s}(f\circ g)(\mu(c)) =\mathfrak{s}(f\circ g)((-1)^{\sum_{j<l}a_{j}\deg(x_{l})+\sum_{j=1} ^{N}(a_{j}+\deg(x_{j})-1)(N-j)}x\otimes x_{1}\otimes\cdots\otimes x_{N}\otimes e ^{n})\] \[=(-1)^{\nu}f(x)\otimes g(x_{1})\otimes\cdots\otimes g(x_{N}) \otimes e^{n}\]
where
\[\nu=\sum_{j<l}a_{j}\deg(x_{l})+\sum_{j=1}^{N}(\deg(x_{j})+a_{j}-1)(N-j)+N\deg( g)\deg(x)+\deg(g)\sum_{j=1}^{N}\deg(x_{j})(N-j).\]
Now let us compute \(\mu((\mathfrak{s}f\circ\mathfrak{s}g)(c))\). We have
\[\mu((\mathfrak{s}f\circ\mathfrak{s}g)(c))=(-1)^{\sigma}f(x)\otimes g(x_{1}) \otimes\cdots\otimes g(x_{N})\otimes e^{n},\]
where
\[\sigma=N\deg(g)(\deg(x)+N-1)+\deg(g)\sum_{j=1}^{N}(\deg(x_{j})+a_{j}-1)(N-j)+\]
\[\sum_{j<l}a_{j}(\deg(x_{l})+\deg(g))+\sum_{j=1}^{N}(a_{j}+\deg(x_{j})+\deg(g) -1)(N-j).\]
Now we compare the two signs by computing \(\nu+\sigma\mod 2\). After some cancellations of common terms and using that \(N(N-1)=0\mod 2\) we get
\[\deg(g)\sum_{j=1}^{N}(a_{j}-1)(N-j)+\sum_{j<l}a_{j}\deg(g)+\sum_{j=1}^{N}\deg( g)(N-j)=\]
\[\deg(g)\sum_{j=1}^{N}a_{j}(N-j)+\deg(g)\sum_{j<l}a_{j}=\]
\[\deg(g)\left(\sum_{j=1}^{N}a_{j}(N-j)+\sum_{j=1}^{N}a_{j}(N-j)\right)=0\mod 2.\]
This shows naturality of \(\mu\). Unitality follows directly from the definitions by direct computation. In the case of associativity, observe that by the definition of \(\mu\), the associativity axiom for \(\mu\) is equivalent to the associativity of the operadic composition \(\tilde{\gamma}\), which we know to be true. This shows that \(\mathfrak{s}\) is a lax monoidal functor.
In the case where the operads have trivial arity \(0\) component, we may define an inverse to the operadic composition on \(\Lambda\) from Section 3. Namely, for \(n>0\), we may define
\[\Lambda(n)\to\bigoplus_{N\geq 0}\Lambda(N)\otimes\left(\bigoplus_{a_{1}+ \cdots+a_{N}=n}\Lambda(a_{1})\otimes\cdots\otimes\Lambda(a_{N})\right)\]
as the map
\[e^{n}\mapsto\sum_{a_{1}+\cdots+a_{N}=n}(-1)^{\delta}e^{N}\otimes e^{a_{1}} \otimes\cdots\otimes e^{a_{N}},\]
where \(\delta\) is just the same sign that shows up in the operadic composition on \(\Lambda\) (see Proposition 4.3) and \(a_{1},\ldots,a_{N}>0\). Since there are only finitely many ways of decomposing
\(n\) into \(N\) positive integers, the sum is finite and thus the map is well defined. In fact, this map defines a cooperad structure on the reduced sub-operad of \(\Lambda\) with trivial arity \(0\) component. This map induces the morphism \(\mu^{-1}:\mathfrak{s}(\mathcal{O}\circ\mathcal{P})\to\mathfrak{s}\mathcal{O} \circ\mathfrak{s}\mathcal{P}\) that we are looking for.
The unit morphism \(\varepsilon\) is always an isomorphism, so this shows \(\mathfrak{s}\) is strong monoidal in the reduced case.
_Remark 3.15_.: If we decide to work with symmetric operads, we just need to introduce the sign action of the symmetric group on \(\Lambda(n)\), turning it into the sign representation of the symmetric group. The action on tensor products is diagonal, and the results we have obtained follow similarly replacing \(\operatorname{Col}\) by the category of \(\mathbb{S}\)-modules.
## 4. Brace algebras
Brace algebras appear naturally in the context of operads when we fix the first argument of operadic composition [10]. This simple idea gives rise to a very rich structure that is the building block of the derived \(A_{\infty}\)-structures that we are going to construct.
In this section we define a brace algebra structure for an arbitrary operad using operadic suspension. The use of operadic suspension will have as a result a generalization of the Lie bracket defined in [14]. First recall the definition of a brace algebra.
**Definition 4.1**.: _A brace algebra on a graded module \(A\) consists of a family of maps_
\[b_{n}:A^{\otimes 1+n}\to A\]
_called braces, that we evaluate on \((x,x_{1},\ldots,x_{n})\) as \(b_{n}(x;x_{1},\ldots,x_{n})\). They must satisfy the brace relation_
\[b_{m}(b_{n}(x;x_{1},\ldots,x_{n});y_{1},\ldots,y_{m})=\] \[\sum_{\begin{subarray}{c}i_{1},\ldots,i_{n}\\ j_{1}\ldots,j_{n}\end{subarray}}(-1)^{\varepsilon}b_{l}(x;y_{1},\ldots,y_{i_{ 1}},b_{j_{1}}(x_{1};y_{i_{1}+1},\ldots,y_{i_{1}+j_{1}}),\ldots,b_{j_{n}}(x_{n} ;y_{i_{n}+1},\ldots,y_{i_{n}+j_{n}}),\ldots,y_{m})\]
_where \(l=n+\sum_{p=1}^{n}i_{p}\) and \(\varepsilon=\sum_{p=1}^{n}\deg(x_{p})\sum_{q=1}^{i_{p}}\deg(y_{q})\), i.e. the sign is picked up by the \(x_{i}\)'s passing by the \(y_{i}\)'s in the shuffle._
_Remark 4.2_.: Some authors might use the notation \(b_{1+n}\) instead of \(b_{n}\), but the first element is usually going to have a different role from the others, so we found \(b_{n}\) more intuitive. A shorter notation for \(b_{n}(x;x_{1},\ldots,x_{n})\) found in the literature ([10], [11]) is \(x\{x_{1},\ldots,x_{n}\}\).
We will also see a bigraded version of this kind of map in a later section.
### Brace algebra structure on an operad
Given an operad \(\mathcal{O}\) with composition map \(\gamma:\mathcal{O}\circ\mathcal{O}\to\mathcal{O}\) we can define a brace algebra on the underlying module of \(\mathcal{O}\) by setting
\[b_{n}:\mathcal{O}(N)\otimes\mathcal{O}(a_{1})\otimes\cdots\otimes\mathcal{O}(a _{n})\to\mathcal{O}(N-n+\sum a_{i})\]
\[b_{n}(x;x_{1},\ldots,x_{n})=\sum\gamma(x;1,\ldots,1,x_{1},1,\ldots,1,x_{n},1, \ldots,1),\]
where the sum runs over all possible order-preserving insertions. The brace \(b_{n}(x;x_{1},\ldots,x_{n})\) vanishes whenever \(n>N\) and \(b_{0}(x)=x\). The brace relation follows from the associativity axiom of operads.
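For instance, if \(x\in\mathcal{O}(2)\) and \(y\in\mathcal{O}\), the definition gives \(b_{1}(x;y)=x\circ_{1}y+x\circ_{2}y\), while \(b_{2}(x;y_{1},y_{2})=\gamma(x;y_{1},y_{2})\) is the only order-preserving way of filling both inputs.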
This construction can be used to define braces on \(\mathfrak{s}\mathcal{O}\). More precisely, we define maps
\[b_{n}:\mathfrak{s}\mathcal{O}(N)\otimes\mathfrak{s}\mathcal{O}(a_{1})\otimes \cdots\otimes\mathfrak{s}\mathcal{O}(a_{n})\to\mathfrak{s}\mathcal{O}(N-n+ \sum a_{i})\]
using the operadic composition \(\tilde{\gamma}\) on \(\mathfrak{s}\mathcal{O}\) as
\[b_{n}(x;x_{1},\ldots,x_{n})=\sum\tilde{\gamma}(x;1,\ldots,1,x_{1},1,\ldots,1,x _{n},1,\ldots,1).\]
We have the following relation between the brace maps \(b_{n}\) defined on \(\mathfrak{s}\mathcal{O}\) and the operadic composition \(\gamma\) on \(\mathcal{O}\).
**Proposition 4.3**.: _For \(x\in\mathfrak{s}\mathcal{O}(N)\) and \(x_{i}\in\mathfrak{s}\mathcal{O}(a_{i})\) of internal degree \(q_{i}\) (\(1\leq i\leq n\)), we have_
\[b_{n}(x;x_{1},\ldots,x_{n})=\sum_{N-n=k_{0}+\cdots+k_{n}}(-1)^{\eta}\gamma(x \otimes 1^{\otimes k_{0}}\otimes x_{1}\otimes\cdots\otimes x_{n}\otimes 1^{ \otimes k_{n}}),\]
_where_
\[\eta=\sum_{0\leq j<l\leq n}k_{j}q_{l}+\sum_{1\leq j<l\leq n}a_{j}q_{l}+\sum_{j =1}^{n}(a_{j}+q_{j}-1)(n-j)+\sum_{1\leq j\leq l\leq n}(a_{j}+q_{j}-1)k_{l}.\]
Proof.: To obtain the signs that make \(\tilde{\gamma}\) differ from \(\gamma\), we must first look at the operadic composition on \(\Lambda\). We are interested in compositions of the form
\[\tilde{\gamma}(x\otimes 1^{\otimes k_{0}}\otimes x_{1}\otimes 1^{\otimes k_{1}} \otimes\cdots\otimes x_{n}\otimes 1^{\otimes k_{n}})\]
where \(N-n=k_{0}+\cdots+k_{n}\), \(x\) has arity \(N\) and each \(x_{i}\) has arity \(a_{i}\) and internal degree \(q_{i}\). Therefore, let us consider the corresponding operadic composition
\[\Lambda(N)\otimes\Lambda(1)^{\otimes k_{0}}\otimes\Lambda(a_{1})\otimes\Lambda(1)^{\otimes k_{1}}\otimes\cdots\otimes\Lambda(a_{n})\otimes\Lambda(1)^{\otimes k_{n}}\longrightarrow\Lambda(N-n+\sum_{i=1}^{n}a_{i}).\]
The operadic composition can be described in terms of insertions in the obvious way, namely, if \(x\in\mathfrak{s}\mathcal{O}(N)\) and \(y_{1},\ldots,y_{N}\in\mathfrak{s}\mathcal{O}\), then we have
\[\tilde{\gamma}(x;y_{1},\ldots,y_{N})=(\cdots(x\tilde{\circ}_{1}y_{1})\tilde{ \circ}_{1+a(y_{1})}y_{2}\cdots)\tilde{\circ}_{1+\sum a(y_{p})}y_{N},\]
where \(a(y_{p})\) is the arity of \(y_{p}\) (in this case \(y_{p}\) is either \(1\) or some \(x_{i}\)). So we just have to find out the sign iterating the same argument as in the \(i\)-th insertion. In this case, each \(\Lambda(a_{i})\) produces a sign given by the exponent
\[(a_{i}-1)(N-k_{0}-\cdots-k_{i-1}-i).\]
For this, recall that the degree of \(\Lambda(a_{i})\) is \(a_{i}-1\) and that the generator of this space is inserted in the position \(1+\sum_{j=0}^{i-1}k_{j}+\sum_{j=1}^{i-1}a_{j}\) of a wedge of \(N+\sum_{j=1}^{i-1}a_{j}-i+1\) generators. Therefore, performing this insertion as described in the previous section yields the aforementioned sign. Now, since \(N-n=k_{0}+\cdots+k_{n}\), we have that
\[(a_{i}-1)(N-k_{0}-\cdots-k_{i-1}-i)=(a_{i}-1)(n-i+\sum_{l=i}^{n}k_{l}).\]
Now we can compute the sign factor of a brace. For this, notice that the isomorphism \((\mathcal{O}(1)\otimes\Lambda(1))^{\otimes k}\cong\mathcal{O}(1)^{\otimes k} \otimes\Lambda(1)^{\otimes k}\) does not produce any signs because of degree reasons. Therefore, the sign coming from the isomorphism
\[\mathcal{O}(N)\otimes\Lambda(N)\otimes(\mathcal{O}(1)\otimes\Lambda(1))^{ \otimes k_{0}}\otimes\bigotimes_{i=1}^{n}(\mathcal{O}(a_{i})\otimes\Lambda(a_ {i})\otimes(\mathcal{O}(1)\otimes\Lambda(1))^{\otimes k_{i}})\cong\]
\[\mathcal{O}(N)\otimes\mathcal{O}(1)^{\otimes k_{0}}\otimes(\bigotimes_{i=1}^ {n}\mathcal{O}(a_{i})\otimes\mathcal{O}(1)^{\otimes k_{i}})\otimes\Lambda(N) \otimes\Lambda(1)^{\otimes k_{0}}\otimes(\bigotimes_{i=1}^{n}\Lambda(a_{i}) \otimes\Lambda(1)^{\otimes k_{i}})\]
is determined by the exponent
\[(N-1)\sum_{i=1}^{n}q_{i}+\sum_{i=1}^{n}(a_{i}-1)\sum_{l>i}q_{l}.\]
This equals
\[\left(\sum_{j=0}^{n}k_{j}+n-1\right)\sum_{i=1}^{n}q_{i}+\sum_{i=1}^{n}(a_{i}- 1)\sum_{l>i}q_{l}.\]
After doing the operadic composition
\[\mathcal{O}(N)\otimes(\bigotimes_{i=1}^{n}\mathcal{O}(a_{i}))\otimes\Lambda (N)\otimes(\bigotimes_{i=1}^{n}\Lambda(a_{i}))\longrightarrow\mathcal{O}(N-n +\sum_{i=1}^{n}a_{i})\otimes\Lambda(N-n+\sum_{i=1}^{n}a_{i})\]
we can add the sign coming from the suspension, so all in all the sign \((-1)^{\eta}\) we were looking for is given by
\[\eta=\sum_{i=1}^{n}(a_{i}-1)(n-i+\sum_{l=i}^{n}k_{l})+(\sum_{j=0}^{n}k_{j}+n-1 )\sum_{i=1}^{n}q_{i}+\sum_{i=1}^{n}(a_{i}-1)\sum_{l>i}q_{l}.\]
It can be checked that this can be rewritten modulo \(2\) as
\[\eta=\sum_{0\leq j<l\leq n}k_{j}q_{l}+\sum_{1\leq j<l\leq n}a_{j}q_{l}+\sum_{ j=1}^{n}(a_{j}+q_{j}-1)(n-j)+\sum_{1\leq j\leq l\leq n}(a_{j}+q_{j}-1)k_{l}\]
as we stated.
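As a consistency check, for a single insertion (\(n=1\), with \(k_{0}=i-1\) and \(k_{1}=N-i\)) the exponent reduces to \(\eta=(i-1)q_{1}+(a_{1}+q_{1}-1)(N-i)\), which agrees modulo \(2\) with the sign of Lemma 3.3.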
Notice that for \(\mathcal{O}=\operatorname{End}_{A}\), the brace on operadic suspension is precisely
\[b_{n}(f;g_{1},\dots,g_{n})=\sum(-1)^{\eta}f(1,\dots,1,g_{1},1,\dots,1,g_{n},1, \dots,1).\]
Using the brace structure on \(\mathfrak{s}\operatorname{End}_{A}\), the sign \(\eta\) gives us in particular the same sign as the Lie bracket defined in [10]. More precisely, we have the following.
**Corollary 4.4**.: _The brace \(b_{1}(f;g)\) is the operation \(f\circ g\) defined in [10] that induces a Lie bracket on the Hochschild complex of an \(A_{\infty}\)-algebra via_
\[[f,g]=b_{1}(f;g)-(-1)^{|f||g|}b_{1}(g;f).\]
For this reason we may use the notation \(f\bar{\circ}g\) instead of \(b_{1}(f;g)\), keeping the notation \(f\circ g\) whenever the insertion maps are denoted by \(\circ_{i}\).
In [10], the sign is computed using a strategy that we generalize in Appendix C. The approach we have followed here has the advantage that the brace relation follows immediately from the associativity axiom of operadic composition. This approach also works for any operad since the difference between \(\gamma\) and \(\tilde{\gamma}\) is going to be the same sign.
### Reinterpretation of \(\infty\)-morphisms
As we mentioned before, we can give an alternative description of \(\infty\)-morphisms of \(A_{\infty}\)-algebras and their composition in terms of suspension of collections; recall Definition 2.3 for the definition of these morphisms.
Defining the suspension \(\mathfrak{s}\) at the level of collections as we did in Section 3.1 allows us to talk about \(\infty\)-morphisms of \(A_{\infty}\)-algebras in this setting, since they live in collections of the form
\[\operatorname{End}_{B}^{A}=\{\operatorname{Hom}(A^{\otimes n},B)\}_{n\geq 1}.\]
More precisely, there is a left module structure on \(\operatorname{End}_{B}^{A}\) over the operad \(\operatorname{End}_{B}\)
\[\operatorname{End}_{B}\circ\operatorname{End}_{B}^{A}\to\operatorname{End}_{B} ^{A}\]
given by composition of maps
\[f\otimes g_{1}\otimes\dots\otimes g_{n}\mapsto f(g_{1}\otimes\dots\otimes g_{ n})\]
for \(f\in\operatorname{End}_{B}(n)\) and \(g_{i}\in\operatorname{End}_{B}^{A}\), and also an infinitesimal right module structure over the operad \(\operatorname{End}_{A}\)
\[\operatorname{End}_{B}^{A}\circ_{(1)}\operatorname{End}_{A}\to\operatorname{ End}_{B}^{A}\]
given by insertion of maps
\[f\otimes 1^{\otimes r}\otimes g\otimes 1^{\otimes n-r-1}\mapsto f(1^{\otimes r }\otimes g\otimes 1^{\otimes n-r-1})\]
for \(f\in\operatorname{End}_{B}^{A}(n)\) and \(g\in\operatorname{End}_{A}\). In addition, we have a composition \(\operatorname{End}_{C}^{B}\circ\operatorname{End}_{B}^{A}\to\operatorname{ End}_{C}^{A}\) analogous to the left module described above. They induce maps on the respective operadic
suspensions which differ from the original ones by some signs that can be calculated in an analogous way to what we do on Proposition 4.3. These induced maps will give us the characterization of \(\infty\)-morphisms in Lemma 4.5.
For these collections we also have \(\mathfrak{s}^{-1}\operatorname{End}_{B}^{A}\cong\operatorname{End}_{SB}^{SA}\) in analogy with Theorem 3.9, and the proof is similar but shorter since we do not need to worry about insertions.
**Lemma 4.5**.: _An \(\infty\)-morphism of \(A_{\infty}\)-algebras \(A\to B\) with respective structure maps \(m^{A}\) and \(m^{B}\) is equivalent to an element \(f\in\mathfrak{s}\operatorname{End}_{B}^{A}\) of degree 0 concentrated in positive arity such that_
\[\rho(f\circ_{(1)}m^{A})=\lambda(m^{B}\circ f), \tag{11}\]
_where_
\[\lambda:\mathfrak{s}\operatorname{End}_{B}\circ\,\mathfrak{s}\operatorname{End}_{B}^{A}\to\mathfrak{s}\operatorname{End}_{B}^{A}\]
_is induced by the left module structure on \(\operatorname{End}_{B}^{A}\) and_
\[\rho:\mathfrak{s}\operatorname{End}_{B}\circ_{(1)}\mathfrak{s}\operatorname{ End}_{B}^{A}\to\mathfrak{s}\operatorname{End}_{B}^{A}\]
_is induced by the right infinitesimal module structure on \(\operatorname{End}_{B}^{A}\)._
_In addition, the composition of \(\infty\)-morphisms is given by the natural composition_
\[\mathfrak{s}\operatorname{End}_{C}^{B}\circ\,\mathfrak{s}\operatorname{End}_{B}^{A}\to\mathfrak{s}\operatorname{End}_{C}^{A}.\]
Proof.: From the definitions of the operations in Equation (11), we know that this equation coincides with the one defining \(\infty\)-morphisms of \(A_{\infty}\)-algebras (Definition 2.3) up to sign. The signs that appear in the above equation are obtained in a similar way to that on \(\tilde{\gamma}\), see the proof of Proposition 4.3. Thus, it is enough to plug into the sign \(\eta\) from Proposition 4.3 the corresponding degrees and arities to obtain the desired result. The composition of \(\infty\)-morphisms follows similarly.
Notice the similarity between this definition and the definitions given in [12, §10.2.4] taking into account the minor modifications to accommodate the dg-case.
In the case that \(f:A\to A\) is an \(\infty\)-endomorphism, Equation (11) can be written in terms of operadic composition as \(f\tilde{\gamma}m=\tilde{\gamma}(m\circ f)\).
## 5. \(A_{\infty}\)-algebra structures on operads
In this section we use the previously described brace structures to prove Theorem 5.7, which was originally claimed by Gerstenhaber and Voronov [10]. This leads us to our first new version of the Deligne conjecture, that we prove in Corollary 5.12.
Let \(\mathcal{O}\) be an operad of graded \(R\)-modules and \(\mathfrak{s}\mathcal{O}\) its operadic suspension. Let us consider the underlying graded module of the operad \(\mathfrak{s}\mathcal{O}\), which we call \(\mathfrak{s}\mathcal{O}\) again by
abuse of notation, i.e. \(\operatorname{\mathfrak{s}\mathcal{O}}=\prod_{n}\operatorname{\mathfrak{s} \mathcal{O}}(n)\) with grading given by its natural degree, i.e. \(|x|=n+\deg(x)-1\) for \(x\in\operatorname{\mathfrak{s}\mathcal{O}}(n)\), where \(\deg(x)\) is its internal degree.
Following [10] and [11], if we have an \(A_{\infty}\) multiplication \(m\in\mathcal{O}\), one would define an \(A_{\infty}\)-algebra structure on \(\operatorname{\mathfrak{s}\mathcal{O}}\) using the maps
\[M^{\prime}_{1}(x) \coloneqq[m,x]=m\tilde{\circ}x-(-1)^{|x|}x\tilde{\circ}m,\] \[M^{\prime}_{j}(x_{1},\dots,x_{j}) \coloneqq b_{j}(m;x_{1},\dots,x_{j}), j>1.\]
The prime notation here is used to indicate that these are not the definitive maps that we are going to take. Getzler shows in [11] that \(M^{\prime}=M^{\prime}_{1}+M^{\prime}_{2}+\cdots\) satisfies the relation \(M^{\prime}\circ M^{\prime}=0\) using that \(m\circ m=0\), and the proof is independent of the operad in which \(m\) is defined, so it is still valid if \(m\tilde{\circ}m=0\). But we have two problems here. The equation \(M^{\prime}\circ M^{\prime}=0\) does depend on how the circle operation is defined. More precisely, this circle operation in [11] is the natural circle operation on the endomorphism operad, which does not have any additional signs, so \(M^{\prime}\) is not an \(A_{\infty}\)-structure under our convention. The other problem has to do with the degrees. We need \(M^{\prime}_{j}\) to be homogeneous of degree \(2-j\) as a map \(\operatorname{\mathfrak{s}\mathcal{O}}^{\otimes j}\to\operatorname{\mathfrak{s }\mathcal{O}}\), but we find that \(M^{\prime}_{j}\) is homogeneous of degree \(1\) instead, as the following lemma shows.
**Lemma 5.1**.: _For \(x\in\operatorname{\mathfrak{s}\mathcal{O}}\) we have that the degree of the map \(b_{j}(x;-):\operatorname{\mathfrak{s}\mathcal{O}}^{\otimes j}\to\operatorname {\mathfrak{s}\mathcal{O}}\) of graded modules is precisely \(|x|\)._
Proof.: Let \(a(x)\) denote the arity of \(x\), i.e. \(a(x)=n\) whenever \(x\in\operatorname{\mathfrak{s}\mathcal{O}}(n)\). Also, let \(\deg(x)\) be its internal degree in \(\mathcal{O}\). The natural degree of \(b_{j}(x;x_{1},\dots,x_{j})\) for \(a(x)\geq j\) as an element of \(\operatorname{\mathfrak{s}\mathcal{O}}\) by definition is
\[|b_{j}(x;x_{1},\dots,x_{j})|=a(b_{j}(x;x_{1},\dots,x_{j}))+\deg(b_{j}(x;x_{1}, \dots,x_{j}))-1.\]
We have
\[a(b_{j}(x;x_{1},\dots,x_{j}))=a(x)-j+\sum_{i}a(x_{i})\]
and
\[\deg(b_{j}(x;x_{1},\dots,x_{j}))=\deg(x)+\sum_{i}\deg(x_{i}),\]
so
\[a(b_{j}(x;x_{1},\dots,x_{j}))+\deg(b_{j}(x;x_{1},\dots,x_{j}))-1 =\] \[a(x)-j+\sum_{i}a(x_{i})+\deg(x)+\sum_{i}\deg(x_{i})-1 =\] \[a(x)+\deg(x)-1+\sum_{i}a(x_{i})+\sum_{i}\deg(x_{i})-j =\] \[a(x)+\deg(x)-1+\sum_{i}(a(x_{i})+\deg(x_{i})-1) =\] \[|x|+\sum_{i}|x_{i}|.\]
This means that the degree of the map \(b_{j}(x;-):\mathfrak{s}\mathcal{O}^{\otimes j}\to\mathfrak{s}\mathcal{O}\) equals \(|x|\).
**Corollary 5.2**.: _The maps_
\[M^{\prime}_{j}:\mathfrak{s}\mathcal{O}^{\otimes j}\to\mathfrak{s}\mathcal{O}, \,(x_{1},\dots,x_{j})\mapsto b_{j}(m;x_{1},\dots,x_{j})\]
_for \(j>1\) and the map_
\[M^{\prime}_{1}:\mathfrak{s}\mathcal{O}\to\mathfrak{s}\mathcal{O},\,x\mapsto b_{1}(m;x)-(-1)^{|x|}b_{1}(x;m)\]
_are homogeneous of degree 1._
Proof.: For \(j>1\) it is a direct consequence of Lemma 5.1. For \(j=1\) we have the summand \(b_{1}(m;x)\) whose degree follows as well from Lemma 5.1. The degree of the other summand, \(b_{1}(x;m)\), can be computed in a similar way as in the proof of Lemma 5.1, giving that \(|b_{1}(x;m)|=1+|x|\). This concludes the proof.
The problem we have encountered with the degrees can be resolved using shift maps as the following proposition shows. Recall that we have shift maps \(A\to SA\) of degree 1 given by the identity.
**Proposition 5.3**.: _If \(\mathcal{O}\) is an operad with an \(A_{\infty}\)-multiplication \(m\in\mathcal{O}\), then there is an \(A_{\infty}\)-algebra structure on the shifted module \(S\mathfrak{s}\mathcal{O}\)._
Proof.: Note in the proof of Lemma 5.1 that a way to turn \(M^{\prime}_{j}\) into a map of degree \(2-j\) is introducing a grading on \(\mathfrak{s}\mathcal{O}\) given by arity plus internal degree (without subtracting 1). This is equivalent to defining an \(A_{\infty}\)-algebra structure \(M\) on \(S\mathfrak{s}\mathcal{O}\) shifting the map \(M^{\prime}=M^{\prime}_{1}+M^{\prime}_{2}+\cdots\), where \(S\) is the shift of graded modules. Therefore, we define \(M_{j}\) to be the map making the following diagram commute.
In other words, \(M_{j}=\overline{\sigma}(M_{j}^{\prime})\), where \(\overline{\sigma}(F)=S\circ F\circ(S^{\otimes n})^{-1}\) for \(F\in\operatorname{End}_{\mathfrak{s}\mathcal{O}}(n)\) is the map inducing an isomorphism \(\operatorname{End}_{\mathfrak{s}\mathcal{O}}\cong\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\), see Equation (9). Since \(\overline{\sigma}\) is an operad morphism, for \(M=M_{1}+M_{2}+\cdots\), we have
\[M\widehat{\circ}M=\overline{\sigma}(M^{\prime})\widehat{\circ}\overline{\sigma }(M^{\prime})=\overline{\sigma}(M^{\prime}\circ M^{\prime})=0.\]
So now we have that \(M\in\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\) is an element of natural degree \(1\) concentrated in positive arity such that \(M\widehat{\circ}M=0\). Therefore, in light of Remark 3.7, \(M\) is the desired \(A_{\infty}\)-algebra structure on \(S\mathfrak{s}\mathcal{O}\).
Notice that \(M\) is defined as a structure map on \(S\mathfrak{s}\mathcal{O}\). This kind of shifted operad is called _odd operad_ in [11]. This means that \(S\mathfrak{s}\mathcal{O}\) is not an operad anymore, since the associativity relation for graded operads involves signs that depend on the degrees, which are now shifted.
### Iterating the process
We have defined \(A_{\infty}\)-structure maps \(M_{j}\in\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\). Now we can use the brace structure of the operad \(\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\) to get an \(A_{\infty}\)-algebra structure given by maps
\[\overline{M}_{j}:(S\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}})^ {\otimes j}\to S\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}} \tag{12}\]
by applying \(\overline{\sigma}\) to maps
\[\overline{M}_{j}^{\prime}:(\mathfrak{s}\operatorname{End}_{S\mathfrak{s} \mathcal{O}})^{\otimes j}\to\mathfrak{s}\operatorname{End}_{S\mathfrak{s} \mathcal{O}}\]
defined as
\[\overline{M}_{j}^{\prime}(f_{1},\dots,f_{j})=\overline{B}_{j}(M;f_ {1},\dots,f_{j}) j>1,\] \[\overline{M}_{1}^{\prime}(f)=\overline{B}_{1}(M;f)-(-1)^{|f|} \overline{B}_{1}(f;M),\]
where \(\overline{B}_{j}\) denotes the brace map on \(\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\).
We define the Hochschild complex as done by Ward in [11].
**Definition 5.4**.: _The Hochschild cochains of a graded module \(A\) are defined to be the graded module \(S\mathfrak{s}\operatorname{End}_{A}\). If \((A,d)\) is a cochain complex, then \(S\mathfrak{s}\operatorname{End}_{A}\) is endowed with a differential_
\[\partial(f)=[d,f]=d\circ f-(-1)^{|f|}f\circ d\]
_where \(|f|\) is the natural degree of \(f\) and \(\circ\) is the plethysm operation given by insertions._
In particular, \(S\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\) is the module of Hochschild cochains of \(S\mathfrak{s}\mathcal{O}\). If \(\mathcal{O}\) has an \(A_{\infty}\)-multiplication, then the differential of the Hochschild complex is \(\overline{M}_{1}\) from Equation (12).
_Remark 5.5_.: The functor \(S\mathfrak{s}\) is called the "oddification" of an operad in the literature [12]. The reader might find it odd to define the Hochschild complex in this way instead of just \(\operatorname{End}_{A}\). The reason is that operadic suspension provides the necessary signs and the
extra shift gives us the appropriate degrees. In addition, this definition allows the extra structure to arise naturally instead of having to define the signs by hand. For instance, if we have an associative multiplication \(m_{2}\in\operatorname{End}_{A}(2)=\operatorname{Hom}(A^{\otimes 2},A)\), the element \(m_{2}\) would not satisfy the equation \(m_{2}\circ m_{2}=0\) and thus cannot be used to induce a multiplication on \(\operatorname{End}_{A}\) as we did above.
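Indeed, for an associative multiplication \(m_{2}\) of degree \(0\) one has \(m_{2}\circ m_{2}=m_{2}\circ_{1}m_{2}+m_{2}\circ_{2}m_{2}=m_{2}(m_{2}\otimes 1)+m_{2}(1\otimes m_{2})\), which need not vanish, whereas in \(\mathfrak{s}\operatorname{End}_{A}\) the suspension signs of Lemma 3.3 give \(m_{2}\tilde{\circ}m_{2}=m_{2}(1\otimes m_{2})-m_{2}(m_{2}\otimes 1)\), which vanishes precisely when \(m_{2}\) is associative.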
A natural question to ask is what relation there is between the \(A_{\infty}\)-algebra structure on \(S\mathfrak{s}\mathcal{O}\) and the one on \(S\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\). In [10] it is claimed that given an operad \(\mathcal{O}\) with an \(A_{\infty}\)-multiplication, the map
\[\mathcal{O}\to\operatorname{End}_{\mathcal{O}},\,x\mapsto\sum_{n\geq 0}b_{n}(x;-)\]
is a morphism of \(A_{\infty}\)-algebras. In the associative case, this result leads to the definition of homotopy \(G\)-algebras, which connects with the classical Deligne conjecture. We are going to adapt the statement of this claim to our context and prove it. This way we will obtain an \(A_{\infty}\)-version of homotopy \(G\)-algebras and consequently an \(A_{\infty}\)-version of the Deligne conjecture. Let \(\Phi^{\prime}\) be the map defined as above but on \(\mathfrak{s}\mathcal{O}\), i.e.
\[\Phi^{\prime}\colon\mathfrak{s}\mathcal{O}\to\operatorname{End}_{\mathfrak{s }\mathcal{O}},\,x\mapsto\sum_{n\geq 0}b_{n}(x;-).\]
Let \(\Phi:S\mathfrak{s}\mathcal{O}\to S\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\) be the map making the following diagram commute
[Diagram (13): the commutative square expressing that \(\Phi\) is induced by \(\Phi^{\prime}\) through the shift \(S\) and the isomorphism of Equation (9).]
where the isomorphism \(\operatorname{End}_{\mathfrak{s}\mathcal{O}}\cong\mathfrak{s}\operatorname{End }_{S\mathfrak{s}\mathcal{O}}\) is given in Equation (9). Note that the degree of the map \(\Phi\) is zero.
_Remark 5.6_.: Notice that we have only used the operadic structure on \(\mathfrak{s}\mathcal{O}\) to define an \(A_{\infty}\)-algebra structure on \(S\mathfrak{s}\mathcal{O}\), so the constructions and results in these sections are valid if we replace \(\mathfrak{s}\mathcal{O}\) by any graded module \(A\) such that \(SA\) is an \(A_{\infty}\)-algebra.
**Theorem 5.7**.: _The map \(\Phi\) defined in diagram (13) above is a morphism of \(A_{\infty}\)-algebras, i.e. for all \(j\geq 1\) the equation_
\[\Phi(M_{j})=\overline{M}_{j}(\Phi^{\otimes j})\]
_holds, where \(M_{j}\) is the \(j\)-th component of the \(A_{\infty}\)-algebra structure on \(S\mathfrak{s}\mathcal{O}\) and \(\overline{M}_{j}\) is the \(j\)-th component of the \(A_{\infty}\)-algebra structure on \(S\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\)._
Proof.: Let us have a look at the following diagram
[Diagram (14): the outer (black) square expresses the identity \(\Phi(M_{j})=\overline{M}_{j}(\Phi^{\otimes j})\); the diagonal (red) arrows are shifts of graded \(R\)-modules; the inner (blue) square involves \(\Phi^{\prime}\), \(M^{\prime}_{j}\), \(\overline{M}^{\prime}_{j}\) and the dashed arrow \(\mathcal{M}_{j}\) defined below.]
where the diagonal red arrows are shifts of graded \(R\)-modules. We need to show that the diagram defined by the external black arrows commutes. But these arrows are defined so that they commute with the red and blue arrows, so it is enough to show that the inner blue diagram commutes. The blue diagram can be split into two different squares using the dashed arrow \(\mathcal{M}_{j}\) that we are going to define next, so it will be enough to show that the two squares commute.
The map \(\mathcal{M}_{j}:(\operatorname{End}_{\mathfrak{s}\mathcal{O}})^{\otimes j} \rightarrow\operatorname{End}_{\mathfrak{s}\mathcal{O}}\) is defined by
\[\mathcal{M}_{j}(f_{1},\dots,f_{j}) =B_{j}(M^{\prime};f_{1},\dots,f_{j}) \text{for }j>1,\] \[\mathcal{M}_{1}(f) =B_{1}(M^{\prime};f)-(-1)^{|f|}B_{1}(f;M^{\prime}),\]
where \(B_{j}\) is the natural brace structure map on the operad \(\operatorname{End}_{\mathfrak{s}\mathcal{O}}\), i.e. for \(f\in\operatorname{End}_{\mathfrak{s}\mathcal{O}}(n)\),
\[B_{j}(f;f_{1},\dots,f_{j}) =\sum_{k_{0}+\dots+k_{j}=n-j}f(1^{\otimes k_{0}}\otimes f_{1} \otimes 1^{\otimes k_{1}}\otimes\dots\otimes f_{j}\otimes 1^{\otimes k_{j}}).\]
The \(1\)'s in the braces are identity maps. In the above definition, \(|f|\) denotes the degree of \(f\) as an element of \(\operatorname{End}_{\mathfrak{s}\mathcal{O}}\), which is the same as the degree of \(\overline{\sigma}(f)\in\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\) because \(\overline{\sigma}\) is an isomorphism, as mentioned in Equation (9).
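For instance (a small illustration of the definition above), if \(f\in\operatorname{End}_{\mathfrak{s}\mathcal{O}}(3)\) and \(g\) is a single argument, then
\[B_{1}(f;g)=f(g\otimes 1\otimes 1)+f(1\otimes g\otimes 1)+f(1\otimes 1\otimes g),\]
one summand for each decomposition \(k_{0}+k_{1}=2\).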
The inner square of diagram (14) is divided into two halves, so we divide the proof into two as well, showing the commutativity of each half independently.
**Commutativity of the right blue square.**
Let us show now that the right square commutes. Recall that \(\overline{\sigma}\) is an isomorphism of operads and \(M=\overline{\sigma}(M^{\prime})\). Then we have for \(j>1\)
\[\overline{M}^{\prime}_{j}(\overline{\sigma}(f_{1}),\dots,\overline{\sigma}(f_ {j}))=\overline{B}_{j}(M;\overline{\sigma}(f_{1}),\dots,\overline{\sigma}(f_ {j}))=\overline{B}_{j}(\overline{\sigma}(M^{\prime});\overline{\sigma}(f_{1}), \dots,\overline{\sigma}(f_{j})).\]
Now, since the brace structure is defined as an operadic composition, it commutes with \(\overline{\sigma}\), so
\[\overline{B}_{j}(\overline{\sigma}(M^{\prime});\overline{\sigma}(f_{1}), \dots,\overline{\sigma}(f_{j}))=\overline{\sigma}(B_{j}(M^{\prime};f_{1}, \dots,f_{j}))=\overline{\sigma}(\mathcal{M}_{j}(f_{1},\dots,f_{j})),\]
and therefore the right blue square commutes for \(j>1\). For \(j=1\) the result follows analogously.
The proof that the left blue square commutes consists of several lengthy calculations so we are going to devote the next section to that. However, it is worth noting that the commutativity of the left square does not depend on the particular operad \(\mathfrak{s}\mathcal{O}\), so it is still valid if \(m\) satisfies \(m\circ m=0\) for any circle operation defined in terms of insertions. This is essentially the original statement in [10].
### Commutativity of the left blue square
We are going to show here that the left blue square in diagram (14) commutes, i.e. that
\[\Phi^{\prime}(M^{\prime}_{j})=\mathcal{M}_{j}((\Phi^{\prime})^{\otimes j}) \tag{15}\]
for all \(j\geq 1\). First we prove the case \(j>1\). Let \(x_{1},\ldots,x_{j}\in\mathfrak{s}\mathcal{O}\). We have on the one hand
\[\Phi^{\prime}(M^{\prime}_{j}(x_{1},\ldots,x_{j})) =\Phi^{\prime}(b_{j}(m;x_{1},\ldots,x_{j}))=\sum_{n\geq 0}b_{n}(b_{j} (m;x_{1},\ldots,x_{j});-)\] \[=\sum_{n}\sum_{l}\sum b_{l}(m;-,b_{i_{1}}(x_{1};-),\cdots,b_{i_{ j}}(x_{j};-),-)\]
where \(l=n-(i_{1}+\cdots+i_{j})+j\). The sum with no subindex runs over all the possible order-preserving insertions. Note that \(l\geq j\). Evaluating the above map on elements would yield Koszul signs coming from the brace relation. Also recall from Lemma 5.1 that \(|b_{j}(x;-)|=|x|\). Now, fix some value of \(l\geq j\) and let us compute the \(M^{\prime}_{l}\) component of
\[\mathcal{M}_{j}(\Phi^{\prime}(x_{1}),\ldots,\Phi^{\prime}(x_{j}))=B_{j}(M^{ \prime};\Phi^{\prime}(x_{1}),\ldots,\Phi^{\prime}(x_{j}))\]
that is, \(B_{j}(M^{\prime}_{l};\Phi^{\prime}(x_{1}),\ldots,\Phi^{\prime}(x_{j}))\). By definition, this equals
\[\sum M^{\prime}_{l}(-,\Phi^{\prime}(x_{1}),\cdots,\Phi^{\prime}( x_{j}),-) =\sum_{i_{1},\ldots,i_{j}}\sum M^{\prime}_{l}(-,b_{i_{1}}(x_{1};-),\cdots,b_{i_{j}}(x_{j};-),-)\] \[=\sum_{i_{1},\ldots,i_{j}}\sum b_{l}(m;-,b_{i_{1}}(x_{1};-), \cdots,b_{i_{j}}(x_{j};-),-).\]
We are using hyphens instead of \(1\)'s to make the equality of both sides of the Equation (15) more apparent, and to make clear that when evaluating on elements those are the places where the elements go.
For each tuple \((i_{1},\ldots,i_{j})\) we can choose \(n\) such that \(n-(i_{1}+\cdots+i_{j})+j=l\), so the above sum equals
\[\sum_{\begin{subarray}{c}n,i_{1},\ldots,i_{j}\\ n-(i_{1}+\cdots+i_{j})+j=l\end{subarray}}b_{l}(m;-,b_{i_{1}}(x_{1};-),\cdots,b _{i_{j}}(x_{j};-),-).\]
So each \(M^{\prime}_{l}\) component for \(l\geq j\) produces precisely the terms \(b_{l}(m;\dots)\) appearing in \(\Phi^{\prime}(M^{\prime}_{j})\). Conversely, for every \(n\geq 0\) there exist some tuple \((i_{1},\dots,i_{j})\) and some \(l\geq j\) such that \(n-(i_{1}+\dots+i_{j})+j=l\), so we do get all the summands from the left hand side of Equation (15), and thus we have the equality \(\Phi^{\prime}(M^{\prime}_{j})=\mathcal{M}_{j}((\Phi^{\prime})^{\otimes j})\) for all \(j>1\).
It is worth treating the case \(n=0\) separately since in that case we have the summand \(b_{0}(b_{j}(m;x_{1},\dots,x_{j}))\) in \(\Phi^{\prime}(b_{j}(m;x_{1},\dots,x_{j}))\), where we cannot apply the brace relation. This summand is equal to
\[B_{j}(M^{\prime}_{j};b_{0}(x_{1}),\dots,b_{0}(x_{j}))=M^{\prime}_{j}(b_{0}(x_{ 1}),\dots,b_{0}(x_{j}))=b_{j}(m;b_{0}(x_{1}),\dots,b_{0}(x_{j})),\]
since by definition \(b_{0}(x)=x\).
Now we are going to show the case \(j=1\), that is
\[\Phi^{\prime}(M^{\prime}_{1}(x))=\mathcal{M}_{1}(\Phi^{\prime}(x)). \tag{16}\]
This is going to be divided into two parts, since \(M^{\prime}_{1}\) has two clearly distinct summands, one of them consisting of braces of the form \(b_{l}(m;\cdots)\) (insertions in \(m\)) and another one consisting of braces of the form \(b_{l}(x;\cdots)\) (insertions in \(x\)). We will therefore show that both types of braces cancel on each side of Equation (16).
#### Insertions in \(m\)
Let us first focus on the insertions in \(m\) that appear in Equation (16). Recall that
\[\Phi^{\prime}(M^{\prime}_{1}(x))=\Phi^{\prime}([m,x])=\Phi^{\prime}(b_{1}(m;x ))-(-1)^{|x|}\Phi^{\prime}(b_{1}(x;m)) \tag{17}\]
so we focus on the first summand
\[\Phi^{\prime}(b_{1}(m;x))=\sum_{n}b_{n}(b_{1}(m;x);-)=\sum_{n}\sum_{\begin{subarray}{c}i\\ n\geq i\end{subarray}}\sum b_{n-i+1}(m;-,b_{i}(x;-),-)\] \[=\sum_{\begin{subarray}{c}n,i\\ n-i+1>0\end{subarray}}\sum b_{n-i+1}(m;-,b_{i}(x;-),-)\]
where the sum with no indices runs over all the positions in which \(b_{i}(x;-)\) can be inserted (from \(1\) to \(n-i+1\) in this case).
On the other hand, since \(|\Phi^{\prime}(x)|=|x|\), the right hand side of Equation (16) becomes
\[\mathcal{M}_{1}(\Phi^{\prime}(x))=B_{1}(M^{\prime};\Phi^{\prime}(x))-(-1)^{|x |}B_{1}(\Phi^{\prime}(x);M^{\prime}). \tag{18}\]
Again, we are focusing now on the first summand, but with the exception of the part of \(M^{\prime}_{1}\) that corresponds to \(b_{1}(\Phi^{\prime}(x);m)\). From here the argument is a particular case of the proof for \(j>1\), so the terms of the form \(b_{l}(m;\cdots)\) are the same on both sides of Equation (16).
#### Insertions in \(x\)

And now, let us study the insertions in \(x\) that appear in Equation (16). We will check that insertions in \(x\) from the left hand side and right hand side cancel. Let us look first at the left hand side. From \(\Phi^{\prime}(M_{1}^{\prime}(x))\) in Equation (17) we had
\[-(-1)^{|x|}\Phi^{\prime}(b_{1}(x;m))=-(-1)^{|x|}\sum_{n}b_{n}(b_{1}(x;m);-).\]
The factor \(-(-1)^{|x|}\) is going to appear everywhere, so we may cancel it. Thus we just have
\[\Phi^{\prime}(b_{1}(x;m))=\sum_{n}b_{n}(b_{1}(x;m);-).\]
We are going to evaluate each term of the sum, so let \(z_{1},\ldots,z_{n}\in\mathfrak{s}\mathcal{O}\). We have by the brace relation that
\[b_{n}(b_{1}(x;m);z_{1},\ldots,z_{n})=\sum_{l+j=n+1}\sum_{i=1}^{n -j+1}(-1)^{\varepsilon}b_{l}(x;z_{1},\ldots,b_{j}(m;z_{i},\ldots,z_{i+j}), \ldots,z_{n}) \tag{19}\] \[+\sum_{i=1}^{n+1}(-1)^{\varepsilon}b_{n+1}(x;z_{1},\ldots,z_{i- 1},m,z_{i},\ldots,z_{n}),\]
where \(\varepsilon\) is the usual Koszul sign with respect to the grading in \(\mathfrak{s}\mathcal{O}\). We have to check that the insertions in \(x\) that appear in \(\mathcal{M}_{1}(\Phi^{\prime}(x))\) (right hand side of the eq. (16)) are exactly those in Equation (19) above (left hand side of eq. (16)).
Therefore let us look at the right hand side of Equation (16). Here we will study the cancellations from each of the two summands that naturally appear. From Equation (18), i.e. \(\mathcal{M}_{1}(\Phi^{\prime}(x))=B_{1}(M^{\prime};\Phi^{\prime}(x))-(-1)^{| x|}B_{1}(\Phi^{\prime}(x);M^{\prime})\) we have
\[-(-1)^{|x|}b_{1}(\Phi^{\prime}(x);m)=-(-1)^{|x|}\sum_{n}b_{1}(b_{n}(x;-);m)\]
coming from the first summand since \(B_{1}(M_{1}^{\prime};\Phi^{\prime}(x))=M_{1}^{\prime}(\Phi^{\prime}(x))\). We are now only interested in insertions in \(x\). Again, cancelling \(-(-1)^{|x|}\) we get
\[b_{1}(\Phi^{\prime}(x);m)=\sum_{n}b_{1}(b_{n}(x;-);m).\]
Each term of the sum can be evaluated on \((z_{1},\ldots,z_{n})\) to produce
\[b_{1}(b_{n}(x;z_{1},\ldots,z_{n});m)=\] \[\sum_{i=1}^{n}(-1)^{\varepsilon+|z_{i}|}b_{n}(x;z_{1},\ldots,b_{ 1}(z_{i};m),\ldots,z_{n})+\sum_{i=1}^{n+1}(-1)^{\varepsilon}b_{n+1}(x;z_{1}, \ldots,z_{i-1},m,z_{i},\ldots,z_{n}) \tag{20}\]
Note that we have to apply the Koszul sign rule twice: once at evaluation, and once more to apply the brace relation. Now, from the second summand of \(\mathcal{M}_{1}(\Phi^{\prime}(x))\) in the right hand side of eq. (18), after cancelling \(-(-1)^{|x|}\) we obtain
\[B_{1}(\Phi^{\prime}(x);M^{\prime})= \sum_{l}B_{1}(b_{l}(x;-);M^{\prime})=\sum_{l}\sum b_{l}(x;-,M^{ \prime},-)\] \[= \left(\sum_{j>1}\sum_{l}\sum b_{l}(x;-,b_{j}(m;-),-)+\sum_{l}\sum b _{l}(x;-,b_{1}(-;m),-)\right).\]
We are going to evaluate on \((z_{1},\ldots,z_{n})\) to make this map more explicit, giving us
\[\sum_{l+j=n+1}\ \sum_{i=1}^{n-j+1}(-1)^{\varepsilon}b_{l}(x;z_{1}, \ldots,b_{j}(m;z_{i},\ldots,z_{i+j}),\ldots,z_{n})\] \[\qquad\qquad-\sum_{i=1}^{n}(-1)^{\varepsilon+|z_{i}|}b_{n}(x;z_{ 1},\ldots,b_{1}(z_{i};m),\ldots,z_{n}). \tag{21}\]
The minus sign comes from the fact that \(b_{1}(z_{i};m)\) comes from \(M^{\prime}_{1}(z_{i})\), so we apply the signs in the definition of \(M^{\prime}_{1}(z_{i})\). We therefore have that the right hand side of eq. (18) is the result of adding equations (20) and (21). After this addition we can see that the first sum of eq. (20) cancels the second sum of eq. (21).
We also have that the second sum in eq. (20) is the same as the second sum in eq. (19), so we are left with only the first sum of eq. (21). This is the same as the first sum in eq. (19), so we have shown that the equation \(\Phi^{\prime}(M^{\prime}_{1})=\mathcal{M}_{1}(\Phi^{\prime})\) holds.
In the case \(n=0\), we have to note that \(B_{1}(b_{0}(x);m)\) vanishes because of arity reasons: \(b_{0}(x)\) is a map of arity \(0\), so we cannot insert any inputs. And this finishes the proof.
### Explicit \(A_{\infty}\)-algebra structure and Deligne conjecture
We have given an implicit definition of the components of the \(A_{\infty}\)-algebra structure on \(S\mathfrak{s}\mathcal{O}\), namely,
\[M_{j}=\overline{\sigma}(M^{\prime}_{j})=(-1)^{\binom{j}{2}}S\circ M^{\prime}_ {j}\circ(S^{-1})^{\otimes j},\]
but it is useful to have an explicit expression that determines how it is evaluated on elements of \(S\mathfrak{s}\mathcal{O}\). We will need these explicit expressions to describe \(J\)-algebras, which are an \(A_{\infty}\)-version of homotopy \(G\)-algebras. This way we can state the \(A_{\infty}\)-Deligne conjecture in a more precise way. These explicit formulas will also clear up the connection with the work of Gerstenhaber and Voronov. We hope that these explicit expressions can be useful to perform calculations in other mathematical contexts where \(A_{\infty}\)-algebras are used.
**Lemma 5.8**.: _For \(x,x_{1},\ldots,x_{n}\in\mathfrak{s}\mathcal{O}\), we have the following expressions._
\[M_{n}(Sx_{1},\ldots,Sx_{n})=(-1)^{\sum_{i=1}^{n}(n-i)|x_{i}|}Sb_{ n}(m;x_{1},\ldots,x_{n})\qquad\qquad n>1,\] \[M_{1}(Sx)=Sb_{1}(m;x)-(-1)^{|x|}Sb_{1}(x;m).\]
_Here \(|x|\) is the degree of \(x\) as an element of \(\mathfrak{s}\mathcal{O}\), i.e. the natural degree._
Proof.: The deduction of these explicit formulas is done as follows. Let \(n>1\) and \(x_{1},\ldots,x_{n}\in\mathsf{s}\mathcal{O}\). Then
\[M_{n}(Sx_{1},\ldots,Sx_{n}) =SM_{n}^{\prime}((S^{\otimes n})^{-1})(Sx_{1},\ldots,Sx_{n})\] \[=(-1)^{\binom{n}{2}}SM_{n}^{\prime}((S^{-1})^{\otimes n})(Sx_{1}, \ldots,Sx_{n})\] \[=(-1)^{\binom{n}{2}+\sum_{i=1}^{n}(n-i)(|x_{i}|+1)}SM_{n}^{\prime }(S^{-1}Sx_{1},\ldots,S^{-1}Sx_{n})\] \[=(-1)^{\binom{n}{2}+\sum_{i=1}^{n}(n-i)(|x_{i}|+1)}SM_{n}^{\prime }(x_{1},\ldots,x_{n})\]
Now, note that \(\binom{n}{2}\) is even exactly when \(n\equiv 0,1\mod 4\). In these cases, an even number of the \(|x_{i}|\) have an odd coefficient in the sum (when \(n\equiv 0\mod 4\) these are the \(|x_{i}|\) with even index, and when \(n\equiv 1\mod 4\), the \(|x_{i}|\) with odd index). This means that \(1\) is added to the exponent an even number of times, so the sign is changed neither by the binomial coefficient nor by adding \(1\) to each term. Similarly, when \(\binom{n}{2}\) is odd, i.e. when \(n\equiv 2,3\mod 4\), there is an odd number of \(|x_{i}|\) with odd coefficient, so the addition of \(1\) an odd number of times cancels the binomial coefficient. This means that the above expression equals \((-1)^{\sum_{i=1}^{n}(n-i)|x_{i}|}SM_{n}^{\prime}(x_{1},\ldots,x_{n})\), which by definition equals
\[(-1)^{\sum_{i=1}^{n}(n-i)|x_{i}|}Sb_{n}(m;x_{1},\ldots,x_{n}).\]
The case \(n=1\) is analogous since \(\overline{\sigma}\) is linear.
It is possible to show that the maps defined explicitly as we have just done satisfy the \(A_{\infty}\)-equation without relying on the fact that \(\overline{\sigma}\) is a map of operads, but it is a lengthy and tedious calculation.
_Remark 5.9_.: In the case \(n=2\), omitting the shift symbols by abuse of notation, we obtain
\[M_{2}(x,y)=(-1)^{|x|}b_{2}(m;x,y).\]
Let \(M_{2}^{GV}\) be the product defined in [10] as
\[M_{2}^{GV}(x,y)=(-1)^{|x|+1}b_{2}(m;x,y).\]
We see that \(M_{2}=-M_{2}^{GV}\). Since the authors of [10] work in the associative case \(m=m_{2}\), this minus sign does not affect the \(A_{\infty}\)-relation, which in this case reduces to the associativity and differential relations. This difference in sign can be explained by the difference between \((S^{\otimes n})^{-1}\) and \((S^{-1})^{\otimes n}\), since any of these maps can be used to define a map \((S\mathsf{s}\mathcal{O})^{\otimes n}\to\mathsf{s}\mathcal{O}^{\otimes n}\).
Now that we have the explicit formulas for the \(A_{\infty}\)-structure on \(S\mathsf{s}\mathcal{O}\) we can state and prove an \(A_{\infty}\)-version of the Deligne conjecture. Let us first re-adapt the definition of homotopy \(G\)-algebra from [10, Definition 2] to our conventions.
**Definition 5.10**.: _A homotopy \(G\)-algebra is a differential graded algebra \(V\) with a differential \(M_{1}\) and a product \(M_{2}\) such that the shift \(S^{-1}V\) is a brace algebra with brace maps \(b_{n}\). The differential and the product must satisfy the following compatibility identities. Let \(x,x_{1},x_{2},y_{1},\ldots,y_{n}\in S^{-1}V\). We demand_
\[Sb_{n}(S^{-1}M_{2}(Sx_{1},Sx_{2});y_{1},\ldots,y_{n})=\] \[\sum_{k=0}^{n}(-1)^{(|x_{2}|+1)\sum_{i=1}^{k}|y_{i}|}M_{2}(b_{k}( x_{1};y_{1},\ldots,y_{k}),b_{n-k}(x_{2};y_{k+1},\ldots,y_{n}))\]
_and_
\[Sb_{n}(S^{-1}M_{1}(Sx);y_{1},\ldots,y_{n})-M_{1}(Sb_{n}(x;y_{1},\ldots,y_{n}))\] \[-(-1)^{|x|+1}\sum_{p=1}^{n}(-1)^{\sum_{i=1}^{p}|y_{i}|}Sb_{n}(x;y_{1},\ldots,M_{1}(Sy_{p}),\ldots,y_{n})\] \[= -(-1)^{(|x|+1)|y_{1}|}M_{2}(Sy_{1},Sb_{n-1}(x;y_{2},\ldots,y_{n}))\] \[+(-1)^{|x|+1}\sum_{p=1}^{n-1}(-1)^{n-1+\sum_{i=1}^{p}|y_{i}|}Sb_{n-1}(x;y_{1},\ldots,M_{2}(Sy_{p},Sy_{p+1}),\ldots,y_{n})\] \[-(-1)^{|x|+\sum_{i=1}^{n-1}|y_{i}|}M_{2}(Sb_{n-1}(x;y_{1},\ldots,y_{n-1}),Sy_{n})\]
Notice that our signs are slightly different to those in [10] as a consequence of our conventions. Our signs will be a particular case of those in Definition 5.11, which are set so that Corollary 5.12 holds in a consistent way with operadic suspension and all the shifts that the authors of [10] do not consider.
We now introduce \(J\)-algebras as an \(A_{\infty}\)-generalization of homotopy \(G\)-algebras. This will allow us to generalize the Deligne conjecture to the \(A_{\infty}\)-setting.
**Definition 5.11**.: _A \(J\)-algebra \(V\) is an \(A_{\infty}\)-algebra with structure maps \(\{M_{j}\}_{j\geq 1}\) such that the shift \(S^{-1}V\) is a brace algebra. Furthermore, the braces and the \(A_{\infty}\)-structure satisfy the following compatibility relations. Let \(x,x_{1},\ldots,x_{j},y_{1},\ldots,y_{n}\in S^{-1}V\). For \(n\geq 0\) we demand_
\[(-1)^{\sum_{i=1}^{n}(n-i)|y_{i}|}Sb_{n}(S^{-1}M_{1}(Sx);y_{1}, \ldots,y_{n})=\] \[\sum_{\begin{subarray}{c}l+k-1=n\\ 1\leq i_{1}\leq n-k+1\end{subarray}} (-1)^{\varepsilon}M_{l}(Sy_{1},\ldots,Sb_{k}(x;y_{i_{1}}, \ldots),\ldots,Sy_{n})\] \[-(-1)^{|x|}\hskip-14.226378pt\sum_{\begin{subarray}{c}l+k-1=n\\ 1\leq i_{1}\leq n-k+1\end{subarray}} (-1)^{\eta}Sb_{k}(x;y_{1},\ldots,S^{-1}M_{l}(Sy_{i_{1}}, \ldots),\ldots,y_{n})\]
_where_
\[\varepsilon=\sum_{v=1}^{i_{1}-1}|y_{v}|(|x|-k+1)+\sum_{v=1}^{k}|y_{i_{1}+v-1}|(k- v)+(l-i_{1})|x|.\]
_and_
\[\eta=\sum_{v=1}^{i_{1}-1}(k-v)|y_{v}|+\sum_{v=1}^{i_{1}-1}l|y_{v}|+\sum_{v=i_{1 }}^{i_{1}+l-1}(k-i_{1})|y_{v}|+\sum_{v=i_{1}}^{n-l}(k-v)|y_{v+l}|\]
_For \(j>1\) we demand_
\[(-1)^{\sum_{i=1}^{n}(n-i)|y_{i}|}Sb_{n}(S^{-1}M_{j}(Sx_{1},\ldots,Sx _{j});y_{1},\ldots,y_{n})=\] \[\sum(-1)^{\varepsilon}M_{l}(Sy_{1},\ldots,Sb_{k_{1}}(x_{1};y_{i_{ 1}},\ldots),\ldots,Sb_{k_{j}}(x_{j};y_{i_{j}},\ldots),\ldots,Sy_{n}).\]
_The unindexed sum runs over all possible choices of non-negative integers that satisfy \(l+k_{1}+\cdots+k_{j}-j=n\) and over all possible ordering-preserving insertions. The right hand side sign is given by_
\[\varepsilon=\sum_{\begin{subarray}{c}1\leq t\leq j\\ 1\leq v\leq k_{t}\end{subarray}}|y_{i_{t}+v-1}|(k_{t}-v)+\cdots\]

## 6. Derived \(A_{\infty}\)-algebras

Classical \(A_{\infty}\)-algebras behave best over a field or for projective modules, and problems appear in any theory where projectivity cannot be guaranteed. In 2008, Sagave introduced the notion of derived \(A_{\infty}\)-algebras, providing a framework for not necessarily projective modules over an arbitrary commutative ground ring [10].
In this section we recall some definitions and results about derived \(A_{\infty}\)-algebras and present some new ways of interpreting them in terms of operads and collections. We also recall the notion of filtered \(A_{\infty}\)-algebra, since it will play a role in obtaining derived \(A_{\infty}\)-algebras from \(A_{\infty}\)-algebras on totalization.
### Derived \(A_{\infty}\)-algebras
In the following definition we use the notation in [11].
**Definition 6.1**.: _A derived \(A_{\infty}\)-algebra on a \((\mathbb{Z},\mathbb{Z})\)-bigraded \(R\)-module \(A\) consists of a family of \(R\)-linear maps_
\[m_{ij}:A^{\otimes j}\to A\]
_of bidegree \((i,2-(i+j))\) for each \(j\geq 1\), \(i\geq 0\), satisfying the equation_
\[\sum_{\begin{subarray}{c}u=i+p,v=j+q-1\\ j=r+1+t\end{subarray}}(-1)^{rq+t+pj}m_{ij}(1^{\otimes r}\otimes m_{pq}\otimes 1 ^{\otimes t})=0 \tag{22}\]
_for all \(u\geq 0\) and \(v\geq 1\)._
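For orientation, the lowest instances of Equation (22) unwind as follows (a direct specialization; the reader may check the signs \((-1)^{rq+t+pj}\)): for \((u,v)=(0,1)\), \((1,1)\) and \((0,2)\) one obtains
\[m_{01}m_{01}=0,\qquad m_{11}m_{01}-m_{01}m_{11}=0,\qquad m_{01}m_{02}-m_{02}(m_{01}\otimes 1)-m_{02}(1\otimes m_{01})=0,\]
so \(m_{01}\) is a differential, \(m_{11}\) commutes with it and \(m_{02}\) is a chain map with respect to it.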
According to the above definition, there are two equivalent ways of defining the operad of derived \(A_{\infty}\)-algebras \(d\mathcal{A}_{\infty}\) depending on the underlying category. One of them works on the category of bigraded modules \(\mathrm{bgMod}_{R}\) and the other one is suitable for the category of vertical bicomplexes \(\mathrm{vbC}_{R}\). We give the two of them here as we are going to use both.
**Definition 6.2**.: _The operad \(d\mathcal{A}_{\infty}\) in \(\mathrm{bgMod}_{R}\) is the operad generated by \(\{m_{ij}\}_{i\geq 0,j\geq 1}\) subject to the derived \(A_{\infty}\)-relation_
\[\sum_{\begin{subarray}{c}u=i+p,v=j+q-1\\ j=r+1+t\end{subarray}}(-1)^{rq+t+pj}\gamma(m_{ij};1^{r},m_{pq},1^{t})=0\]
_for all \(u\geq 0\) and \(v\geq 1\)._
_The operad \(d\mathcal{A}_{\infty}\) in \(\mathrm{vbC}_{R}\) is the quasi-free operad generated by \(\{m_{ij}\}_{(i,j)\neq(0,1)}\) with vertical differential given by_
\[\partial_{\infty}(m_{uv})=-\sum_{\begin{subarray}{c}u=i+p,v=j+q-1\\ j=r+1+t,(i,j)\neq(0,1)\neq(p,q)\end{subarray}}(-1)^{rq+t+pj}\gamma(m_{ij};1^{ r},m_{pq},1^{t}).\]
**Definition 6.3**.: _Let \(A\) and \(B\) be derived \(A_{\infty}\)-algebras with respective structure maps \(m^{A}\) and \(m^{B}\). An \(\infty\)-morphism of derived \(A_{\infty}\)-algebras \(f:A\to B\) is a family of maps
\(f_{st}:A^{\otimes t}\to B\) of bidegree \((s,1-s-t)\) satisfying_
\[\sum_{\begin{subarray}{c}u=i+p,v=j+q-1\\ j=r+1+t\end{subarray}}(-1)^{rq+t+pj}f_{ij}(1^{\otimes r}\otimes m_{pq}^{A}\otimes 1^{\otimes t})=\sum_{\begin{subarray}{c}u=i+p_{1}+\dots+p_{j}\\ v=q_{1}+\dots+q_{j}\end{subarray}}(-1)^{\epsilon}m_{ij}^{B}(f_{p_{1}q_{1}}\otimes\dots\otimes f_{p_{j}q_{j}}) \tag{23}\]
_for all \(u\geq 0\) and \(v\geq 1\), where_
\[\epsilon=u+\sum_{1\leq w<l\leq j}q_{w}(1-p_{l}-q_{l})+\sum_{w=1}^{j}p_{w}(j-w).\]
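In the lowest case \((u,v)=(0,1)\), Equation (23) reduces (a quick check of the signs) to
\[f_{01}\,m^{A}_{01}=m^{B}_{01}\,f_{01},\]
that is, the component \(f_{01}\) commutes with the vertical differentials.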
**Example 6.4**.:
1. _An_ \(A_{\infty}\)_-algebra is the same as a derived_ \(A_{\infty}\)_-algebra such that_ \(m_{ij}=0\) _for all_ \(i>0\)_._
2. _One can check that, on any derived_ \(A_{\infty}\)_-algebra_ \(A\)_, the maps_ \(d_{i}=(-1)^{i}m_{i1}\) _define a twisted complex structure. This leads to the possibility of defining a derived_ \(A_{\infty}\)_-algebra as a twisted complex with some extra structure, see Remark_ 8.4_._
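To illustrate item (2) in the lowest non-trivial weight (a direct check of Equation (22) for \((u,v)=(2,1)\)):
\[m_{21}m_{01}-m_{11}m_{11}+m_{01}m_{21}=0,\qquad\text{i.e.}\qquad d_{0}d_{2}-d_{1}d_{1}+d_{2}d_{0}=0\quad\text{for }d_{i}=(-1)^{i}m_{i1}.\]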
Analogously to Definition 3.5, we have the following.
**Definition 6.5**.: _A derived \(A_{\infty}\)-multiplication on a bigraded operad \(\mathcal{O}\) is a map of operads \(d\mathcal{A}_{\infty}\to\mathcal{O}\)._
### Filtered \(A_{\infty}\)-algebras
We will make use of the filtration induced by the totalization functor in order to relate classical \(A_{\infty}\)-algebras to derived \(A_{\infty}\)-algebras. For this reason, we recall the notion of filtered \(A_{\infty}\)-algebras.
**Definition 6.6**.: _A filtered \(A_{\infty}\)-algebra is an \(A_{\infty}\)-algebra \((A,m_{i})\) together with a filtration \(\{F_{p}A^{i}\}_{p\in\mathbb{Z}}\) on each \(R\)-module \(A^{i}\) such that for all \(i\geq 1\) and all \(p_{1},\dots,p_{i}\in\mathbb{Z}\) and \(n_{1},\dots,n_{i}\geq 0\),_
\[m_{i}(F_{p_{1}}A^{n_{1}}\otimes\dots\otimes F_{p_{i}}A^{n_{i}})\subseteq F_{ p_{1}+\dots+p_{i}}A^{n_{1}+\dots+n_{i}+2-i}.\]
_Remark 6.7_.: Consider \(\mathcal{A}_{\infty}\) as an operad in filtered complexes with the trivial filtration and let \(K\) be a filtered complex. There is a one-to-one correspondence between filtered \(A_{\infty}\)-algebra structures on \(K\) and morphisms of operads in filtered complexes \(\mathcal{A}_{\infty}\to\underline{\mathrm{End}}_{K}\) (recall \(\underline{\mathrm{Hom}}\) from Definition 2.8). To see this, notice that if one forgets the filtrations such a map of operads gives an \(A_{\infty}\)-algebra structure on \(K\). The fact that this is a map of operads in filtered complexes implies that all the \(m_{i}\)'s respect the filtrations.
Since the image of \(\mathcal{A}_{\infty}\) lies in \(\mathrm{End}_{K}=F_{0}\underline{\mathrm{End}}_{K}\), if we regard \(\mathcal{A}_{\infty}\) as an operad in cochain complexes, then we get a one-to-one correspondence between filtered \(A_{\infty}\)-algebra structures on \(K\) and morphisms of operads in cochain complexes \(\mathcal{A}_{\infty}\to\mathrm{End}_{K}\).
**Definition 6.8**.: _A morphism of filtered \(A_{\infty}\)-algebras from \((A,m_{i},F)\) to \((B,m_{i},F)\) is an \(\infty\)-morphism \(f:(A,m_{i})\to(B,m_{i})\) of \(A_{\infty}\)-algebras such that each map \(f_{j}:A^{\otimes j}\to B\) is compatible with filtrations, i.e._
\[f_{j}(F_{p_{1}}A^{n_{1}}\otimes\cdots\otimes F_{p_{j}}A^{n_{j}})\subseteq F_{p _{1}+\cdots+p_{j}}B^{n_{1}+\cdots+n_{j}+1-j},\]
_for all \(j\geq 1\), \(p_{1},\ldots p_{j}\in\mathbb{Z}\) and \(n_{1},\ldots,n_{j}\geq 0\)._
We will study the notions from this section from an operadic point of view. For this purpose we introduce some useful constructions in the next section.
## 7. Operadic totalization and vertical operadic suspension
In this section we apply the totalization functor defined in Section 2.2.3 to operads, defining a functor from operads in bigraded modules (resp. twisted complexes) to operads in graded modules (resp. cochain complexes). We also define a bigraded version of operadic suspension. The combination of these two devices provides the signs required to encode derived \(A_{\infty}\)-algebras in a very concise and practical way, similar to what we achieve for classical \(A_{\infty}\)-algebras in Section 3.
### Operadic totalization
We use Proposition 2.21 and the fact that the image of an operad under a lax monoidal functor is also an operad [11, Proposition 3.1.1(a)] to guarantee that applying totalization on an operad will result again in an operad.
Therefore, let \(\mathcal{O}\) be either a bigraded operad, i.e. an operad in the category of bigraded \(R\)-modules, or an operad in twisted complexes. We define \(\operatorname{Tot}(\mathcal{O})\) as the operad of graded \(R\)-modules (or cochain complexes) for which
\[\operatorname{Tot}(\mathcal{O}(n))^{d}=\bigoplus_{i<0}\mathcal{O}(n)_{i}^{d-i }\oplus\prod_{i\geq 0}\mathcal{O}(n)_{i}^{d-i}\]
is the image of \(\mathcal{O}(n)\) under the totalization functor, and the insertion maps are given by the composition
\[\operatorname{Tot}(\mathcal{O}(n))\otimes\operatorname{Tot}(\mathcal{O}(m)) \xrightarrow{\mu}\operatorname{Tot}(\mathcal{O}(n)\otimes\mathcal{O}(m)) \xrightarrow{\operatorname{Tot}(\circ_{r})}\operatorname{Tot}(\mathcal{O}(n +m-1)), \tag{24}\]
that is explicitly
\[(x\bar{\circ}_{r}y)_{k}=\sum_{k_{1}+k_{2}=k}(-1)^{k_{1}d_{2}}x_{k_{1}}\circ_{r }y_{k_{2}}\]
for \(x=(x_{i})_{i}\in\operatorname{Tot}(\mathcal{O}(n))^{d_{1}}\) and \(y=(y_{j})_{j}\in\operatorname{Tot}(\mathcal{O}(m))^{d_{2}}\).
More generally, operadic composition \(\bar{\gamma}\) is defined by the composite
\[\operatorname{Tot}(\mathcal{O}(N))\otimes\operatorname{Tot}( \mathcal{O}(a_{1}))\otimes\cdots\otimes\operatorname{Tot}(\mathcal{O}(a_{N}))\] \[\xrightarrow{\mu}\operatorname{Tot}(\mathcal{O}(N)\otimes \mathcal{O}(a_{1})\otimes\cdots\otimes\mathcal{O}(a_{N}))\xrightarrow{ \operatorname{Tot}(\gamma)}\operatorname{Tot}\left(\mathcal{O}\left(\sum a _{i}\right)\right),\]
This map can be computed explicitly by iteration of the insertion \(\bar{\circ}\), giving the following.
**Lemma 7.1**.: _The operadic composition \(\bar{\gamma}\) on \(\operatorname{Tot}(\mathcal{O})\) is given by_
\[\bar{\gamma}(x;x^{1},\dots,x^{N})_{k}=\sum_{k_{0}+k_{1}+\dots+k_{N}=k}(-1)^{ \varepsilon}\gamma(x_{k_{0}};x^{1}_{k_{1}},\dots,x^{N}_{k_{N}})\]
_for \(x=(x_{k})_{k}\in\operatorname{Tot}(\mathcal{O}(N))^{d_{0}}\) and \(x^{i}=(x^{i}_{k})_{k}\in\operatorname{Tot}(\mathcal{O}(a_{i}))^{d_{i}}\), where_
\[\varepsilon=\sum_{j=1}^{m}d_{j}\sum_{i=0}^{j-1}k_{i} \tag{25}\]
_and \(\gamma\) is the operadic composition on \(\mathcal{O}\)._
Notice that the sign is precisely the same appearing in Equation (3).
### Vertical operadic suspension and totalization
On a bigraded operad we can use operadic suspension on the vertical degree, with results analogous to those of the graded case that we explored in Section 3.
We define \(\Lambda(n)=S^{n-1}R\), where \(S\) is a vertical degree shift, so that \(\Lambda(n)\) is the underlying ring \(R\) concentrated in bidegree \((0,n-1)\). As in the single-graded case, we express the basis element of \(\Lambda(n)\) as \(e^{n}=e_{1}\wedge\dots\wedge e_{n}\).
The operad structure on the bigraded \(\Lambda=\{\Lambda(n)\}_{n\geq 0}\) is the same as in the graded case, see Equation (7).
**Definition 7.2**.: _Let \(\mathcal{O}\) be a bigraded linear operad. The vertical operadic suspension \(\mathfrak{s}\mathcal{O}\) of \(\mathcal{O}\) is given arity-wise by \(\mathfrak{s}\mathcal{O}(n)=\mathcal{O}(n)\otimes\Lambda(n)\) with diagonal composition. Similarly, we define the vertical operadic desuspension \(\mathfrak{s}^{-1}\mathcal{O}(n)=\mathcal{O}(n)\otimes\Lambda^{-}(n)\)._
We may identify the elements of \(\mathcal{O}\) with the elements of \(\mathfrak{s}\mathcal{O}\).
**Definition 7.3**.: _For \(x\in\mathcal{O}(n)\) of bidegree \((k,d-k)\), its natural bidegree in \(\mathfrak{s}\mathcal{O}\) is the pair \((k,d+n-k-1)\). To distinguish both degrees we call \((k,d-k)\) the internal bidegree of \(x\), since this is the degree that \(x\) inherits from the grading of \(\mathcal{O}\)._
If we write \(\circ_{r+1}\) for the operadic insertion on \(\mathcal{O}\) and \(\tilde{\circ}_{r+1}\) for the operadic insertion on \(\mathfrak{s}\mathcal{O}\), we may find a relation between the two insertion maps in a completely analogous way to Lemma 3.3.
**Lemma 7.4**.: _For \(x\in\mathcal{O}(n)\) and \(y\in\mathcal{O}(m)^{q}_{l}\) we have_
\[x\tilde{\circ}_{r+1}y=(-1)^{(n-1)q+(n-1)(m-1)+r(m-1)}x\circ_{r+1}y. \tag{26}\]
_Remark 7.5_.: As can be seen, this is the same sign as the single-graded operadic suspension but with vertical degree. In particular, this operation leads to the Lie bracket from [11], which implies that \(m=\sum_{i,j}m_{ij}\) is a derived \(A_{\infty}\)-multiplication if and only if for all \(u\geq 0\)
\[\sum_{i+j=u}\sum_{l,k}(-1)^{i}m_{jl}\tilde{\circ}m_{ik}=0. \tag{27}\]
In [11, Proposition 2.15] this equation is described in terms of a sharp operator \(\sharp\).
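Spelled out in the lowest horizontal weights, Equation (27) gives (a direct specialization)
\[\sum_{l,k}m_{0l}\tilde{\circ}m_{0k}=0\qquad\text{and}\qquad\sum_{l,k}\big(m_{1l}\tilde{\circ}m_{0k}-m_{0l}\tilde{\circ}m_{1k}\big)=0,\]
the first of which says that the horizontal-degree-zero column \(m_{0\bullet}=\sum_{l}m_{0l}\) squares to zero for \(\tilde{\circ}\), in line with Example 6.4(1).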
We also get the bigraded version of Theorem 3.9 and the functorial properties that we studied for the single-graded case in Section 3.1 and Proposition 3.14.
Now we are going to combine vertical operadic suspension and totalization. More precisely, the _totalized vertical suspension_ of a bigraded operad \(\mathcal{O}\) is the graded operad \(\operatorname{Tot}(\mathfrak{s}\mathcal{O})\). This operad has an insertion map explicitly given by
\[(x\star_{r+1}y)_{k}=\sum_{k_{1}+k_{2}=k}(-1)^{(n-1)(d_{2}-k_{2}-m+1)+(n-1)(m-1 )+r(m-1)+k_{1}d_{2}}x_{k_{1}}\circ_{r+1}y_{k_{2}} \tag{28}\]
for \(x=(x_{i})_{i}\in\operatorname{Tot}(\mathfrak{s}\mathcal{O}(n))^{d_{1}}\) and \(y=(y_{j})_{j}\in\operatorname{Tot}(\mathfrak{s}\mathcal{O}(m))^{d_{2}}\). As usual, denote
\[x\star y=\sum_{r=0}^{m-1}x\star_{r+1}y.\]
This star operation is precisely the star operation from [11, §5.1], i.e. the convolution operation on \(\operatorname{Hom}((dAs)^{\text{¡}},\operatorname{End}_{A})\). In particular, we can recover the Lie bracket from [11]. We will do this in Corollary 7.11.
Before continuing, let us show a lemma that allows us to work only with the single-graded operadic suspension if needed.
**Proposition 7.6**.: _For a bigraded operad \(\mathcal{O}\) we have an isomorphism \(\operatorname{Tot}(\mathfrak{s}\mathcal{O})\cong\mathfrak{s}\operatorname{ Tot}(\mathcal{O})\), where the suspension on the left hand side is the bigraded version and on the right hand side is the single-graded version._
Proof.: Note that we may identify each element \(x=(x_{k}\otimes e^{n})_{k}\in\operatorname{Tot}(\mathfrak{s}\mathcal{O}(n))\) with the element \(x=(x_{k})_{k}\otimes e^{n}\in\mathfrak{s}\operatorname{Tot}(\mathcal{O}(n))\). Thus, for an element \((x_{k})_{k}\in\operatorname{Tot}(\mathfrak{s}\mathcal{O}(n))\) the isomorphism is given by
\[f:\operatorname{Tot}(\mathfrak{s}\mathcal{O}(n))\cong\mathfrak{s} \operatorname{Tot}(\mathcal{O}(n)),\,(x_{k})_{k}\mapsto((-1)^{kn}x_{k})_{k}\]
Clearly, this map is bijective so we just need to check that it commutes with insertions. Recall from Equation (28) that the insertion on \(\operatorname{Tot}(\mathfrak{s}\mathcal{O})\) is given by
\[(x\star_{r+1}y)_{k}=\sum_{k_{1}+k_{2}=k}(-1)^{(n-1)(d_{2}-k_{2}-m+1)+(n-1)(m-1)+r(m-1)+k_{1}d_{2}}x_{k_{1}}\circ_{r+1}y_{k_{2}}\]
for \(x=(x_{i})_{i}\in\operatorname{Tot}(\mathfrak{s}\mathcal{O}(n))^{d_{1}}\) and \(y=(y_{j})_{j}\in\operatorname{Tot}(\mathfrak{s}\mathcal{O}(m))^{d_{2}}\). Similarly, we may compute the insertion on \(\mathfrak{s}\mathrm{Tot}(\mathcal{O})\) by combining the sign produced by \(\mathfrak{s}\) with the one produced by \(\operatorname{Tot}\). This results in the following insertion map
\[(x\star_{r+1}^{\prime}y)_{k}=\sum_{k_{1}+k_{2}=k}(-1)^{(n-1)(d_{2}-m+1)+(n-1)(m-1)+r(m-1)+k_{1}(d_{2}-m+1)}x_{k_{1}}\circ_{r+1}y_{k_{2}}.\]
Now let us show that \(f(x\star y)=f(x)\star f(y)\). We have that \(f((x\star_{r+1}y))_{k}\) equals
\[\sum_{k_{1}+k_{2}=k}(-1)^{k(n+m-1)+(n-1)(d_{2}-k_{2}-m+1)+(n-1)(m-1)+r(m-1)+k_{1}d_{2}}x_{k_{1}}\circ_{r+1}y_{k_{2}}\] \[=\sum_{k_{1}+k_{2}=k}(-1)^{(n-1)(d_{2}-m+1)+(n-1)(m-1)+r(m-1)+k_{1}(d_{2}-m+1)}f(x_{k_{1}})\circ_{r+1}f(y_{k_{2}})\] \[=(f(x)\star_{r+1}f(y))_{k}\]
as desired.
_Remark 7.7_.: As we mentioned in Remark 2.22, there exist other possible ways of totalizing by varying the natural transformation \(\mu\). For instance, we can choose the totalization functor \(\operatorname{Tot}^{\prime}\) which is the same as \(\operatorname{Tot}\) but with a natural transformation \(\mu^{\prime}\) defined in such a way that the insertion on \(\operatorname{Tot}^{\prime}(\mathcal{O})\) is defined by
\[(x\diamond y)_{k}=\sum_{k_{1}+k_{2}=k}(-1)^{k_{2}n_{1}}x_{k_{1}}\circ y_{k_{2 }}.\]
This is also a valid approach for our purposes and there is simply a sign difference, but we have chosen our convention to be consistent with other conventions, such as the derived \(A_{\infty}\)-equation. However, it can be verified that \(\operatorname{Tot}^{\prime}(\mathfrak{s}\mathcal{O})=\mathfrak{s}\mathrm{Tot}^{\prime}(\mathcal{O})\). With the original totalization we have a non-identity isomorphism given by Proposition 7.6. Similar relations can be found among the other alternatives mentioned in Remark 2.22.
Using the operadic structure on \(\operatorname{Tot}(\mathfrak{s}\mathcal{O})\), we can describe derived \(A_{\infty}\)-multiplications in a new conceptual way as we did in Lemma 3.6, with analogous proof.
**Lemma 7.8**.: _A derived \(A_{\infty}\)-multiplication on a bigraded operad \(\mathcal{O}\) is equivalent to an element \(m\in\operatorname{Tot}(\mathfrak{s}\mathcal{O})\) of degree 1 concentrated in positive arity such that \(m\star m=0\). _
From Lemma 7.8, we can proceed as in the proof of Proposition 5.3 to show that \(m\) determines an \(A_{\infty}\)-algebra structure on \(S\mathrm{Tot}(\mathfrak{s}\mathcal{O})\cong S\mathfrak{s}\mathrm{Tot}(\mathcal{O})\). The goal now is to show that this \(A_{\infty}\)-structure on \(S\mathrm{Tot}(\mathfrak{s}\mathcal{O})\) is equivalent to a derived \(A_{\infty}\)-structure on \(S\mathfrak{s}\mathcal{O}\) and to compute the structure maps explicitly. We will do this in Section 8.
Before that, let us explore the brace structures that arise from these new operadic constructions and use them to reinterpret derived \(\infty\)-morphisms and their composition.
### Bigraded braces and totalized braces
We are going to define a brace structure on \(\operatorname{Tot}(\mathfrak{s}\mathcal{O})\) using totalization. First note that one can define bigraded braces just like in the single-graded case, only changing the sign \(\varepsilon\) in Definition 4.1 to be \(\varepsilon=\sum_{p=1}^{n}\sum_{q=i}^{i_{p}}\langle x_{p},y_{q}\rangle\) according to the bigraded sign convention.
As one might expect, we can define bigraded brace maps \(b_{n}\) on a bigraded operad \(\mathcal{O}\) and also on its operadic suspension \(\mathfrak{s}\mathcal{O}\), obtaining similar signs as in the single-graded case, but with vertical (internal) degrees, see Proposition 4.3.
We can also define braces on \(\operatorname{Tot}(\mathfrak{s}\mathcal{O})\) via operadic composition. In this case, these are usual single-graded braces. More precisely, we define the maps
\[b_{n}^{\star}:\operatorname{Tot}(\mathfrak{s}\mathcal{O}(N))\otimes\operatorname{Tot}(\mathfrak{s}\mathcal{O}(a_{1}))\otimes\cdots\otimes\operatorname{Tot}(\mathfrak{s}\mathcal{O}(a_{n}))\to\operatorname{Tot}(\mathfrak{s}\mathcal{O}(N-n+\textstyle\sum a_{i}))\]
using the operadic composition \(\gamma^{\star}\) on \(\operatorname{Tot}(\mathfrak{s}\mathcal{O})\) as
\[b_{n}^{\star}(x;x_{1},\dots,x_{n})=\sum\gamma^{\star}(x;1,\dots,1,x_{1},1, \dots,1,x_{n},1,\dots,1),\]
where the sum runs over all possible ordering preserving insertions. The brace map \(b_{n}^{\star}(x;x_{1},\dots,x_{n})\) vanishes whenever \(n>N\) and \(b_{0}^{\star}(x)=x\).
Operadic composition can be described in terms of insertions in the obvious way, namely
\[\gamma^{\star}(x;y_{1},\dots,y_{N})=(\cdots(x\star_{1}y_{1})\star_{1+a(y_{1})} y_{2}\cdots)\star_{1+\sum a(y_{p})}y_{N}, \tag{29}\]
where \(a(y_{p})\) is the arity of \(y_{p}\). If we want to express this composition in terms of the composition in \(\mathcal{O}\) we just have to find out the sign factor applying the same strategy as in the single-graded case. In fact, as we said, there is a sign factor that comes from vertical operadic suspension that is identical to the graded case, but replacing internal degree by internal vertical degree. This is the sign that determines the brace \(b_{n}\) on \(\mathfrak{s}\mathcal{O}\). Explicitly, it is given by the following lemma, whose proof is identical to the single-graded case, see Proposition 4.3.
**Lemma 7.9**.: _For \(x\in\mathfrak{s}\mathcal{O}(N)\) and \(x_{i}\in\mathfrak{s}\mathcal{O}(a_{i})\) of internal vertical degree \(q_{i}\) (\(1\leq i\leq n\)), we have_
\[b_{n}(x;x_{1},\dots,x_{n})=\sum_{N-n=h_{0}+\dots+h_{n}}(-1)^{\eta}\gamma(x \otimes 1^{\otimes h_{0}}\otimes x_{1}\otimes\cdots\otimes x_{n}\otimes 1^{ \otimes h_{n}}),\]
_where_
\[\eta=\sum_{0\leq j<l\leq n}h_{j}q_{l}+\sum_{1\leq j<l\leq n}a_{j}q_{l}+\sum_{j =1}^{n}(a_{j}+q_{j}-1)(n-j)+\sum_{1\leq j\leq l\leq n}(a_{j}+q_{j}-1)h_{l}.\]
The other sign factor is produced by totalization. This was computed in Lemma 7.1. Combining both factors we obtain the following.
**Lemma 7.10**.: _We have_
\[b_{n}^{\star}(x;x^{1},\dots,x^{n})_{k}=\sum_{\begin{subarray}{c}k_{0}+k_{1}+\dots+k_{n}=k\\ h_{0}+h_{1}+\dots+h_{n}=N-n\end{subarray}}(-1)^{\eta+\sum_{j=1}^{n}d_{j}\sum_{i=0}^{j-1}k_{i}}\gamma(x_{k_{0}};1^{h_{0}},x^{1}_{k_{1}},1^{h_{1}},\dots,x^{n}_{k_{n}},1^{h_{n}}) \tag{30}\]
_for \(x=(x_{k})_{k}\in\operatorname{Tot}(\mathfrak{s}\mathcal{O}(N))^{d_{0}}\) and \(x^{i}=(x^{i}_{k})_{k}\in\operatorname{Tot}(\mathfrak{s}\mathcal{O}(a_{i}))^{d_{i}}\), \(1\leq i\leq n\), where \(\eta\) is defined in Lemma 7.9._
**Corollary 7.11**.: _For \(\mathcal{O}=\operatorname{End}_{A}\), the endomorphism operad of a bigraded module, the brace \(b_{1}^{\star}(f;g)\) is the operation \(f\star g\) defined in [10] that induces a Lie bracket. More precisely,_
\[[f,g]=b_{1}^{\star}(f;g)-(-1)^{NM}b_{1}^{\star}(g;f)\]
_for \(f\in\operatorname{Tot}(\mathfrak{s}\operatorname{End}_{A})^{N}\) and \(g\in\operatorname{Tot}(\mathfrak{s}\operatorname{End}_{A})^{M}\), is the same bracket that was defined in [10]._
Notice that in [10] the sign in the bracket is \((-1)^{(N+1)(M+1)}\), but this is because their total degree differs by one with respect to ours.
### Reinterpretation of derived \(\infty\)-morphisms
Just like we did for graded modules in Section 4.2, for bigraded modules \(A\) and \(B\) we may define the collection \(\operatorname{End}_{B}^{A}=\{\operatorname{Hom}(A^{\otimes n},B)\}_{n\geq 1}\). Recall that this collection has a left module structure \(\operatorname{End}_{B}\circ\operatorname{End}_{B}^{A}\to\operatorname{End}_{B}^{A}\) over \(\operatorname{End}_{B}\) given by composition of maps. Similarly, given a bigraded module \(C\), we can define composition maps \(\operatorname{End}_{C}^{B}\circ\operatorname{End}_{B}^{A}\to\operatorname{End}_{C}^{A}\). The collection \(\operatorname{End}_{B}^{A}\) also has an infinitesimal right module structure \(\operatorname{End}_{B}^{A}\circ_{(1)}\operatorname{End}_{A}\to\operatorname{End}_{B}^{A}\) over \(\operatorname{End}_{A}\) given by insertion of maps.
Similarly to the single-graded case, we may describe derived \(\infty\)-morphisms in terms of the above operations, with analogous proof to Lemma 4.5.
**Lemma 7.12**.: _A derived \(\infty\)-morphism of derived \(A_{\infty}\)-algebras \(A\to B\) with respective structure maps \(m^{A}\) and \(m^{B}\) is equivalent to an element \(f\in\operatorname{Tot}(\mathfrak{s}\operatorname{End}_{B}^{A})\) of degree 0 concentrated in positive arity such that_
\[\rho(f\circ_{(1)}m^{A})=\lambda(m^{B}\circ f),\]
_where_
\[\lambda:\operatorname{Tot}(\mathfrak{s}\operatorname{End}_{B})\circ \operatorname{Tot}(\mathfrak{s}\operatorname{End}_{B}^{A})\to\operatorname{Tot }(\mathfrak{s}\operatorname{End}_{B}^{A})\]
_is induced by the left module structure on \(\operatorname{End}_{B}^{A}\), and_
\[\rho:\operatorname{Tot}(\mathfrak{s}\operatorname{End}_{B})\circ_{(1)} \operatorname{Tot}(\mathfrak{s}\operatorname{End}_{B}^{A})\to\operatorname{Tot }(\mathfrak{s}\operatorname{End}_{B}^{A})\]
_is induced by the right infinitesimal module structure on \(\operatorname{End}_{B}^{A}\)._
_In addition, the composition of \(\infty\)-morphisms is given by the natural composition_
\[\operatorname{Tot}(\mathfrak{s}\operatorname{End}_{C}^{B})\circ\operatorname{ Tot}(\mathfrak{s}\operatorname{End}_{B}^{A})\to\operatorname{Tot}(\mathfrak{s} \operatorname{End}_{C}^{A}).\]
In the case that \(f:A\to A\) is an \(\infty\)-endomorphism, the above definition can be written in terms of operadic composition as \(f\star m=\gamma^{\star}(m\circ f)\), where \(\gamma^{\star}\) is the composition map derived from the operation \(\star\), see Equation (29).
## 8. The derived \(A_{\infty}\)-structure on an operad
In this section we finally establish the connection between classical and derived \(A_{\infty}\)-algebras through Theorem 8.1. From this, in Theorem 8.3 we are able to obtain explicit derived \(A_{\infty}\)-maps on \(A=S\mathfrak{s}\mathcal{O}\) for a sufficiently bounded operad \(\mathcal{O}\) with a derived \(A_{\infty}\)-multiplication. This opens the door to the formulation and proof of a new version of the Deligne conjecture in Corollary 8.10.
We are going to follow a strategy inspired by the proof of the following theorem to show that there is a derived \(A_{\infty}\)-structure on \(A=S\mathfrak{s}\mathcal{O}\). The proof can be found in [1, Proposition 4.55]. We refer the reader to Section 2.2 to recall the definitions of the categories used.
**Theorem 8.1**.: _Let \((A,d^{A})\in\operatorname{tC}_{R}^{b}\) be a twisted complex horizontally bounded on the right and \(A\) its underlying cochain complex. We have natural bijections_
\[\operatorname{Hom}_{\operatorname{vbOp},d^{A}}(d\mathcal{A}_{\infty},\operatorname{End}_{A})\cong\operatorname{Hom}_{\operatorname{vbOp}}(\mathcal{A}_{\infty},\underline{\mathcal{End}}_{A})\] \[\cong\operatorname{Hom}_{\operatorname{vbOp}}(\mathcal{A}_{\infty},\underline{\mathcal{End}}_{\operatorname{Tot}(A)})\] \[\cong\operatorname{Hom}_{\operatorname{fOp}}(\mathcal{A}_{\infty},\underline{\operatorname{End}}_{\operatorname{Tot}(A)}).\]

In order to apply this theorem to \(A=S\mathfrak{s}\mathcal{O}\) for an operad \(\mathcal{O}\) with a derived \(A_{\infty}\)-multiplication \(m=\sum_{ij}m_{ij}\), we need a boundedness assumption: namely, that
for each \(j>0\) we can only have finitely many non-zero components \(m_{ij}\). This situation happens in practice in all known examples of derived \(A_{\infty}\)-algebras so far; some of them can be found in [12, Remark 6.5], [13], and [1, §5]. Under this assumption we may replace all direct products by direct sums.
We also need to provide \(A\) with a twisted complex structure. The reason for this is that Theorem 8.1 uses the definition of derived \(A_{\infty}\)-algebras on an underlying twisted complex, see Remark 8.4. This follows from Corollary 8.6, which is a consequence of another version of this theorem that works for bigraded modules, Corollary 8.5.
With these assumptions, by Theorem 8.1 we can guarantee the existence of a derived \(A_{\infty}\)-algebra structure on \(A\) provided that \(\operatorname{Tot}(A)\) has an \(A_{\infty}\)-algebra structure.
**Theorem 8.3**.: _Let \(A=S\mathfrak{s}\mathcal{O}\) where \(\mathcal{O}\) is an operad horizontally bounded on the right with a derived \(A_{\infty}\)-multiplication \(m=\sum_{ij}m_{ij}\in\mathcal{O}\). Let \(x_{1}\otimes\cdots\otimes x_{j}\in(A^{\otimes j})_{k}^{d-k}\) and let \(x_{v}=Sy_{v}\) for \(v=1,\ldots,j\) and \(y_{v}\) be of bidegree \((k_{v},d_{v}-k_{v})\). The following maps \(M_{ij}\) for \(j\geq 2\) determine a derived \(A_{\infty}\)-algebra structure on \(A\)._
\[M_{ij}(x_{1},\ldots,x_{j})=(-1)^{\sum_{v=1}^{j}(j-v)(d_{v}-k_{v})}\sum_{l}Sb_{ j}(m_{il};y_{1},\ldots,y_{j}).\]
Note that we abuse notation and identify \(x_{1}\otimes\cdots\otimes x_{j}\) with an element of \(\operatorname{Tot}(A^{\otimes j})\) with only one non-zero component. For a general element, extend linearly.
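For instance, in arity two the sign exponent reduces to \((2-1)(d_{1}-k_{1})\), so the formula specializes to
\[M_{i2}(x_{1},x_{2})=(-1)^{d_{1}-k_{1}}\sum_{l}Sb_{2}(m_{il};y_{1},y_{2}).\]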
Proof.: Since \(m\) is a derived \(A_{\infty}\)-multiplication on \(\mathcal{O}\), we have that \(m\star m=0\) when we view \(m\) as an element of \(\operatorname{Tot}(\mathfrak{s}\mathcal{O})\). By Proposition 5.3, this defines an \(A_{\infty}\)-algebra structure on \(S\operatorname{Tot}(\mathfrak{s}\mathcal{O})\) given by maps
\[M_{j}:(S\operatorname{Tot}(\mathfrak{s}\mathcal{O}))^{\otimes j}\to S \operatorname{Tot}(\mathfrak{s}\mathcal{O})\]
induced by shifting brace maps
\[b_{j}^{\star}(m;-):(\operatorname{Tot}(\mathfrak{s}\mathcal{O}))^{\otimes j} \to\operatorname{Tot}(\mathfrak{s}\mathcal{O}).\]
The graded module \(S\operatorname{Tot}(\mathfrak{s}\mathcal{O})\) is endowed with the structure of a filtered complex with differential \(M_{1}\) and filtration induced by the column filtration on \(\operatorname{Tot}(\mathfrak{s}\mathcal{O})\). Note that \(b_{j}^{\star}(m;-)\) preserves the column filtration since every component \(b_{j}^{\star}(m_{ij};-)\) has non-negative horizontal degree.
Since \(S\operatorname{Tot}(\mathfrak{s}\mathcal{O})\cong\operatorname{Tot}(S \mathfrak{s}\mathcal{O})\), we can view \(M_{j}\) as the image of a morphism of operads of filtered complexes \(f:\mathcal{A}_{\infty}\to\operatorname{End}_{\operatorname{Tot}(S\mathfrak{s} \mathcal{O})}\) in such a way that \(M_{j}=f(\mu_{j})\) for \(\mu_{j}\in\mathcal{A}_{\infty}(j)\).
We now work our way backwards using the strategy also employed by the proof of Theorem 8.1. The isomorphism
\[\operatorname{Hom}_{\operatorname{vbOp}}(\mathcal{A}_{\infty},\underline{ \mathcal{End}}_{\operatorname{Tot}(A)})\cong\operatorname{Hom}_{\operatorname{ COOp}}(\mathcal{A}_{\infty},\operatorname{End}_{\operatorname{Tot}(A)})\]
does not modify the map \(M_{j}\) at all but allows us to see it as an element of \(\underline{\mathcal{End}}_{\operatorname{Tot}(A)}\) of bidegree \((0,2-j)\).
The isomorphism
\[\mathrm{Hom}_{\mathrm{vbOp}}(\mathcal{A}_{\infty},\underline{\mathcal{End}}_{A})\cong\mathrm{Hom}_{\mathrm{vbOp}}(\mathcal{A}_{\infty},\underline{\mathcal{End}}_{\mathrm{Tot}(A)})\]
in the direction we are following is the result of applying \(\mathrm{Hom}_{\mathrm{vbOp}}(\mathcal{A}_{\infty},-)\) to the map described in Lemma 2.47. Under this isomorphism, \(f\) is sent to the map
\[\mu_{j}\mapsto\mathfrak{Grad}^{-1}\circ c(M_{j},\mu^{-1})=\mathfrak{Grad}^{-1} \circ M_{j}\circ\mu^{-1},\]
where \(c\) is the composition in \(\underline{fC}_{R}\). The functor \(\mathfrak{Grad}^{-1}\) decomposes \(M_{j}\) into a sum of maps \(M_{j}=\sum_{i}\widetilde{M}_{ij}\), where each \(\widetilde{M}_{ij}\) is of bidegree \((i,2-j-i)\). More explicitly, let \(A=S\mathfrak{s}\mathcal{O}\) and let \(x_{1}\otimes\cdots\otimes x_{j}\in(A^{\otimes j})_{k}^{d-k}\). We abuse notation and identify \(x_{1}\otimes\cdots\otimes x_{j}\) with an element of \(\mathrm{Tot}(A^{\otimes j})\) with only one non-zero component. For a general element, extend linearly. Then we have
\[\mathfrak{Grad}^{-1}(M_{j}(\mu^{-1}(x_{1}\otimes\cdots\otimes x_ {j}))) =\] \[\mathfrak{Grad}^{-1}(Sb_{j}^{\star}(m;(S^{-1})^{\otimes j}(\mu^{- 1}(x_{1}\otimes\cdots\otimes x_{j})))) =\] \[\sum_{i}(-1)^{id}\sum_{l}Sb_{j}^{\star}(m_{il};(S^{-1})^{\otimes j} (\mu^{-1}(x_{1}\otimes\cdots\otimes x_{j}))) =\] \[\sum_{i}(-1)^{id}\sum_{l}(-1)^{\varepsilon}Sb_{j}(m_{il};(S^{-1}) ^{\otimes j}(\mu^{-1}(x_{1}\otimes\cdots\otimes x_{j}))) = \tag{31}\] \[\sum_{i}\sum_{l}(-1)^{id+\varepsilon}Sb_{j}(m_{il};(S^{-1})^{ \otimes j}(\mu^{-1}(x_{1}\otimes\cdots\otimes x_{j})))\]
so that
\[\widetilde{M}_{ij}(x_{1},\ldots,x_{j})=\sum_{l}(-1)^{id+\varepsilon}Sb_{j}(m_ {il};(S^{-1})^{\otimes j}(\mu^{-1}(x_{1}\otimes\cdots\otimes x_{j}))),\]
where \(b_{j}\) is the brace on \(\mathfrak{sO}\) and \(\varepsilon\) is given in Lemma 7.1.
According to the isomorphism
\[\mathrm{Hom}_{\mathrm{vbOp},d^{A}}(d\mathcal{A}_{\infty},\mathrm{End}_{A})\cong\mathrm{Hom}_{\mathrm{vbOp}}(\mathcal{A}_{\infty},\underline{\mathcal{End}}_{A}), \tag{32}\]
the maps \(M_{ij}=(-1)^{ij}\widetilde{M}_{ij}\) define a derived \(A_{\infty}\)-algebra structure on \(S\mathfrak{s}\mathcal{O}\). Therefore we now just have to work out the signs. Notice that \(d_{v}\) is the total degree of \(y_{v}\) as an element of \(\mathfrak{s}\mathcal{O}\) and recall that \(d\) is the total degree of \(x_{1}\otimes\cdots\otimes x_{j}\in A^{\otimes j}\). Therefore, \(\varepsilon\) can be written as
\[\varepsilon=i(d-j)+\sum_{1\leq v<w\leq j}k_{v}d_{w}.\]
The sign produced by \(\mu^{-1}\), as we saw in Lemma 2.23, is precisely determined by the exponent
\[\sum_{w=2}^{j}d_{w}\sum_{v=1}^{w-1}k_{v}=\sum_{1\leq v<w\leq j}k_{v}d_{w},\]
so this sign cancels the right hand summand of \(\varepsilon\). This cancellation was expected since this sign comes from \(\mu^{-1}\), and operadic composition is defined using \(\mu\), see Equation (24).
Finally, the sign \((-1)^{i(d-j)}\) left from \(\varepsilon\) cancels with \((-1)^{id}\) in Equation (31) and \((-1)^{ij}\) from the isomorphism (32). This means that we only need to consider signs produced by vertical shifts. This calculation has been done previously in Lemma 5.8 and as we claimed the result is
\[M_{ij}(x_{1},\dots,x_{j})=(-1)^{\sum_{v=1}^{j}(j-v)(d_{v}-k_{v})}\sum_{l}Sb_{j} (m_{il};y_{1},\dots,y_{j}).\]
_Remark 8.4_.: Note that as in the case of \(A_{\infty}\)-algebras in \(\mathrm{C}_{R}\) we have two equivalent descriptions of \(A_{\infty}\)-algebras in \(\mathrm{tC}_{R}\).
1. A twisted complex \((A,d^{A})\) together with a morphism \(\mathcal{A}_{\infty}\to\underline{\mathcal{End}}_{A}\) of operads in \(\mathrm{vbC}_{R}\), which is determined by a family of elements \(M_{i}\in\underline{t\mathcal{C}_{R}}(A^{\otimes i},A)_{0}^{2-i}\) for \(i\geq 2\) for which the \(A_{\infty}\)-relations hold for \(i\geq 2\), see Equation (1). The composition is the one prescribed by the composition morphisms of \(\underline{t\mathcal{C}_{R}}\).
2. A bigraded module \(A\) together with a family of elements \(M_{i}\in\underline{\mathrm{bgMod}}_{R}(A^{\otimes i},A)_{0}^{2-i}\) for \(i\geq 1\) for which all the \(A_{\infty}\)-relations hold, see Equation (1). The composition is prescribed by the composition morphisms of \(\underline{\mathrm{bgMod}}_{R}\).
Since the composition morphism in \(\underline{\mathrm{bgMod}}_{R}\) is induced from the one in \(\underline{t\mathcal{C}_{R}}\) by forgetting the differential, these two presentations are equivalent, see [10].
This equivalence allows us to formulate the following alternative version of Theorem 8.1.
**Corollary 8.5**.: _Given a bigraded module \(A\) horizontally bounded on the right we have isomorphisms_
\[\mathrm{Hom}_{\mathrm{bgOp}}(d\mathcal{A}_{\infty},\mathrm{End}_{A})\cong\mathrm{Hom}_{\mathrm{bgOp}}(\mathcal{A}_{\infty},\underline{\mathcal{End}}_{A})\] \[\cong\mathrm{Hom}_{\mathrm{bgOp}}(\mathcal{A}_{\infty},\underline{\mathcal{End}}_{\mathrm{Tot}(A)})\] \[\cong\mathrm{Hom}_{\mathrm{fOp}}(\mathcal{A}_{\infty},\underline{\mathrm{End}}_{\mathrm{Tot}(A)}),\]
_where \(\mathrm{bgOp}\) is the category of operads of bigraded modules and \(\mathrm{fOp}\) is the category of operads of filtered modules._
Proof.: Let us look at the first isomorphism
\[\operatorname{Hom}_{\operatorname{bgOp}}(\mathcal{A}_{\infty},\underline{\mathcal{End}}_{A})\cong\operatorname{Hom}_{\operatorname{bgOp}}(d\mathcal{A}_{\infty},\operatorname{End}_{A}).\]
Let \(f:\mathcal{A}_{\infty}\to\underline{\mathcal{End}}_{A}\) be a map of operads in \(\operatorname{bgOp}\). This is equivalent to maps in \(\operatorname{bgOp}\)
\[\mathcal{A}_{\infty}(j)\to\underline{\mathcal{End}}_{A}(j)\]
for each \(j\geq 1\), which are determined by elements \(M_{j}\coloneqq f(\mu_{j})\in\underline{\mathcal{End}}_{A}(j)\) for \(j\geq 1\) of bidegree \((0,2-j)\) satisfying the \(A_{\infty}\)-equation with respect to the composition in \(\underline{\mathcal{End}}_{R}\). Moreover, \(M_{j}\coloneqq(\tilde{m}_{0j},\tilde{m}_{1j},\dots)\) where \(\tilde{m}_{ij}\coloneqq(M_{j})_{i}:A^{\otimes j}\to A\) is a map of bidegree \((i,2-i-j)\). Since the composition in \(\underline{\mathcal{End}}_{R}\) is the same as in \(\underline{\mathcal{U}_{R}}\), the computation of the \(A_{\infty}\)-equation becomes analogous to the computation done in [2, Prop 4.47], showing that the maps \(m_{ij}=(-1)^{i}\tilde{m}_{ij}\) for \(i\geq 0\) and \(j\geq 1\) define a derived \(A_{\infty}\)-algebra structure on \(A\).
The second isomorphism
\[\operatorname{Hom}_{\operatorname{bgOp}}(\mathcal{A}_{\infty},\underline{\mathcal{End}}_{A})\cong\operatorname{Hom}_{\operatorname{bgOp}}(\mathcal{A}_{\infty},\underline{\mathcal{End}}_{\operatorname{Tot}(A)})\]
follows from the bigraded module case of Lemma 2.46. Finally, the isomorphism
\[\operatorname{Hom}_{\operatorname{bgOp}}(\mathcal{A}_{\infty},\underline{\mathcal{End}}_{\operatorname{Tot}(A)})\cong\operatorname{Hom}_{\operatorname{fOp}}(\mathcal{A}_{\infty},\underline{\operatorname{End}}_{\operatorname{Tot}(A)})\]
is analogous to the last isomorphism of Theorem 8.1, replacing the quasi-free relation by the full \(A_{\infty}\)-algebra relations.
According to Corollary 8.5, if we have an \(A_{\infty}\)-algebra structure on \(A=S\mathfrak{s}\mathcal{O}\), we can consider its arity \(1\) component \(M_{1}\in\underline{\operatorname{End}}_{\operatorname{Tot}(A)}\) and split it into maps \(M_{i1}\in\operatorname{End}_{A}\). Since these maps must satisfy the derived \(A_{\infty}\)-relations, they define a twisted complex structure on \(A\). The next corollary describes the maps \(M_{i1}\) explicitly.
**Corollary 8.6**.: _Let \(\mathcal{O}\) be a bigraded operad with a derived \(A_{\infty}\)-multiplication and let \(M_{i1}:S\mathfrak{s}\mathcal{O}\to S\mathfrak{s}\mathcal{O}\) be the arity 1 derived \(A_{\infty}\)-algebra maps induced by Corollary 8.5 from \(M_{1}:\operatorname{Tot}(S\mathfrak{s}\mathcal{O})\to\operatorname{Tot}(S \mathfrak{s}\mathcal{O})\). Then_
\[M_{i1}(x)=\sum_{l}(Sb_{1}(m_{il};S^{-1}x)-(-1)^{\langle x,m_{il}\rangle}Sb_{1}( S^{-1}x;m_{il})),\]
_where \(x\in(S\mathfrak{s}\mathcal{O})_{k}^{d-k}\) and \(\langle x,m_{il}\rangle=ik+(1-i)(d-1-k)\)._
Proof.: Notice that the proof of Corollary 8.5 is essentially the same as the proof of Theorem 8.1. This means that the proof of this result is an arity \(1\) restriction of the proof of Theorem 8.3. Thus, we apply Equation (31) to the case \(j=1\). Recall that for \(x\in(S\mathfrak{s}\mathcal{O})_{k}^{d-k}\),
\[M_{1}(x)=b_{1}^{\star}(m;S^{-1}x)-(-1)^{n-1}b_{1}^{\star}(S^{-1}x;m).\]
In this case, there is no \(\mu\) involved. Therefore, introducing the final extra sign \((-1)^{i}\) from the proof of Theorem 8.3, we get from Equation (31) that
\[\widetilde{M}_{i1}(x)=(-1)^{i}\sum_{l}((-1)^{id+i(d-1)}Sb_{1}(m_{il};S^{-1}x)-( -1)^{d-1+id+k}Sb_{1}(S^{-1}x;m_{il})),\]
where \(b_{1}\) is the brace on \(\mathfrak{s}\mathcal{O}\). Simplifying signs we obtain
\[\widetilde{M}_{i1}(x)=\sum_{l}(Sb_{1}(m_{il};S^{-1}x)-(-1)^{\langle m_{il},x\rangle}Sb_{1}(S^{-1}x;m_{il}))=M_{i1}(x),\]
where \(\langle m_{il},x\rangle=ik+(1-i)(d-1-k)\), as claimed.
### Derived Deligne conjecture
Note that the maps given by Theorem 8.3 and Corollary 8.6 formally look the same as their single-graded analogues in Lemma 5.8 but with an extra index that is fixed for each \(M_{ij}\). This means that we can follow the same procedure as in Section 5.1 to define higher derived \(A_{\infty}\)-maps on the Hochschild complex of a derived \(A_{\infty}\)-algebra. More precisely, given an operad \(\mathcal{O}\) with a derived multiplication and \(A=S\mathfrak{s}\mathcal{O}\), we will define a derived \(A_{\infty}\)-algebra structure on \(S\mathfrak{s}\operatorname{End}_{A}\). We will then connect the algebraic structure on \(A\) with the structure on \(S\mathfrak{s}\operatorname{End}_{A}\) through braces. This connection will allow us to formulate and show a new version of the Deligne conjecture.
Let \(\overline{B}_{j}\) be the bigraded brace map on \(\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\) and consider the maps
\[\overline{M}^{\prime}_{ij}:(\mathfrak{s}\operatorname{End}_{S\mathfrak{s} \mathcal{O}})^{\otimes j}\to\mathfrak{s}\operatorname{End}_{S\mathfrak{s} \mathcal{O}} \tag{33}\]
defined as
\[\overline{M}^{\prime}_{ij}(f_{1},\ldots,f_{j})=\overline{B}_{j}(M_ {i\bullet};f_{1},\ldots,f_{j}) j>1,\] \[\overline{M}^{\prime}_{i1}(f)=\overline{B}_{1}(M_{i\bullet};f)-( -1)^{ip+(1-i)q}\overline{B}_{1}(f;M_{i\bullet}),\]
for \(f\) of natural bidegree \((p,q)\), where \(M_{i\bullet}=\sum_{j}M_{ij}\). We define
\[\overline{M}_{ij}:(S\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}})^{\otimes j}\to S\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}},\ \overline{M}_{ij}=\overline{\sigma}(\overline{M}^{\prime}_{ij})=S\circ\overline{M}^{\prime}_{ij}\circ(S^{\otimes j})^{-1}.\]
As in the single-graded case we can define a map \(\Phi:S\mathfrak{s}\mathcal{O}\to S\mathfrak{s}\operatorname{End}_{S \mathfrak{s}\mathcal{O}}\) as the map making the following diagram commute
(34)
where
\[\Phi^{\prime}\colon\mathfrak{s}\mathcal{O}\to\operatorname{End}_{\mathfrak{s}\mathcal{O}},\,x\mapsto\sum_{n\geq 0}b_{n}(x;-).\]
In this setting we have the bigraded version of Theorem 5.7. But before stating the theorem, for the sake of completeness let us state the definition of the Hochschild complex of a bigraded module.
**Definition 8.7**.: _We define the Hochschild cochain complex of a bigraded module \(A\) to be the bigraded module \(S\mathfrak{s}\operatorname{End}_{A}\). If \((A,d)\) is a vertical bicomplex, then the Hochschild complex has a vertical differential given by \(\partial(f)=[d,f]=d\circ f-(-1)^{q}f\circ d\), where \(f\) has natural bidegree \((p,q)\) and \(\circ\) is the plethysm corresponding to operadic insertions._
In particular, \(S\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\) is the Hochschild cochain complex of \(S\mathfrak{s}\mathcal{O}\). If \(\mathcal{O}\) has a derived \(A_{\infty}\)-multiplication, then the differential of the Hochschild complex \(S\mathfrak{s}\operatorname{End}_{S\mathfrak{s}\mathcal{O}}\) is given by \(\overline{M}_{01}\) from Equation (33).
The following is the same as Theorem 5.7 but carrying the extra index \(i\) and using the bigraded sign conventions.
**Theorem 8.8**.: _The map \(\Phi\) defined in diagram (34) above is a morphism of derived \(A_{\infty}\)-algebras, i.e. for all \(i\geq 0\) and \(j\geq 1\) the equation_
\[\Phi(M_{ij})=\overline{M}_{ij}(\Phi^{\otimes j})\]
_holds. _
Now that we have Theorem 8.8 and the explicit formulas for the derived \(A_{\infty}\)-structure on \(S\mathfrak{s}\mathcal{O}\), we can deduce the derived version of the Deligne conjecture in an analogous way to how we obtained the \(A_{\infty}\)-version in Corollary 5.12. In order to do that, we need to first introduce the derived \(A_{\infty}\)-version of homotopy \(G\)-algebras. To have a more succinct formulation we use the notation \(\operatorname{vdeg}(x)\) for the vertical degree of \(x\).
**Definition 8.9**.: _A derived \(J\)-algebra \(V\) is a derived \(A_{\infty}\)-algebra with structure maps \(\{M_{ij}\}_{i\geq 0,j\geq 1}\) such that the shift \(S^{-1}V\) is a brace algebra. Furthermore, the braces and the derived \(A_{\infty}\)-structure satisfy the following compatibility relations. Let \(x,x_{1},\dots,x_{j},y_{1},\dots,y_{n}\in S^{-1}V\). For all \(n,i\geq 0\) we demand_
\[(-1)^{\sum_{v=1}^{n}(n-v)\operatorname{vdeg}(y_{v})}Sb_{n}(S^{-1}M_{i1}(Sx);y_{1},\dots,y_{n})=\] \[\sum_{\begin{subarray}{c}l+k-1=n\\ 1\leq i_{1}\leq n-k+1\end{subarray}}(-1)^{\varepsilon}M_{il}(Sy_{1},\dots,Sb_{k}(x;y_{i_{1}},\dots),\dots,Sy_{n})\] \[-(-1)^{\langle x,M_{il}\rangle}\sum_{\begin{subarray}{c}l+k-1=n\\ 1\leq i_{1}\leq n-k+1\end{subarray}}(-1)^{\eta}Sb_{k}(x;y_{1},\dots,S^{-1}M_{il}(Sy_{i_{1}},\dots),\dots,y_{n})\]
_where_
\[\varepsilon=\sum_{v=1}^{i_{1}-1}\langle Sy_{v},S^{1-k}x\rangle+\sum_{v=1}^{k} \operatorname{vdeg}(y_{i_{1}+v-1})(k-v)+(l-i_{1})\operatorname{vdeg}(x).\]
_and_
\[\eta =\sum_{v=1}^{i_{1}-1}(k-v)\mathrm{vdeg}(y_{v})+l\sum_{v=1}^{i_{1}-1} \mathrm{vdeg}(y_{v})\] \[\quad+\sum_{v=i_{1}}^{i_{1}+l-1}(k-i_{1})\mathrm{vdeg}(y_{v})+\sum_ {v=i_{1}}^{n-l}(k-v)\mathrm{vdeg}(y_{v+l})\]
_For \(j>1\) we demand_
\[(-1)^{\sum_{v=1}^{n}(n-v)\mathrm{vdeg}(y_{v})}Sb_{n}(S^{-1}M_{ij}(Sx_{1},\dots,Sx_{j});y_{1},\dots,y_{n})=\] \[\sum(-1)^{\varepsilon}M_{il}(Sy_{1},\dots,Sb_{k_{1}}(x_{1};y_{i_{1}},\dots),\dots,Sb_{k_{j}}(x_{j};y_{i_{j}},\dots),\dots,Sy_{n}).\]
_The unindexed sum runs over all possible choices of non-negative integers that satisfy \(l+k_{1}+\dots+k_{j}-j=n\) and over all possible ordering preserving insertions. The right hand side sign is given by_
\[\varepsilon=\sum_{\begin{subarray}{c}1\leq t\leq j\\ 1\leq v\leq k_{t}\end{subarray}}\mathrm{vdeg}(y_{i_{t}+v-1})(k_{t}-v)+\cdots\]
known examples of derived \(A_{\infty}\)-algebras. These examples usually come as minimal models of dgas. So a first question that arises is the following.
**Question 1**.: _Are there any conditions on a dga that guarantee that its minimal model is horizontally bounded on the right?_
An answer to this question would give us a better understanding of how general our results are. In fact, it is open whether a derived \(A_{\infty}\)-structure can be obtained for a more general operad. Even though we needed to use some monoidality results that require boundedness, the explicit maps that we obtain in Theorem 8.3 can be defined for any operad with a derived \(A_{\infty}\)-multiplication. A first idea would be attempting a direct computation to see if they satisfy the derived \(A_{\infty}\)-equation, see Equation (22). Of course, we would like to use a more conceptual approach. So more generally the question would be the following.
**Question 2**.: _Can we define a derived \(A_{\infty}\)-structure on any operad with a derived \(A_{\infty}\)-multiplication?_
### Hochschild Cohomology
The classical Deligne conjecture states that the Hochschild complex of an associative algebra has a structure of homotopy \(G\)-algebra [10]. This has implications for the Hochschild cohomology of the associative algebra. Namely, the homotopy \(G\)-algebra structure on the Hochschild complex induces a Gerstenhaber algebra structure on cohomology. We would like to extend this result to derived \(A_{\infty}\)-algebras.
Let us review the structure on the Hochschild complex of an associative operad in order to understand the question that we will be asking about the derived \(A_{\infty}\)-case.
Let \(\mathcal{O}\) be an operad with an associative multiplication \(m\), i.e. an \(A_{\infty}\)-multiplication \(m\) such that \(m=m_{2}\), see Definition 3.5. In this case, as a consequence of Proposition 5.3 or by [10, Proposition 2], we have a dg-algebra structure on \(S\mathfrak{s}\mathcal{O}\) given by the differential
\[d(Sx)=Sb_{1}(m;x)-(-1)^{|x|}Sb_{1}(x;m)\]
and the multiplication
\[m(Sx,Sy)=Sb_{2}(m;x,y).\]
In particular, if \(\mathcal{O}=\operatorname{End}_{A}\) is the endomorphism operad of an associative algebra \(A\), these maps provide a dg-algebra structure on the Hochschild complex of \(A\). But this is not all the structure that we get. Since any operad is a brace algebra, we have an interaction between the dg-algebra and the brace structure. More precisely, \(\mathcal{O}\) has a structure of _homotopy \(G\)-algebra_, see Definition 2 and Theorem 3 of [10].
Given the algebraic structure described above on the Hochschild complex of an associative algebra, we can then take cohomology with respect to \(d\) to compute the Hochschild cohomology of \(A\), denoted by \(HH^{*}(A)\). It is known that \(m\) and the bracket
\[[x,y]=Sb_{1}(x;y)-(-1)^{|x||y|}Sb_{1}(y;x)\]
induce a structure of a Gerstenhaber algebra on \(HH^{*}(A)\)[13, Corollary 5]. The proof relies on some identities that can be deduced from the definition of homotopy \(G\)-algebra, such as graded homotopy commutativity.
If we try to replicate this argument for \(A_{\infty}\)-algebras, the structure we get on the Hochschild complex is that of a \(J\)-algebra, see Definition 5.11. In this case, we have to compute cohomology with respect to \(M_{1}\), see Lemma 5.8. In the definition of \(J\)-algebras, we encounter an explosion in the number and complexity of relations and maps involved with respect to homotopy \(G\)-algebras. Therefore, the resulting structure has not been feasible to manipulate and it is not very clear what kind of algebraic structure is induced on cohomology. The derived case is of course even more difficult to handle as we would need to work with the even more complex derived \(J\)-algebras, see Definition 8.9. In addition, it is possible to consider vertical and horizontal cohomologies [16, §1.2]. These should be taken with respect to \(M_{01}\) and \(M_{11}\) respectively, see Corollary 8.6. So the natural question to ask is the following.
**Question 3**.: _What algebraic structure do derived \(J\)-algebras induce on the vertical and horizontal cohomologies of a derived \(A_{\infty}\)-algebra?_
## Appendix A Combinatorics
**Lemma A.1**.: _For any integers \(n\) and \(m\), the following equality holds mod 2._
\[\binom{n+m-1}{2}+\binom{n}{2}+\binom{m}{2}=(n-1)(m-1).\]
Proof.: Let us compute
\[\binom{n+m-1}{2}+\binom{n}{2}+\binom{m}{2}+(n-1)(m-1)\mod 2.\]
By definition, this equals
\[\frac{(n+m-1)(n+m-2)}{2}+\frac{n(n-1)}{2}+\frac{m(m-1)}{2}+(n-1)(m-1)\] \[=\frac{(n^{2}+2nm-2n+m^{2}-2m-n-m+2)+(n^{2}-n)+(m^{2}-m)+2(nm-n-m+1)}{2}\] \[=n^{2}+2nm-3n+m^{2}-3m+2=n^{2}+n+m^{2}+m=0\mod 2\]
as desired, because \(n^{2}=n\mod 2\).
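As a quick numerical sanity check of this congruence (a minimal sketch added for illustration, not part of the original argument; it only tests positive integers), one can run:

```python
from math import comb

def lemma_a1_holds(n: int, m: int) -> bool:
    # Left-hand side: binom(n+m-1, 2) + binom(n, 2) + binom(m, 2)
    lhs = comb(n + m - 1, 2) + comb(n, 2) + comb(m, 2)
    # Right-hand side: (n-1)(m-1); the identity is claimed modulo 2
    rhs = (n - 1) * (m - 1)
    return (lhs - rhs) % 2 == 0

# Check the congruence for a range of small positive integers
assert all(lemma_a1_holds(n, m) for n in range(1, 50) for m in range(1, 50))
print("Lemma A.1 verified for 1 <= n, m < 50")
```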
## Appendix B Koszul sign on operadic suspension
The purpose of this section is to clear up the procedure to apply the Koszul sign rule in situations in which operadic suspension is involved.
Let \(\operatorname{End}_{A}\) be the endomorphism operad of some \(R\)-module \(A\) and consider the operadic suspension \(\mathfrak{s}\operatorname{End}_{A}\). Let \(f\otimes e^{n}\in\mathfrak{s}\operatorname{End}_{A}(n)\) be of degree \(\deg(f)+n-1\). For \(a\in A^{\otimes n}\) we have
\[(f\otimes e^{n})(a)=(-1)^{\deg(a)(n-1)}f(a)\otimes e^{n}\]
because \(\deg(e^{n})=n-1\). Note that \(f\otimes e^{n}=g\otimes e^{n}\) if and only if \(f=g\). In addition, it is not possible that \(f\otimes e^{n}=g\otimes e^{m}\) for \(n\neq m\).
If we take the tensor product of \(f\otimes e^{n}\in\mathfrak{s}\operatorname{End}_{A}(n)\) and \(g\otimes e^{m}\in\mathfrak{s}\operatorname{End}_{A}(m)\) and apply it to \(a\otimes b\in A^{\otimes n}\otimes A^{\otimes m}\), we have
\[((f\otimes e^{n})\otimes(g\otimes e^{m}))(a\otimes b)= (-1)^{\deg(a)(\deg(g)+m-1)}(f\otimes e^{n})(a)\otimes(g\otimes e^{ m})(b)\] \[= (-1)^{\varepsilon}(f(a)\otimes e^{n})\otimes(f(b)\otimes e^{m}),\]
where \(\varepsilon=\deg(a)(\deg(g)+m-1)+\deg(a)(n-1)+\deg(b)(m-1)\). The last remark that we want to make is the case of a map of the form
\[f(1^{\otimes k-1}\otimes g\otimes 1^{\otimes n-k})\otimes e^{m+n-1}\in \mathfrak{s}\operatorname{End}_{A}(n+m-1),\]
such as those produced by the operadic insertion \(\mathfrak{s}f\tilde{\circ}_{k}\mathfrak{s}g\). In this case, this map applied to \(a_{k-1}\otimes b\otimes a_{n-k}\in A^{\otimes k-1}\otimes A^{\otimes m} \otimes A^{\otimes n-k}\) results in
\[(f(1^{\otimes k-1}\otimes g\otimes 1^{\otimes n-k})\otimes e^{m+n-1})(a_{k-1}\otimes b\otimes a_{n-k})=\] \[(-1)^{(m+n)(\deg(a_{k-1})+\deg(b)+\deg(a_{n-k}))}\big(f(1^{\otimes k-1}\otimes g\otimes 1^{\otimes n-k})(a_{k-1}\otimes b\otimes a_{n-k})\big)\otimes e^{m+n-1}\]
To go from the first line to the second, we switch \(e^{m+n-1}\) of degree \(m+n-2\) with \(a_{k-1}\otimes b\otimes a_{n-k}\). If we apply the usual sign rule for graded maps we obtain
\[(-1)^{(m+n)(\deg(a_{k-1})+\deg(b)+\deg(a_{n-k}))+\deg(a_{k-1})\deg(g)}f(a_{k-1 }\otimes g(b)\otimes a_{n-k})\otimes e^{m+n-1}.\]
The purpose of this last remark is not only review the Koszul sign rule but also remind that the insertion \(\mathfrak{s}f\tilde{\circ}_{k}\mathfrak{s}g\) is of the above form, so that the \(e^{m+n-1}\) component is always at the end and does not play a role in the application of the sign rule with the composed maps. In other words, it does not affect their individual degrees, just the degree of the overall composition.
## Appendix C Sign of the braces
In order to find the sign of the braces on \(\mathfrak{s}\operatorname{End}_{A}\), let us use an analogous strategy to the one used in [11, Appendix] to find the signs of the Lie bracket \([f,g]\) on \(\operatorname{End}_{A}\).
Let \(A\) be a graded module. Let \(SA\) be the graded module with \(SA^{v}=A^{v+1}\), and so the _suspension_ or _shift_ map \(S:A\to SA\) given by the identity map has degree \(-1\). Let \(f\in\operatorname{End}_{A}(N)^{i}=\operatorname{Hom}_{R}(A^{\otimes N},A)^{i}\). Recall that \(\sigma\) is the inverse of the map from Theorem 3.9, so that \(\sigma(f)=S\circ f\circ(S^{-1})^{\otimes N}\in\operatorname{End}_{S(A)}(N)^{i+N-1}\).
_Remark C.1_.: In [10] there is a sign \((-1)^{N+i-1}\) in front of \(f\), but it seems to be irrelevant for our purposes. Another fact to remark on is that the suspension of graded modules used here and in [10] is the opposite of the one we have used to define the operadic suspension, in the sense that in Section 3 we used \(SA^{v}=A^{v-1}\). This does not change the signs or the procedure, but in the statement of Theorem 3.9, operadic desuspension should be changed to operadic suspension.
Notice that by the Koszul sign rule
\[(S^{-1})^{\otimes N}\circ S^{\otimes N}=(-1)^{\sum_{j=1}^{N-1}j}1=(-1)^{\frac{ N(N-1)}{2}}1=(-1)^{\binom{N}{2}}1,\]
so \((S^{-1})^{\otimes N}=(-1)^{\binom{N}{2}}(S^{\otimes N})^{-1}\). For this reason, given \(F\in\operatorname{End}_{S(A)}(m)^{j}\), we have
\[\sigma^{-1}(F)=(-1)^{\binom{m}{2}}S^{-1}\circ F\circ S^{\otimes m}\in \operatorname{End}_{A}(m)^{j-m+1}.\]
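To make this piece of Koszul bookkeeping concrete, the following small numerical check (an illustrative sketch, not part of the text; the function name is ours) evaluates the composition sign \((f_{1}\otimes\dots\otimes f_{N})\circ(g_{1}\otimes\dots\otimes g_{N})=(-1)^{\sum_{i<j}|f_{j}||g_{i}|}(f_{1}g_{1}\otimes\dots\otimes f_{N}g_{N})\) in the special case \(f_{j}=S^{-1}\), \(g_{i}=S\), recovering the factor \((-1)^{\binom{N}{2}}\) above.

```python
from itertools import combinations

def koszul_composition_sign(deg_f, deg_g):
    """Sign of (f_1 x ... x f_N) o (g_1 x ... x g_N):
    each g_i must be moved past every f_j with j > i, contributing |f_j||g_i|."""
    exponent = sum(deg_f[j] * deg_g[i] for i, j in combinations(range(len(deg_f)), 2))
    return (-1) ** (exponent % 2)

# f_j = S^{-1} (degree +1), g_i = S (degree -1); expect (-1)^{N(N-1)/2}
for N in range(1, 12):
    assert koszul_composition_sign([1] * N, [-1] * N) == (-1) ** (N * (N - 1) // 2)
print("Koszul sign of (S^-1)^(x N) o S^(x N) equals (-1)^binom(N,2) for N < 12")
```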
For \(g_{j}\in\operatorname{End}_{A}(a_{j})^{q_{j}}\), let us write \(f[g_{1},\dots,g_{n}]\) for the map
\[\sum_{k_{0}+\dots+k_{n}=N-n}f(1^{\otimes k_{0}}\otimes g_{1}\otimes 1^{ \otimes k_{1}}\otimes\dots\otimes g_{n}\otimes 1^{\otimes k_{n}})\in \operatorname{End}_{A}(N-n+\sum a_{j})^{i+\sum q_{j}}.\]
We define
\[b_{n}(f;g_{1},\dots,g_{n})=\sigma^{-1}(\sigma(f)[\sigma(g_{1}),\dots,\sigma(g_ {n})])\in\operatorname{End}_{A}(N-n+\sum a_{j})^{i+\sum q_{j}}.\]
With this the definition we can prove the following.
**Lemma C.2**.: _We have_
\[b_{n}(f;g_{1},\dots,g_{n})=\sum_{N-n=k_{0}+\dots+k_{n}}(-1)^{\eta}f(1^{\otimes k _{0}}\otimes g_{1}\otimes\dots\otimes g_{n}\otimes 1^{\otimes k_{n}}),\]
_where_
\[\eta=\sum_{0\leq j<l\leq n}k_{j}q_{l}+\sum_{1\leq j<l\leq n}a_{j}q_{l}+\sum_{j =1}^{n}(a_{j}+q_{j}-1)(n-j)+\sum_{1\leq j\leq l\leq n}(a_{j}+q_{j}-1)k_{l}.\]
Proof.: Let us compute \(\eta\) using the definition of \(b_{n}\).
\[\sigma^{-1}(\sigma(f)[\sigma(g_{1}),\ldots,\sigma(g_{n})])\] \[=(-1)^{\binom{N-n+\sum a_{j}}{2}}S^{-1}\circ(\sigma(f)(1^{\otimes k_ {0}}\otimes\sigma(g_{1})\otimes 1^{\otimes k_{1}}\otimes\cdots\otimes\sigma(g_{n}) \otimes 1^{\otimes k_{n}}))\circ S^{\otimes N-n+\sum a_{j}}\] \[=(-1)^{\binom{N-n+\sum a_{j}}{2}}S^{-1}\circ S\circ f\circ(S^{-1 })^{\otimes N}\circ\] \[(1^{\otimes k_{0}}\otimes(S\circ g_{1}\circ(S^{-1})^{\otimes a_{ 1}})\otimes 1^{\otimes k_{1}}\otimes\cdots\otimes(S\circ g_{n}\circ(S^{-1})^{ \otimes a_{n}})\otimes 1^{\otimes k_{n}}))\circ S^{\otimes N-n+\sum a_{j}}\] \[=(-1)^{\binom{N-n+\sum a_{j}}{2}}f\circ((S^{-1})^{k_{0}}\otimes S ^{-1}\otimes\cdots\otimes S^{-1}\otimes(S^{-1})^{k_{n}})\] \[\circ(1^{\otimes k_{0}}\otimes(S\circ g_{1}\circ(S^{-1})^{ \otimes a_{1}})\otimes\cdots\otimes(S\circ g_{n}\circ(S^{-1})^{\otimes a_{n}} )\otimes 1^{\otimes k_{n}}))\circ S^{\otimes N-n+\sum a_{j}}.\]
Now we move each \(1^{\otimes k_{j-1}}\otimes S\circ g_{j}\circ(S^{-1})^{a_{j}}\) to apply \((S^{-1})^{k_{j-1}}\otimes S^{-1}\) to it. Doing this for all \(j=1,\ldots,n\) produces a sign
\[(-1)^{(a_{1}+q_{1}-1)(n-1+\sum k_{l})+(a_{2}+q_{2}-1)(n-2+\sum_{ 2}^{n}k_{l})+\cdots+(a_{n}+q_{n}-1)k_{n}}\] \[=(-1)^{\sum_{j=1}^{n}(a_{j}+q_{j}-1)(n-j+\sum_{j}^{n}k_{l})},\]
and we denote the exponent by
\[\varepsilon=\sum_{j=1}^{n}(a_{j}+q_{j}-1)(n-j+\sum_{j}^{n}k_{l}).\]
So now we have that, decomposing \(S^{\otimes N-n+\sum a_{j}}\), the last map up to multiplication by \((-1)^{\binom{N-n+\sum a_{j}}{2}+\varepsilon}\) is
\[f\circ((S^{-1})^{k_{0}}\otimes g_{1}\circ(S^{-1})^{\otimes a_{1}}\otimes \cdots\otimes g_{n}\circ(S^{-1})^{\otimes a_{n}}\otimes(S^{-1})^{k_{n}}) \circ(S^{\otimes k_{0}}\otimes S^{\otimes a_{1}}\otimes\cdots\otimes S^{ \otimes a_{n}}\otimes S^{\otimes k_{n}}).\]
Now we turn the tensor of inverses into inverses of tensors by introducing the appropriate signs. More precisely, we introduce the sign
\[(-1)^{\delta}=(-1)^{\binom{k_{0}}{2}+\sum\left(\binom{a_{j}}{2}+\binom{k_{j}}{ 2}\right)}. \tag{35}\]
Therefore we have up to multiplication by \((-1)^{\binom{N-n+\sum a_{j}}{2}+\varepsilon+\delta}\) the map
\[f\circ((S^{k_{0}})^{-1}\otimes g_{1}\circ(S^{\otimes a_{1}})^{-1}\otimes \cdots\otimes g_{n}\circ(S^{\otimes a_{n}})^{-1}\otimes(S^{k_{n}})^{-1}) \circ(S^{\otimes k_{0}}\otimes S^{\otimes a_{1}}\otimes\cdots\otimes S^{ \otimes a_{n}}\otimes S^{\otimes k_{n}}).\]
The next step is moving each component of the last tensor product in front of its inverse. This will produce the sign \((-1)^{\gamma}\), where
\[\gamma =-k_{0}\sum_{1}^{n}(k_{j}+a_{j}+q_{j})-a_{1}\left(\sum_{1}^{n}k_ {j}+\sum_{2}^{n}(a_{j}+q_{j})\right)-\cdots-a_{n}k_{n}\] \[=\sum_{j=0}^{n}k_{j}\sum_{l=j+1}^{n}(k_{l}+a_{l}+q_{l})+\sum_{j=1} ^{n}a_{j}\left(\sum_{l=j}^{n}k_{l}+\sum_{l=j+1}^{n}(a_{l}+q_{l})\right)\mod 2. \tag{36}\]
So in the end we have
\[b_{n}(f;g_{1},\dots,g_{n})=\sum_{k_{0}+\dots+k_{n}=N-n}(-1)^{\binom{N-n+\sum a_{j} }{2}+\varepsilon+\delta+\gamma}f(1^{\otimes k_{0}}\otimes g_{1}\otimes 1^{ \otimes k_{1}}\otimes\dots\otimes g_{n}\otimes 1^{\otimes k_{n}}).\]
This means that
\[\eta=\binom{N-n+\sum a_{j}}{2}+\varepsilon+\delta+\gamma.\]
Next, we are going to simplify this sign to get rid of the binomial coefficients.
_Remark C.3_.: If the top number of a binomial coefficient is less than \(2\), then the coefficient is \(0\). In the case of arities or \(k_{j}\) this is because \((S^{-1})^{\otimes 1}=(S^{\otimes 1})^{-1}\), and if the tensor is taken \(0\) times then it is the identity and the equality also holds, so there are no signs.
We are now going to simplify the sign to obtain the desired result. Notice that
\[N-n+\sum_{j}a_{j}=\sum_{i}k_{i}+\sum_{j}a_{j}.\]
In general, consider a finite sum \(\sum_{i}b_{i}\). We can simplify the binomial coefficients mod \(2\) in the following way.
\[\binom{\sum_{i}b_{i}}{2}+\sum_{i}\binom{b_{i}}{2}=\sum_{i<j}b_{i}b_{j}\mod 2.\]
The result of applying this to \(\binom{N-n+\sum a_{j}}{2}\) and adding \(\delta\) from eq. (35) in our sign \(\eta\) is
\[\sum_{0\leq i<l\leq n}k_{i}k_{l}+\sum_{1\leq j<l\leq n}a_{j}a_{l}+\sum_{i,j}k_ {i}a_{j}. \tag{37}\]
Recall \(\gamma\) from Equation (36).
\[\gamma=\sum_{j=0}^{n}k_{j}\sum_{l=j+1}^{n}(k_{l}+a_{l}+q_{l})+\sum_{j=1}^{n}a_ {j}\left(\sum_{l=j}^{n}k_{l}+\sum_{l=j+1}^{n}(a_{l}+q_{l})\right).\]
As we see, all the sums in the previous simplification appear in \(\gamma\) so we can cancel them. Let us rewrite \(\gamma\) in a way that this becomes more clear:
\[\sum_{0\leq j<l\leq n}k_{j}k_{l}+\sum_{0\leq j<l\leq n}k_{j}a_{l}+\sum_{0\leq j <l\leq n}k_{j}q_{l}+\sum_{1\leq j\leq l\leq n}a_{j}k_{l}+\sum_{1\leq j<l\leq n }a_{j}a_{l}+\sum_{1\leq j<l\leq n}a_{j}q_{l}.\]
So after adding the expression (37) modulo \(2\) we have only the terms that include the internal degrees, i.e.
\[\sum_{0\leq j<l\leq n}k_{j}q_{l}+\sum_{1\leq j<l\leq n}a_{j}q_{l}. \tag{38}\]
Let us move now to the \(\varepsilon\) term in the sign to rewrite it.
\[\varepsilon=\sum_{j=1}^{n}(a_{j}+q_{j}-1)(n-j+\sum_{j}^{n}k_{l})=\sum_{j=1}^ {n}(a_{j}+q_{j}-1)(n-j)+\sum_{1\leq j\leq l\leq n}(a_{j}+q_{j}-1)k_{l}\]
We may add this to Equation (38) in such a way that the brace sign becomes
\[\eta=\sum_{0\leq j<l\leq n}k_{j}q_{l}+\sum_{1\leq j<l\leq n}a_{j}q_{l}+\sum_{j=1}^ {n}(a_{j}+q_{j}-1)(n-j)+\sum_{1\leq j\leq l\leq n}(a_{j}+q_{j}-1)k_{l}. \tag{39}\]
as announced at the end of Section 4.
| Derived $A_\infty$-algebras have many theoretical advantages over ordinary $A_\infty$-algebras, but their multi-layered structure makes them difficult to handle in practice. By developing brace algebras and a framework for operadic $A_\infty$-algebras, we can study derived $A_\infty$-algebras in a new conceptual setting. In particular, this construction generalizes the Lie structure on the Hochschild complex of an $A_\infty$-algebra, and we obtain a new, refined version of the Deligne conjecture. |
2308.08918 | IMM: An Imitative Reinforcement Learning Approach with Predictive
Representation Learning for Automatic Market Making | Market making (MM) has attracted significant attention in financial trading
owing to its essential function in ensuring market liquidity. With strong
capabilities in sequential decision-making, Reinforcement Learning (RL)
technology has achieved remarkable success in quantitative trading.
Nonetheless, most existing RL-based MM methods focus on optimizing single-price
level strategies which fail at frequent order cancellations and loss of queue
priority. Strategies involving multiple price levels align better with actual
trading scenarios. However, given that multi-price level strategies involve a
comprehensive trading action space, the challenge of
effectively training profitable RL agents for MM persists. Inspired by the
efficient workflow of professional human market makers, we propose Imitative
Market Maker (IMM), a novel RL framework leveraging both knowledge from
suboptimal signal-based experts and direct policy interactions to develop
multi-price level MM strategies efficiently. The framework starts with
introducing effective state and action representations adept at encoding
information about multi-price level orders. Furthermore, IMM integrates a
representation learning unit capable of capturing both short- and long-term
market trends to mitigate adverse selection risk. Subsequently, IMM formulates
an expert strategy based on signals and trains the agent through the
integration of RL and imitation learning techniques, leading to efficient
learning. Extensive experimental results on four real-world market datasets
demonstrate that IMM outperforms current RL-based market making strategies in
terms of several financial criteria. The findings of the ablation study
substantiate the effectiveness of the model components. | Hui Niu, Siyuan Li, Jiahao Zheng, Zhouchi Lin, Jian Li, Jian Guo, Bo An | 2023-08-17T11:04:09 | http://arxiv.org/abs/2308.08918v1 | IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making
###### Abstract
Market making (MM) has attracted significant attention in financial trading owing to its essential function in ensuring market liquidity. With strong capabilities in sequential decision-making, Reinforcement Learning (RL) technology has achieved remarkable success in quantitative trading. Nonetheless, most existing RL-based MM methods focus on optimizing single-price level strategies which fail at frequent order cancellations and loss of queue priority. Strategies involving multiple price levels align better with actual trading scenarios. However, given that multi-price level strategies involve a comprehensive trading action space, the challenge of effectively training profitable RL agents for MM persists. Inspired by the efficient workflow of professional human market makers, we propose Imitative Market Maker (IMM), a novel RL framework leveraging both knowledge from suboptimal signal-based experts and direct policy interactions to develop multi-price level MM strategies efficiently. The framework starts with introducing effective state and action representations adept at encoding information about multi-price level orders. Furthermore, IMM integrates a representation learning unit capable of capturing both short- and long-term market trends to mitigate adverse selection risk. Subsequently, IMM formulates an expert strategy based on signals and trains the agent through the integration of RL and imitation learning techniques, leading to efficient learning. Extensive experimental results on four real-world market datasets demonstrate that IMM outperforms current RL-based market making strategies in terms of several financial criteria. The findings of the ablation study substantiate the effectiveness of the model components.
## Introduction
Market making (MM) is a process where a market maker continuously places buy and sell orders on both sides of the limit order book (LOB) of a given security. During this process, market makers encounter various risks, including inventory risk, adverse selection risk, and non-execution risk, rendering MM a complex trading task. Optimal MM entails dynamic adjustment of bids and asks in response to the market maker's inventory level and current market status to maximize the risk-adjusted returns.
In contrast to traditional MM approaches [1, 13] which rely on mathematical models with strong assumptions, (deep) Reinforcement Learning (RL) has emerged as a promising approach for developing MM strategies capable of adapting to changing market dynamics. While there has been extensive research on the application of RL for MM, the majority of studies have focused on optimising single-price level policies [2, 14, 15]. Unfortunately, it is pointed out that such strategies result in frequent and unnecessary order cancellations, leading to the loss of order priority and non-execution risks [16, 17]. Consequently, strategies enabling traders to place multi-price level orders in advance to reserve good queue positions beyond best bid/ask levels are better suited for realistic MM scenarios. However, given that multi-price level strategies involve a larger and more fine-grained trading action space than single-price level ones, how to effectively train profitable RL agents for MM remains a challenging problem. Furthermore, the trade-off between profits and the various sources of risk based on personalized risk preferences has not been well addressed. To address these challenges, this paper proposes the Imitative Market Maker (IMM), an RL framework that integrates proficient representation learning and imitation learning techniques, to resolve the optimal MM problem using multi-price level policies.
The insight of IMM comes from the following inspiration: considering the workflow of a skilled human market maker (Figure 1), the professional first gathers both micro- and macro-level market information. The former aids in assessing market liquidity, while the latter contributes to evaluating adverse selection risks. Subsequently, he/she predicts short-term and long-term market trends based on this market information. Afterwards, he/she trades off between risks and profits according to his/her risk preference, leading to the ultimate trading decision (the multiple prices and volumes of the quotes). Within numerous successful trading firms, this workflow plays a pivotal role in deriving robust and profitable MM strategies. Furthermore, beyond learning from interactions with the environment, extracting trading knowledge from experts presents an appealing method for mining profitable patterns [12]. Leveraging expert data is a favorable approach to improve the exploration ability of RL algorithms [22, 23].

Figure 1: Workflow of professional human market makers.
Motivated by these inspirations, IMM integrates a state representation learning unit (SRLU) with an imitative RL unit (IRLU) to facilitate efficient policy learning for MM. Our primary contributions can be summarized as follows:
* IMM introduces effective state and action representations adeptly encoding multi-price level order information. These representations are well-suited for the MM environment, enabling the implementation of order stacking. Furthermore, IMM incorporates multi-granularity predictive signals as auxiliary variables, and employs a temporal convolution and spatial attention (TCSA) network to distill valuable representations from noisy market data.
* IMM utilizes trading knowledge from experts to facilitate effective exploration in the complex trading environment. To meet various risk preferences, IRLU trains customized agents by adjusting utility parameters of a specially crafted reward function.
* Through extensive experiments on four real-world financial futures datasets, we demonstrate that IMM significantly outperforms many baseline methods in relation to both risk-adjusted returns and adverse selection ratios. We underscore IMM's practical applicability to MM through a series of comprehensive exploratory and ablative studies.
## Related Work
**Traditional Finance Methods.** Traditional approaches for MM in the finance literature [1, 1, 2] consider MM as a stochastic optimal control problem that can be solved analytically. For example, the Avellaneda-Stoikov (AS) model [1] assumes a drift-less diffusion process for the mid-price evolution, then uses the Hamilton-Jacobi-Bellman equations to derive closed-form approximations to the optimal quotes. On a related note, [1] consider a variant of the AS model with inventory limits. However, such methods are typically predicated on a set of strong assumptions, and employ multiple parameters that need to be laboriously calibrated on historical data. It seems promising to consider more advanced methods such as RL that enables learning directly from data in a model-free fashion.
**RL-based Methods.** Recent years have witnessed a strong popularity of (deep) RL in the field of quantitative trading [1, 1, 2, 3].
The majority of the RL-based MM approaches adopt a single-price level strategy. Among those studies, several methods proposed defining the action space in advance [2, 2]. Several researchers utilized a "half-spread" action space which chooses a continuous half spread on each side of the book [1, 1, 1, 2]. Unfortunately, when actually implementing such a strategy, to change the half spread on each side of the book, it is necessary to actively cancel orders at each time step and place new orders at the new level. This results in frequent unnecessary order cancellations, leading to the loss of queue position [1, 1].
To overcome this limitation, ladder strategies, which place a unit of volume at all prices in two price intervals, one on each side of the book, have been adopted [1, 1]. The latest work uses variants of this strategy to construct multi-price level MM policies. The \(DRL_{OS}\)[1] agent decides whether to retain one unit of volume on each price level, and the beta policy [1] allows for flexibility in the distribution of order volumes.
However, to provide more accurate trading decisions, multi-price level strategies involve a much more complex, fine-grained action space (multi-level prices and quantities). Little attention has been paid to the inefficient exploration problem caused by the complex action space of multi-price strategies. While these approaches have shown great promise, they might be limited in achieving efficient exploration, particularly in highly dynamic and complex market environments.
## Imitative Market Maker (IMM)
This section introduces the proposed IMM framework. We start with introducing a novel state/action space and illustrating the transition dynamics of MM procedure that accommodates multi-price level order stacking. Subsequently, we elaborate on the SRLU which aims at forecasting multi-granularity signals while extracting valuable representations from the noisy market data. Lastly, we outline the MM policy learning approach, which incorporates the RL and imitation learning objectives.
### Multi-Price Level Strategy
In this subsection, we introduce a novel action space specially crafted to define the multi-price level MM strategy.
In practical MM scenarios, market participants analyze many quantities before sending orders, among which the most important one is the distance between their target price and the "reference market price" \(p_{ref}\), typically the midprice [13]. The LOB can be formulated as a \(2K\)-dimensional vector, where \(K\) denotes the number of available limits on each side. Notably, \(p_{ref}\) serves as the LOB's central point, thereby determining the
positions of the 2K limits \(Q_{\pm i}\), where \(Q_{\pm i}\) represents the limit at the distance \(i-0.5\) ticks to the right (\(+i\)) or left (\(-i\)) of \(p_{ref}\). It is assumed that buy limit orders are placed on the bid side, and sell ones on the ask side. The queue length at \(Q_{i}\) is denoted as \(l_{i}\).
Most existing MM methods adopt the market midprice as \(p_{ref}\) to encode the LOB. However, the specific price linked with level \(i\) may experience frequent shifts due to the dynamic fluctuations in the midprice. Whenever alterations occur in \(p_{ref}\), the corresponding \(l_{i}\) instantaneously transitions to the value of one of its adjacent neighbors. This hinders the extraction of valid micro-market information from LOB. The necessity arises to define a stable reference price.
To this end, this paper formulates \(p_{ref}\) in the subsequent manner: Firstly, we set up a reference price \(\tilde{p}_{ref}\) that follows the midprice. Specifically, in instances where the market spread is odd, \(\tilde{p}_{ref,t}:=m_{t}=\frac{ask_{t}+bid_{t}}{2}\). When the spread is even, \(\tilde{p}_{ref,t}:=m_{t}\pm\frac{\text{price tick}}{2}\), with the selection of the sign based on proximity to the prior value. Afterwards, at the beginning of an episode, we set \(p_{ref,0}=\tilde{p}_{ref,0}\). Then when the midprice \(m_{t}\) increases (or decreases), only if \(l_{-1,t}=0\) (or \(l_{1,t}=0\)), \(p_{ref,t}\) is updated to \(\tilde{p}_{ref,t}\). Consequently, changes of \(p_{ref}\) are possibly caused by one of the three following events: (1) The insertion of a buy/sell limit order within the bid-ask spread while \(Q_{1}\)/\(Q_{-1}\) is empty. (2) A cancellation of the last limit order at one of the best prices. (3) A market order that consumes the last limit order at one of the best offer queues. Note that within our framework, the LOB accommodates empty limits, as depicted in Figure 2. In this way, we obtain a more stable \(p_{ref}\)1, enabling effective encoding of the LOB and multi-price level orders.
Footnote 1: See detailed explanations in the supplementary materials.
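As a rough illustration of the update rule described above, the following sketch (our own simplification; the function names, the half-tick convention and the data layout are assumptions rather than code from the paper) tracks the stable reference price from best bid/ask quotes and the level-one queue lengths.

```python
def follow_midprice(best_bid, best_ask, tick, prev_ref):
    """Auxiliary reference price tilde_p_ref that follows the midprice."""
    mid = (best_bid + best_ask) / 2.0
    spread_ticks = round((best_ask - best_bid) / tick)
    if spread_ticks % 2 == 1:        # odd spread: midprice sits between two ticks
        return mid
    # even spread: shift by half a tick toward the previous reference value
    return mid + (0.5 * tick if prev_ref >= mid else -0.5 * tick)

def update_ref_price(p_ref, tilde_ref_new, mid_old, mid_new, l_minus1, l_plus1):
    """Update p_ref only when the midprice moves and the adjacent queue
    on the relevant side (Q_{-1} for an up-move, Q_{+1} for a down-move) is empty."""
    if mid_new > mid_old and l_minus1 == 0:
        return tilde_ref_new
    if mid_new < mid_old and l_plus1 == 0:
        return tilde_ref_new
    return p_ref
```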
**State Space.** With such a stable reference price, IMM then effectively encodes both micro-level market information and multi-price level orders. To mitigate adverse selection risk, the state space necessitates the inclusion of macro-level market information1. At time step \(t\), the IMM agent observes the state formulated by:
\[\mathbf{s}_{t}=(\mathbf{s}_{t}^{m},\mathbf{s}_{t}^{s},\mathbf{s}_{t}^{p}), \tag{1}\]
where \(\mathbf{s}_{t}^{m}\) denotes the _market variables_ encoding the current market status; \(\mathbf{s}_{t}^{s}\) denotes the _signal variables_, including multi-granularity auxiliary predictive signals; \(\mathbf{s}_{t}^{p}\) denotes the _private variables_, including: the current inventory \(z_{t}\), and the queue position information and volume of the agent's orders resting on the LOB, denoted by \(s_{t}^{g}=(q_{t}^{-K},...,q_{t}^{-1},q_{t}^{1},...,q_{t}^{K})\) and \(s_{t}^{v}=(v_{t}^{-K},...,v_{t}^{-1},v_{t}^{1},...,v_{t}^{K})\) respectively. Here \(q_{t}^{i}\) and \(v_{t}^{i}\) denote the queue position information and volume at price level \(i\) respectively. Suppose there are \(m_{i}\) orders resting at level \(i\), placed at different time steps. The queue position value of the \(j\)-th order at level \(i\) can be defined as \(q_{t}^{i,j}=\frac{l_{i}^{front,j}}{l_{i}},\) where \(l_{i}^{front,j}\) is the queue length in front of this order. Thus the queue position value at price level \(i\) can be defined as the volume-weighted average of the queue position values of the \(m_{i}\) orders: \(q_{t}^{i}=\sum_{j}\frac{l_{i}^{front,j}}{l_{i}}\cdot\frac{v_{t}^{i,j}}{v_{t}^{i}}\). By covering the information of the current quotes in the state, IMM learns to avoid frequent order cancellations and replacements.
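For concreteness, a minimal sketch of this private-state bookkeeping (illustrative code of ours; the data layout is an assumption) could compute the volume-weighted queue position value of a single price level as follows.

```python
def queue_position_value(orders, level_queue_length):
    """orders: list of (queue_ahead, volume) for the agent's resting orders at one
    price level; level_queue_length: total queue length l_i at that level.
    Returns the volume-weighted average of l_i^{front,j} / l_i."""
    total_volume = sum(v for _, v in orders)
    if total_volume == 0 or level_queue_length == 0:
        return 0.0
    return sum((ahead / level_queue_length) * (v / total_volume) for ahead, v in orders)

# Example: two resting orders, with 3 and 7 units queued ahead of them respectively
print(queue_position_value([(3, 2), (7, 1)], level_queue_length=10))  # -> 0.4333...
```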
**Action Space.** Reserving good queue positions beyond best bid/ask levels holds advantages of controlling adverse selection and non-execution risks. Therefore, practitioners tend to deploy order stacking strategies that place limit orders at multiple price levels in advance. IMM introduces an action encoding that expresses the complex multi-price level strategies within a low-dimensional space. At time step \(t\), the action \(\mathbf{a}_{t}\) is defined as
\[\mathbf{a}_{t}=(m_{t}^{*},\delta_{t}^{*},\mathbf{\phi}_{t}^{bid},\mathbf{\phi}_{t}^{ask}), \tag{2}\]
where \(m_{t}^{*}\) and \(\delta_{t}^{*}\) denote the desired quoted midprice w.r.t \(p_{ref}\) and spread respectively. This implies that the agent's target selling price is no lower than \(m_{t}^{*}+\delta_{t}^{*}/2\), and the highest buying price is \(m_{t}^{*}-\delta_{t}^{*}/2\). \(\mathbf{\phi}_{t}^{bid}\) and \(\mathbf{\phi}_{t}^{ask}\) represent parameter vectors that govern the volume distribution of the multi-level quotations. An instance of diverse two-price level actions is illustrated in Figure 3. By adopting such action formulation, the agent gains the flexibility to determine both the width and asymmetry of the quotes with respect to the reference price.
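To illustrate how such an action could be decoded into concrete two-level quotes (a hypothetical decoding consistent with the description and Figure 3; the exact parameterization and rounding used in the paper may differ), consider the following sketch.

```python
def decode_action(p_ref, m_star, delta_star, phi_bid, phi_ask, tick, total_volume=20):
    """Turn (m*, delta*, phi_bid, phi_ask) into two-level quotes on each side.
    m_star is the desired quoted midprice relative to p_ref, delta_star the quoted spread,
    phi_* in [0, 1] splits the per-side volume between two adjacent price levels."""
    best_ask = p_ref + m_star + delta_star / 2.0
    best_bid = p_ref + m_star - delta_star / 2.0
    ask_volume_inner = round(total_volume * phi_ask)
    bid_volume_inner = round(total_volume * phi_bid)
    ask_quotes = [(best_ask, ask_volume_inner), (best_ask + tick, total_volume - ask_volume_inner)]
    bid_quotes = [(best_bid, bid_volume_inner), (best_bid - tick, total_volume - bid_volume_inner)]
    return bid_quotes, ask_quotes
```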
We formulate MM as an episodic RL task. The MM procedure allows for multi-price level order stacking, as specified below: (1) Choose a random start time for the episode and initialize the environment and the simulator. (2) Let the agent choose the desired volumes and price levels at which the agent would like to be positioned in the LOB. (3) Turn these desired positions into orders, including cancelling orders from levels with too much volume and placing new limit orders. (4) Match the orders in market-replay simulator according to the price-time priority. (5) Update the agent's cash and inventory of the traded asset and track profit and loss. (6) Repeat steps (2-4) until the episode terminates. A picture illustration can be found in Figure 3 in the supplementary material.
which are the labels of price movement trend after \(1/6,1,2,5\) minutes respectively. While training the RL policy, the parameters of the pre-trained predictors are frozen, and the outputs constitute the auxiliary signal variables \(\mathbf{s}^{s}\).
**Attention-based Representation Learning.** Deep RL algorithms usually suffer from the low data-efficiency issue. Besides the auxiliary signal prediction, we propose a temporal convolution and spatial attention (TCSA) network to extract additional effective representations from the noisy market data \(\mathbf{x}\). The structure of TCSA is depicted in Figure 4.
The proposed approach IMM first utilizes a temporal convolution network (TCN) [23] block to extract the time-axis relations in the data. Compared to recurrent neural networks, TCN has several appealing properties including parallel computation and longer effective memory. After conducting TCN operations on \(\mathbf{x}\in\mathbb{R}^{F\times L}\) along the time axis, we obtain an output tensor denoted by \(\hat{\mathbf{H}}\in\mathbb{R}^{F\times L}\), where \(F\) is the dimension of features, and \(L\) is the temporal dimension.
Afterward, IMM adopts an attention mechanism [27] to handle the spatial relationships among different features. Given the output of the TCN, we calculate the spatial attention weights as \(\hat{\mathbf{S}}=\mathbf{V}\cdot\text{sigmoid}\big((\hat{\mathbf{H}}\mathbf{W}_{1})\cdot(\hat{\mathbf{H}}\mathbf{W}_{2})^{T}+\mathbf{b}\big),\) where \(\mathbf{W}_{1},\mathbf{W}_{2}\in\mathbb{R}^{L}\) and \(\mathbf{V}\in\mathbb{R}^{F\times F}\) are parameters to learn, and \(\mathbf{b}\in\mathbb{R}^{F\times F}\) is the bias matrix. The matrix \(\hat{\mathbf{S}}\in\mathbb{R}^{F\times F}\) is then normalized by rows to represent the correlation among features: \(\mathbf{S}_{i,j}=\frac{\exp(\hat{\mathbf{S}}_{i,j})}{\sum_{u=1}^{F}\exp(\hat{\mathbf{S}}_{i,u})},\ \forall\,1\leq i,j\leq F\).
We adopt the ResNet [10] structure to alleviate the vanishing gradient problem in deep learning. The final representation abstracted from \(\mathbf{x}\) is denoted by \(\mathbf{H}=\mathbf{S}\hat{\mathbf{H}}+\mathbf{x}\), and it is then translated to a vector with dim \(F^{\prime}\) using a fully connected layer: \(\mathbf{s}^{m}=\text{sigmoid}(\mathbf{W}_{4}\cdot\text{ReLU}(\mathbf{H}\mathbf{W}_{3}+\mathbf{b}_{3})+\mathbf{b}_{4})\). The representation \(\mathbf{s}^{m}\) is concatenated with the signal state \(\mathbf{s}^{s}\) and private state \(\mathbf{s}^{p}\).
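The spatial-attention step can be sketched in PyTorch roughly as follows (an illustrative re-implementation based only on the formulas above, not the authors' code; the TCN block and the batch handling are assumed to be provided elsewhere).

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Computes S = row-softmax(V . sigmoid((H w1)(H w2)^T + b)) and returns S @ H + x."""
    def __init__(self, num_features: int, seq_len: int):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(seq_len))
        self.w2 = nn.Parameter(torch.randn(seq_len))
        self.V = nn.Parameter(torch.randn(num_features, num_features))
        self.b = nn.Parameter(torch.zeros(num_features, num_features))

    def forward(self, h: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # h, x: (F, L); h is the TCN output, x the raw input (residual connection)
        scores = self.V @ torch.sigmoid(torch.outer(h @ self.w1, h @ self.w2) + self.b)
        s = torch.softmax(scores, dim=1)   # normalize each row over features
        return s @ h + x                   # (F, L)

# Example with F = 16 features and a window of L = 64 time steps
attn = SpatialAttention(num_features=16, seq_len=64)
out = attn(torch.randn(16, 64), torch.randn(16, 64))
print(out.shape)  # torch.Size([16, 64])
```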
### Imitative Reinforcement Learning Unit (IRLU)
**A Signal-Based Expert.** To guide efficient exploration, we define a linear suboptimal rule-based expert strategy named Linear in Trend and Inventory with Inventory Constraints (LTIIC), which is commonly used by human experts. Readers might employ other effective expert strategies if available. \(LTIIC(a,b,c,d)\) corresponds to a strategy where a market maker adjusts its quote prices based on both its inventory level and trend prediction signals. If \(-d\leq z_{t}\leq d\) at time \(t\), the ask and bid orders are placed with prices:
\[\left\{\begin{aligned} & ask_{t}^{q}=& m_{t}+a+b\cdot z_{t}+c \cdot\hat{y}_{t},\\ & bid_{t}^{q}=& m_{t}-a+b\cdot z_{t}+c\cdot\hat{y}_{t}, \end{aligned}\right. \tag{3}\]
where \(a\), \(b\), \(c\) and \(d\) are predetermined parameters, \(m_{t}\) stands for the midprice of the LOB, \(z_{t}\) represents the inventory, and \(\hat{y}_{t}\in\{-1,0,1\}\) signifies a short-term predictive trend signal. The insight of the LTIIC strategy is that during a short-term upward market trend, ask-side limit orders are more likely to be executed than bid-side ones. A logical approach involves implementing a narrow half-spread on the bid side and a broader half-spread on the ask side, thus reducing the risk exposure due to adverse selection. Simultaneously, the trader adjusts parameter \(b\) to regulate the inventory level, while \(a\) determines the quoted spread. In cases where \(z_{t}\geq d\) or \(z_{t}\leq-d\), only orders on the side opposite to the inventory are posted.
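A minimal sketch of the LTIIC expert following Equation (3) is shown below (function and variable names are ours; the handling of the inventory-limit cases is our reading of the description above).

```python
def ltiic_quotes(mid, inventory, trend_signal, a, b, c, d):
    """LTIIC(a, b, c, d): linear-in-trend-and-inventory expert with inventory limits.
    trend_signal is the short-term prediction in {-1, 0, +1}."""
    shift = b * inventory + c * trend_signal
    ask = mid + a + shift
    bid = mid - a + shift
    if inventory >= d:     # too long: quote only on the ask side to unwind
        return None, ask
    if inventory <= -d:    # too short: quote only on the bid side
        return bid, None
    return bid, ask
```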
**Policy Learning.** We utilized the actor-critic RL framework [12], where the critic evaluates the action taken by the actor by computing the value function, and the actor (policy) is optimized to maximize the value output by the critic. To improve the sample efficiency, we use the off-policy actor-critic method TD3 [23] as the base learner, and the policy \(\pi\) is updated with the deterministic policy gradient [26]:
\[\pi=\arg\max_{\pi}\mathbb{E}_{(\mathbf{s},a)\sim D}\big{[}Q(\mathbf{s},\pi(\mathbf{s})) \big{]}, \tag{4}\]
where \(Q\) is a value function approximating the expected cumulative reward, \(Q(\mathbf{s}_{t},\mathbf{a}_{t})=\mathbb{E}[\sum_{i=t}^{T}\gamma^{i-t}r_{i}|\mathbf{s}_{t },\mathbf{a}_{t}]\).
In Equation (4), \(D\) denotes the replay buffer collected by a behavior policy, which is generated by adding some noise to the learned policy \(\pi\). Following the TD3 method Fujimoto et al. (2018), the value function \(Q\) is optimized in a twin-delayed manner with data sampled from both the replay buffer \(D\) and the expert dataset \(D_{E}\).

Figure 3: Illustration of action space. Consider a scenario where an agent positions two-level orders on both sides of the LOB; \(\phi_{t}^{ask}\) (the green number) determines the ratio of volume placed at the two adjacent ask price levels (heights of the green bars).

Figure 4: The proposed IMM learning framework.
Since the high-dimensional state space, the complex action space, and the stochastic trading environment induce a hard exploration problem, learning with a pure RL objective in Equation (4) is extremely difficult. To promote policy learning in such a complex trading environment, we propose to augment the RL method with the objective of imitating the quoting behavior in an expert dataset \(D_{E}\) as:
\[\pi=\arg\max_{\pi}\mathbb{E}_{(\mathbf{s},\mathbf{a})\sim D}\big{[}Q(\mathbf{s},\pi(\mathbf{s}) )\big{]}-\mathbb{E}_{(\mathbf{s},\mathbf{a})\sim D_{E}}\big{[}\lambda\cdot(\pi(\mathbf{s})- \hat{a})^{2}\big{]}, \tag{5}\]
where \(\lambda\) is a scaling coefficient that balances maximizing the Q values and minimizing the behavior cloning (BC) loss. We set \(\lambda\) to decrease as training progresses.
As the expert dataset contains reasonable suboptimal MM behaviors, the agent benefits from the imitation learning techniques by extracting advanced trading knowledge. Thus the proposed method could achieve more efficient exploration and policy learning in the highly stochastic market environment compared to RL methods without imitation learning.
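The combined objective in Equation (5) can be sketched as an actor loss in PyTorch as follows (a simplified illustration of the idea only; the actual IMM training loop, network definitions and the \(\lambda\) schedule are not reproduced here, and the function signature is an assumption).

```python
import torch

def imitative_actor_loss(actor, critic, batch_states, expert_states, expert_actions, lam):
    """TD3-style actor loss augmented with a behavior-cloning term on expert data,
    mirroring Equation (5): maximize Q(s, pi(s)) while staying close to expert quotes."""
    rl_term = -critic(batch_states, actor(batch_states)).mean()       # -E[Q(s, pi(s))]
    bc_term = ((actor(expert_states) - expert_actions) ** 2).mean()   # E[(pi(s) - a_E)^2]
    return rl_term + lam * bc_term
```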
**Reward Function for Diverse Utilities.** The decision process of the market makers is subject to several trade-offs, including probability of execution and spread, inventory risk, and compensation from the exchange. To meet the diverse utilities of market makers, three factors are proposed to be considered:
**Profit and loss (\(PnL\))**. PnL is a natural choice for the problem domain, comprising a realized \(PnL\) term (left part) and a floating \(PnL\) term (right part), given by:
\[PnL_{t}=\bigg{(}\sum_{i\in A_{t}}p_{i}^{a}\cdot v_{i}^{a}-\sum_{j\in B_{t}}p_{ j}^{b}\cdot v_{j}^{b}\bigg{)}+(p_{t+1}-p_{t})\cdot z_{t+1}, \tag{6}\]
where \(p\) denotes the market midprice; \(p^{a},v^{a},p^{b},v^{b}\) represents the price and volume of the filled ask (bid) orders respectively; \(z\) signifies the current inventory, with \(z>0\) when the agent holds a greater long position than short.
**Truncated Inventory Penalty**. To mitigate inventory risk, it is reasonable to introduce an additional inventory dampening term. Considering that advanced market makers may choose to hold a non-zero inventory to exploit clear trends while capturing the spread, an enhanced approach involves applying the dampening term solely to higher-risk inventory levels:
\[IP_{t}=-\eta|z_{t}|\cdot\mathbb{I}(|z_{t}|>C), \tag{7}\]
A penalty for inventory holding is applied solely when the absolute inventory \(|z_{t}|\) surpasses a constant \(C\).
**The Market Makers' compensation from the exchange** constitutes a primary revenue stream for numerous market makers Fernandez-Tapia (2015). Therefore, ensuring a substantial volume of transactions to secure compensations holds significant importance for a variety of MM companies. To this end, a bonus term is incorporated to encourage transactions of the agent:
\[C_{t}=\beta\bigg{(}\sum_{i\in A_{t}}p_{i}^{a}\cdot v_{i}^{a}+\sum_{j\in B_{t} }p_{j}^{b}\cdot v_{j}^{b}\bigg{)}, \tag{8}\]
Ultimately, by appropriately tuning the parameters \(\eta\) and \(\beta\) based on personalized utilities, IMM ensures alignment with the requirements of a broad spectrum of market makers, employing the combination of these three categories of rewards:
\[\mathcal{R}(\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t+1})=PnL_{t}+C_{t}+IP_{t}. \tag{9}\]
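A step reward following Equations (6)-(9) might be computed as in the sketch below (our own illustrative code; the fill bookkeeping is assumed to be provided by the simulator, and the argument names are ours).

```python
def step_reward(ask_fills, bid_fills, mid_now, mid_next, inventory, inventory_next,
                eta, beta, inv_threshold):
    """ask_fills / bid_fills: lists of (price, volume) filled during this step."""
    ask_cash = sum(p * v for p, v in ask_fills)
    bid_cash = sum(p * v for p, v in bid_fills)
    pnl = (ask_cash - bid_cash) + (mid_next - mid_now) * inventory_next                  # Eq. (6)
    inv_penalty = -eta * abs(inventory) if abs(inventory) > inv_threshold else 0.0       # Eq. (7)
    compensation = beta * (ask_cash + bid_cash)                                          # Eq. (8)
    return pnl + compensation + inv_penalty                                              # Eq. (9)
```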
## Experiments
### Experimental Setup
We conduct experiments on four datasets comprised of historical data of the spot month contracts of the \(FU\), \(RB\), \(CU\) and \(AG\) futures from the Shanghai Futures Exchange2. The data consists of 5-depth LOB snapshots and aggregated trade information sampled at a \(500\)-millisecond frequency. We use the data from July 2021 to March 2022 (126 trading days) for training with \(20\%\) as the validation set, and test model performance on April 2022 \(\sim\) July 2022 (60 trading days). In each episode, the agent adjusts its 2-level bids and asks every \(500\) milliseconds, with a fixed total volume \(N=20\) units on each side. The episode length is set to 1.5 trading hours, with \(T=10800\) steps.
Footnote 2: Here \(RB\), \(FU\), \(CU\) and \(AG\) refers to the Steel Rebar, Fuel Oil, Copper, and Silver Futures Contracts respectively.
**Benchmarks.** We compare IMM with three rule-based benchmarks and two state-of-the-art RL-based approaches:
1. **FOIC** represents a Fixed Offset with Inventory Constraints strategy introduced by Gasperov and Kostanjcar (2021). \(FOIC(d)\) refers to the strategy that posts bid (ask) orders at the current best bid (ask) while adhering to the inventory constraint \(d\).
2. **LIIC**. A Linear in Inventory with Inventory Constraints strategy Gasperov and Kostanjcar (2021) corresponds to the strategy where a market maker adjusts its quote prices based on its inventory level. The quotes of LIIC can be formulated as \(LTIIC(a,b,c=0,d)\) using Equation (3).
3. **LTIIC** is the expert adopted in IMM.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c|c c c} \hline & \multicolumn{3}{c|}{RB} & \multicolumn{3}{c|}{FU} & \multicolumn{3}{c|}{CU} & \multicolumn{3}{c}{AG} \\ \hline & EPPL(\(10^{\circ}\)) & MAP(\(\min\)) & PaLMAP & EPPL(\(10^{\circ}\)) & MAP(\(\min\)) & PaLMAP & EPPL(\(10^{\circ}\)) & MAP(\(\min\)) & PaLMAP & EPPL(\(10^{\circ}\)) & MAP(\(\min\)) & PaLMAP(\(\min\)) & PaLMAP(\(\min\)) & PaLMAP(\(\min\)) \\ \hline FORC & 3.25 \(\pm\) 4.35 & 255 \(\pm\) 111 & 14 \(\pm\) 22 & -7.79 \(\pm\) 9.25 & 238 \(\pm\) 135 & -43 \(\pm\) 5.66 & -33.05 \(\pm\) 27.63 & 206 \(\pm\) 141 & -161 \(\pm\) 224 & -48.59 \(\pm\) 28.83 & 189 \(\pm\) 154 & -250 \(\pm\) 335 \\ LIIC & 2.26 \(\pm\) 3.32 & 123 \(\pm\) 30 2 & -29 \(\pm\) 6.89 \(\pm\) 6.66 & 115 \(\pm\) 30 & -66 \(\pm\) 6.69 & -24.19 \(\pm\) 14.83 & 150 \(\pm\) 20 & -164 \(\pm\) 513 & -38.9 \(\pm\) 26.22 & 142 \(\pm\) 45 & -302 \(\pm\) 243 \\ LIIC & 9.16 \(\pm\) 4.87 & 65 \(\pm\) 6.6 & 139 \(\pm\) 6.8 & 8.26 \(\pm\) 26.4 & 52 \(\pm\) 3.3 & 160 \(\pm\) 50 & -16.74 \(\pm\) 15.81 & 112 \(\pm\) 109 & -20.23 & -23.57 \(\pm\) 27.28 & 128 \(\pm\) 22 & -264 \(\pm\) 166 \\ \hline \(RLS_{D}\) & 4.36 \(\pm\) 1.64 & **38** & **41** & 14 \(\pm\) 38 & 7.31 \(\pm\) 5.38 & 7.29 \(\pm\) 46 & -19.77 \(\pm\) 17 & 214 \(\pm\) 109 & -22.92 \(\pm\) 29.85 & -25.43 \(\pm\) 25.83 & 107 \(\pm\) 37 & -237 \(\pm\) 235 \\ \hline \(DRL_{0.05}\) & 4.36 \(\pm\) 3.70 & 51 \(\pm\) 15 6.61 & 11.03 \(\pm\) 13.87 & **37** & **3** & 30 \(\pm\) 36 & -15.98 \(\pm\) 18.02 & 647 \(\pm\) 2967 & -99.14 \(\pm\) 147 & -28.39 \(\pm\) 9.27 & 169 \(\pm\) 15.44 & -16.47 \(\pm\) **135** \\ \hline
**IBM** & **16.46 \(\pm\) 9.10** & 96\(\pm\) 1.3 & **165** \(\pm\) **74** & **28.10 \(\pm\) 10.27** & 102 \(\pm\) 14 & **274** \(\pm\) **89** & **4.86 \(\pm\) 10.17** & **111** \(\pm\) 28** & **43** \(\pm\) **87** & **-145** \(\pm\) **20** & 102 \(\pm\) 14 & -274 \(\pm\) 89 \\ \hline \end{tabular}
\end{table}
Table 1: The comparison results of the proposed method and the benchmarks.
4. \(\mathbf{RL_{DS}}\) refers to an RL-based single-price level strategy proposed by [11].
5. \(\mathbf{DRL_{OS}}\) refers to a state-of-the-art RL-based multi-price level strategy proposed in [10]. The agent decides whether to retain one unit of volume on each price level, not allowing for volume distribution across all price levels.
**Evaluation metrics.** We adopt four financial metrics to assess the performance of a MM strategy:
* **Episodic PnL** is a natural choice to evaluate the profitability of a MM agent, since there is no notion of starting capital in MM procedure: \(EPnL_{T}=\sum_{t=1}^{T}PnL_{t}\).
* **Mean Absolute Position (MAP)** accounts for the inventory risk, defined as: \(MAP_{T}=\frac{\sum_{t=1}^{T}|z_{t}|}{\sum_{t=1}^{T}\mathbb{I}(|z_{t}|>0)}\).
* **Return Per Trade (RPT)** evaluates the agent's capability of capturing the spread. It is normalized across different markets by the average market spread \(\overline{\delta^{m}}\). \[RPT_{T}=\left(\frac{\sum_{i\in A_{T}}p_{i}^{a}*n_{i}^{a}}{\sum_{i\in A_{T}}n_{ i}^{a}}-\frac{\sum_{j\in B_{T}}p_{j}^{b}*n_{j}^{b}}{\sum_{j\in B_{T}}n_{j}^{b}} \right)\bigg{/}\overline{\delta^{m}}.\]
* **PnL-to-MAP Ratio (PnLMAP)** simultaneously considers the profitability and the incurred inventory risk of a market making strategy: \(PnLMAP_{T}=\frac{EPnL_{T}}{MAP_{T}}\).
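For reference, the four metrics above could be computed from an episode log roughly as follows (an illustrative sketch with assumed variable names, not the authors' evaluation code).

```python
def episode_metrics(pnl_steps, inventory_steps, ask_fills, bid_fills, avg_market_spread):
    """pnl_steps: per-step PnL; inventory_steps: per-step inventory z_t;
    ask_fills / bid_fills: lists of (price, volume) over the whole episode."""
    epnl = sum(pnl_steps)
    nonzero = [abs(z) for z in inventory_steps if z != 0]
    map_ = sum(nonzero) / max(len(nonzero), 1)
    ask_vol = sum(v for _, v in ask_fills)
    bid_vol = sum(v for _, v in bid_fills)
    avg_ask = sum(p * v for p, v in ask_fills) / max(ask_vol, 1)
    avg_bid = sum(p * v for p, v in bid_fills) / max(bid_vol, 1)
    rpt = (avg_ask - avg_bid) / avg_market_spread
    return {"EPnL": epnl, "MAP": map_, "RPT": rpt, "PnLMAP": epnl / max(map_, 1e-9)}
```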
### Comparison Results with Baselines
For a fair comparison, we tune the hyper-parameters of these methods for the maximum PnLMAP\({}_{T}\) value on the validation dataset. The comparison results of IMM and the benchmarks on the four test datasets are given in Table 1 and supplementary materials. These comparison results indicate that the proposed approach significantly outperforms the benchmarks in terms of both profitability and risk management.
As demonstrated in Table 1, on the RB dataset, the proposed method attains the highest terminal wealth as well as risk-adjusted return, albeit with a slightly elevated MAP in comparison to the expert and the two RL-based methods. Besides, the two multi-price level RL-based agents \(DRL_{OS}\) and IMM outperform the single-price level method \(RL_{DS}\), indicating the superiority of multi-price level strategies. On the FU dataset, IMM not only achieves the highest terminal wealth but also demonstrates the most favorable return-to-risk performance and spread-capturing ability, while maintaining the second-lowest inventory level. The RL-based strategies achieve commendable performance compared to the rule-based strategies, which fail to make profits on most trading days. Moreover, it is observed that IMM acquires a competitive MM strategy, which attains stable profits (the pink line) while sustaining the inventory at a tolerably low level (the compact blue region) in a volatile market, as illustrated in the left part of Figure 5. The inventory fluctuates around zero, which is a desirable behaviour. Remarkably, since IMM does not force the agent to place opposite-side orders to clear its inventory, it is quite an appealing result to see that IMM accomplishes automatic inventory control based on state-derived information. Even in the challenging MM tasks on the CU and AG markets, which show lower market liquidity, the proposed method significantly outperforms the benchmarks in terms of terminal wealth and risk-adjusted return.
### Model Component Ablation Study
To investigate the effectiveness of the model component, we compare the proposed IMM with its five variations summarized in Table 2, and the results are listed in Table 3.
**Effectiveness of the State Representations.** To examine the efficacy of the proposed state representations, we analyse the performance of the three \(\text{IMM}_{SL(\cdot)}\) models. Based on Table 3, it is evident that introducing multi-granularity signals as auxiliary observations is of significant importance in enhancing the MM strategy's performance. Figure 6 visualizes the behaviour of IMM during two 1-minute periods with different trends on the FU test dataset. As demonstrated in Figure 6(a), benefiting from the auxiliary signals, the IMM agent anticipates an ascending price trend and proactively maintains a long position prior to the onset of a short-term bullish trend (steps 0-40). As the trend terminates (steps 70-120), the agent gradually reduces its inventory by placing orders with a narrow ask-side half-spread and a broad bid-side one w.r.t. the market midprice. Similarly, the IMM agent demonstrates proficient behavior in downside markets, as shown in Figure 6(b). Combined with the adverse selection ratio depicted in Figure 7, it can be concluded that IMM has learned to mitigate adverse selection risk.
For a deeper investigation into the role of auxiliary signals in improving performance, we calculated the adverse selection ratio as \(adv\_ratio=\frac{\#\text{ adverse fills}}{\#\text{ fills in the last interval}}\). Here, adverse fills
Figure 5: Performance of IMM and \(\text{IMM}_{-}PnL\) on FU on Jun. 14th, 2022.
Figure 6: 1-minute Cases. IMM performs well in both up- and down-trend markets.
refer to limit bid (ask) orders which are executed shortly before a downward (upward) movement of the best bid (ask) price: if the best bid price is about to move down, it would have been better to wait and quote at the next bid price level (Chung et al., 2022). Based on Figure 7(a), we can deduce that the multi-granularity predictive signals play a vital role in mitigating adverse selection. That might be because they provide effective information about market conditions, which enables a more flexible trade-off between spread-capturing and trend-chasing. Moreover, Figure 7(b) demonstrates that the information regarding multi-price level orders additionally contributes to enhancing the fill count by minimizing frequent cancellations and preserving queue positions.
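A rough sketch of how this ratio could be computed from a logged fill sequence is given below; the lookahead window and the data layout are assumptions made purely for illustration.

```python
# Rough sketch of the adverse-selection ratio; the lookahead horizon is assumed.
def adverse_selection_ratio(fills, best_bid, best_ask, lookahead=10):
    """fills: list of (t, side) with side in {'bid', 'ask'};
    best_bid / best_ask: per-step best quote prices."""
    adverse = 0
    for t, side in fills:
        end = min(t + lookahead, len(best_bid) - 1)
        if side == 'bid' and min(best_bid[t:end + 1]) < best_bid[t]:
            adverse += 1          # filled just before the best bid moved down
        elif side == 'ask' and max(best_ask[t:end + 1]) > best_ask[t]:
            adverse += 1          # filled just before the best ask moved up
    return adverse / len(fills) if fills else 0.0
```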
**Effectiveness of the IRLU.** The comparison results between IMM\({}_{BC(\cdot)}\) and IMM provide empirical evidence of the importance of extracting additional knowledge from the expert and conducting efficient exploration, particularly for challenging financial tasks. Besides, IMM outperforms the expert LTIIC strategy by a large margin. During the initial training phase, the RL agent faces challenges in identifying a viable trading approach and sustaining it over multiple steps; training while pursuing the imitation learning objective facilitates the agent in obtaining favorable rewards and drawing valuable lessons from these experiences.
**Effects of Reward Functions.** To investigate whether the proposed reward function meets different utilities, we train IMM with three different rewards: the IMM\({}_{PnL}\) method trains IMM using the PnL reward (\(\eta,\beta=0\)); the IMM\({}_{PnL+C}\) method trains IMM with the combination of the PnL and compensation rewards (\(\eta=0,\beta>0\)); the IMM\({}_{PnL+IP}\) method trains IMM with the combination of the PnL and truncated inventory penalty rewards (\(\eta>0,\beta=0\)). The hyperparameters resulting in the maximum PnLMAP value on the validation dataset are chosen for these models. The results on the FU dataset are outlined in Table 4. Here the metric #T signifies the number of fills normalized by the episode length.
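The following schematic sketch shows how the three training variants relate through \(\eta\) and \(\beta\); the concrete forms of the compensation and truncated inventory-penalty terms are defined earlier in the paper, so the linear and truncated forms used here are assumptions made only for illustration.

```python
# Schematic per-step reward; the functional forms below are illustrative assumptions.
def reward(step_pnl, inventory, n_fills, eta, beta, z_max=100.0):
    compensation = beta * n_fills                    # fill-based compensation term
    inv_penalty = eta * min(abs(inventory), z_max)   # truncated inventory penalty
    return step_pnl + compensation - inv_penalty

# eta = beta = 0     -> IMM_PnL
# eta = 0, beta > 0  -> IMM_PnL+C
# eta > 0, beta = 0  -> IMM_PnL+IP
# eta > 0, beta > 0  -> proposed IMM reward (both terms active)
```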
As shown in Table 4, the IMM\({}_{PnL}\) strategy tends to have the most substantial inventory risk exposure. We depict an example of the intra-day performance of the IMM\({}_{PnL}\) policy in the right part of Figure 5. We observe that the IMM\({}_{PnL}\) agent learns to chase trends by maintaining a large inventory (\(>1000\)). This results in poor out-of-sample performance with large variance. Therefore, the truncated inventory penalty term proves crucial in curbing blind trend-chasing tendencies.
The IMM\({}_{PnL+C}\) strategy also grapples with elevated inventory risk, yet it achieves a greater number of transactions #T compared to IMM\({}_{PnL}\). The IMM\({}_{PnL+IP}\) strategy attains the highest average terminal wealth and return per trade, but concurrently records the lowest #T, a circumstance that may be less advantageous for risk-averse market makers. The strategy trained with the proposed reward significantly improves the return-to-risk performance with the lowest MAP, as well as a larger #T, compared to the IMM\({}_{PnL+IP}\) strategy. Although it has the lowest average terminal wealth, the proposed IMM strategy acts very stably and might be the most favorable policy among these four for a risk-averse market maker. Besides, since the proposed strategy has the largest #T, it could receive more compensation from the exchange.
## Conclusion
In this paper, we propose IMM, a novel RL-based approach aimed at efficiently learning multi-price level MM policies. IMM first introduces efficient state and action representations. Subsequently, it pre-trains an SL-based prediction model to generate multiple trend signals as effective auxiliary observations. Furthermore, IMM utilizes a TCSA network to handle the temporal and spatial relationships in noisy financial data. By abstracting trading knowledge from a sub-optimal expert while interacting with the environment, IMM explores the state and action spaces efficiently. Experiments on four futures markets demonstrate that IMM outperforms the benchmarks, and further ablation studies verify the effectiveness of the components of the proposed method.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & EPnL[10\({}^{3}\)] & MAP[unit] & PnLMAP & \#T \\ \hline IMM\({}_{PnL}\) & \(58.76\pm 94.43\) & \(2156\pm 655\) & \(31\pm 48\) & \(4.43\pm 0.94\) \\ IMM\({}_{PnL+C}\) & \(42.86\pm 123.04\) & \(2041\pm 465\) & \(27\pm 68\) & \(4.85\pm 1.09\) \\ IMM\({}_{PnL+IP}\) & \(\mathbf{73.07\pm 53.83}\) & \(756\pm 289\) & \(90\pm 46\) & \(4.42\pm 0.96\) \\ IMM & \(\mathbf{28.097\pm 10.27}\) & \(\mathbf{103\pm 15}\) & \(\mathbf{274\pm 89}\) & \(\mathbf{5.15\pm 1.19}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of IMM variations trained with different reward preferences on FU dataset.
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Models & QuotesInfo & Signals & TCSA & RL & IL \\ \hline IMM\({}_{SL(m)}\) & O & O & X & O & O \\ IMM\({}_{SL(s)}\) & O & X & O & O & X \\ IMM\({}_{SL(q)}\) & X & O & O & O & O \\ IMM\({}_{BC(0)}\) & O & O & O & O & X \\ IMM\({}_{BC(1)}\) & O & O & O & X & O \\ \hline \hline \end{tabular}
\end{table}
Table 2: Five variations of the proposed IMM.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & EPnL[10\({}^{3}\)] & MAP[unit] & PnLMAP & SR \\ \hline IMM\({}_{SL(m)}\) & \(10.57\pm 8.63\) & \(74\pm 41\) & \(142\pm 39\) & \(1.22\) \\ IMM\({}_{SL(s)}\) & \(7.83\pm 3.64\) & \(\mathbf{49\pm 5}\) & \(159\pm 46\) & \(2.15\) \\ IMM\({}_{SL(q)}\) & \(10.20\pm 9.72\) & \(74\pm 47\) & \(104\pm 56\) & \(1.05\) \\ IMM\({}_{BC(0)}\) & \(14.67\pm 5.11\) & \(85\pm 5\) & \(172\pm 57\) & \(\mathbf{2.87}\) \\ IMM\({}_{BC(1)}\) & \(8.22\pm 3.70\) & \(51\pm 4\) & \(156\pm 61\) & \(2.22\) \\ IMM & \(\mathbf{28.097\pm 10.27}\) & \(103\pm 15\) & \(\mathbf{274\pm 89}\) & \(2.80\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison results of ablation study on FU dataset.
Figure 7: The daily adverse selection ratios and normalized number of fills. | 市場形成(MM)は、市場Liquidityの確保に重要な機能を果たすため、金融取引において注目を集めています。優れた順序的決断能力を持つReinforcement Learning(RL)技術は、定量取引において顕著な成功を収めてきました。しかし、既存のRLベースのMM手法の大半は、単一価格レベルの戦略に焦点を当てており、頻繁な注文キャンセルとキュー優先度の喪失に失敗しています。複数の価格レベルを含む戦略は、実際の取引シナリオに適しています。しかし、複数の価格レベルを含む戦略は、複雑なため、MMのために利益をもたらすRLエージェントを効果的に訓練することが難しい課題を抱えています。プロフェッショナルの市場形成者による効率的なワークフローをインスピレーションに、私たちは、ImitativeMarket Maker(IMM)という、サブ最適な信号に基づく専門家の知識と直接のポリシー相互作用を、効率的な多価格レベルMM戦略 |
2308.01082 | Black hole thermodynamics in Horndeski theories | We investigate thermodynamics of static and spherically symmetric black holes
(BHs) in the Horndeski theories. Because of the presence of the
higher-derivative interactions and the nonminimal derivative couplings of the
scalar field, the standard Wald entropy formula may not be directly applicable.
Hence, following the original formulation by Iyer and Wald, we obtain the
differentials of the BH entropy and the total mass of the system in the
Horndeski theories, which lead to the first-law of thermodynamics via the
conservation of the Hamiltonian. Our formulation covers the case of the static
and spherically symmetric BH solutions with the static scalar field and those
with the linearly time-dependent scalar field in the shift-symmetric Horndeski
theories. We then apply our results to explicit BH solutions in the Horndeski
theories. In the case of the conventional scalar-tensor theories and the
Einstein-scalar-Gauss-Bonnet theories, we recover the BH entropy obtained by
the Wald entropy formula. In the shift-symmetric theories, in the case of the
BH solutions with the static scalar field we show that the BH entropy follows
the ordinary area law even in the presence of the nontrivial profile of the
scalar field. On the other hand, in the case of the BH solutions where the
scalar field linearly depends on time, i.e., the stealth Schwarzschild and
Schwarzschild-(anti-) de Sitter solutions, the BH entropy also depends on the
profile of the scalar field. By use of the entropy, we find that there exists
some range of the parameters in which Schwarzschild$-$(AdS) BH with non-trivial
scalar field is thermodynamically more stable than Schwarzschild$-$(AdS) BH without
scalar field in general relativity. | Masato Minamitsuji, Kei-ichi Maeda | 2023-08-02T11:12:34 | http://arxiv.org/abs/2308.01082v2 | # Black hole thermodynamics in Horndeski theories
###### Abstract
We investigate thermodynamics of static and spherically symmetric black holes (BHs) in the Horndeski theories. Because of the presence of the higher-derivative interactions and the nonminimal derivative couplings of the scalar field, the standard Wald entropy formula may not be directly applicable. Hence, following the original formulation by Iyer and Wald, we obtain the differentials of the BH entropy and the total mass of the system in the Horndeski theories, which lead to the first-law of thermodynamics via the conservation of the Hamiltonian. Our formulation covers the case of the static and spherically symmetric BH solutions with the static scalar field and those with the linearly time-dependent scalar field in the shift-symmetric Horndeski theories. We then apply our results to explicit BH solutions in the Horndeski theories. In the case of the conventional scalar-tensor theories and the Einstein-scalar-Gauss-Bonnet theories, we recover the BH entropy obtained by the Wald entropy formula. In the shift-symmetric theories, in the case of the BH solutions with the static scalar field we show that the BH entropy follows the ordinary area law even in the presence of the nontrivial profile of the scalar field. On the other hand, in the case of the BH solutions where the scalar field linearly depends on time, i.e., the stealth Schwarzschild and Schwarzschild-(anti-) de Sitter solutions, the BH entropy also depends on the profile of the scalar field. By use of the entropy, we find that there exists some range of the parameters in which Schwarzschild\(-\)(AdS) BH with non-trivial scalar field is thermodynamically more stable than Schwarzschild\(-\)(AdS) BH without scalar field in general relativity. Finally, we consider the Horndeski theories minimally coupled to the \(U(1)\)-invariant vector field where BH solutions contain the mass and the electric charge, and clarify the conditions under which the differential of the BH entropy is integrable in spite of the presence of the two independent charges.
## I Introduction
General relativity (GR) is known as the unique gravitational theory in four dimensions which contains only the two degrees of freedom (DOFs) of the metric and preserves the Lorentz symmetry [1]. GR has been tested by local experiments as well as astrophysical probes [2], while future gravitational-wave (GW) astronomy [3] and black hole (BH) shadow measurements [4] will allow us to clarify gravitational physics in the so-called strong-field regimes, as in the vicinity of BHs and neutron stars [5; 6; 7; 8]. On the other hand, the standard cosmological model based on GR has been plagued by tensions among today's measurements [9; 10], which have led to questions about the validity of GR on cosmological distance scales. In order to resolve these tensions, gravitational theories other than GR have been extensively studied [2; 5; 11; 12].
One of the simplest and most robust modifications to GR is provided by scalar-tensor (ST) theories which possess a scalar field (denoted by \(\phi\)) DOF as well as the metric tensor (denoted by \(g_{\mu\nu}\)) DOFs [13]. Traditionally, ST theories which include (non)canonical kinetic terms and/or nonminimal coupling to the spacetime curvature have been applied to inflationary universe and/or dark energy models (see e.g., Refs. [11; 12; 14; 15]). The framework of the ST theories has been extensively generalized by the (re)discovery of the Horndeski theories [16; 17; 18], which are known as the most general ST theories with second-order equations of motion, despite the existence of higher-derivative interactions of the scalar field \(\phi\) and the nonminimal derivative coupling to the spacetime curvature. The Horndeski theories are characterized by the four independent coupling functions \(G_{2,3,4,5}(\phi,X)\), where \(X:=-(1/2)g^{\mu\nu}\nabla_{\mu}\phi\nabla_{\nu}\phi\) represents the canonical kinetic term of the scalar field with \(\nabla_{\mu}\) being the covariant derivative associated with the metric \(g_{\mu\nu}\). The framework of the Horndeski theories has been extended to the Degenerate-Higher-Order-Scalar-Tensor (DHOST) theories [19; 20] and beyond-DHOST theories [21; 22; 23; 24], which eliminate the Ostrogradski ghosts by imposing the degeneracy conditions among the higher-derivative equations of motion. The existence of BH solutions and their properties will be very important in distinguishing such new classes of ST theories from the theoretical perspective. This offers an interesting possibility for probing the possible deviation from GR in strong field regimes.
In GR, the uniqueness theorem states that an asymptotically flat, stationary and axisymmetric BH is described by the Kerr solution, which is characterized only by mass and angular momentum [25; 26; 27]. This reduces to the Schwarzschild solution in the limit of a static and spherically symmetric spacetime. The BH no-hair theorem
states that only the BH solutions are Schwarzschild or Kerr solutions in the case of vacuum spacetime. We can extend the theorem to the case with a scalar field, assuming an appropriate condition on the potential. It also holds for the various ST theories with a canonical scalar field \(\phi\)[28; 29], a generalized kinetic term [30], as well as a scalar field nonminimally coupled to the scalar curvature \(F(\phi)R\)[31; 32; 33; 34]. In the shift-symmetric Horndeski theories which are invariant under the constant shift transformation \(\phi\rightarrow\phi+c\), where the functions \(G_{2,3,4,5}\) depend only on \(X\), Ref. [35] showed that a no-hair result of static and spherically symmetric BH solutions holds under the following hypotheses: (i) the scalar field shares the same symmetry as the static and spherically symmetric metric; (ii) the spacetime is asymptotically flat with a vanishing radial derivative \(\psi^{\prime}(r)\to 0\) at spatial infinity (\(r\rightarrow\infty\)); (iii) the norm of the Noether current associated with the shift symmetry \(J_{\mu}J^{\mu}\) is finite on the BH event horizon; (iv) a canonical kinetic term \(X\) is present in the Lagrangian; (v) the \(X\)-derivatives of \(G_{2,3,4,5}\) contain only positive or zero powers of \(X\). If we violate at least one of the conditions given above, it is possible to realize hairy BH solutions endowed with nontrivial scalar hair. The no-hair theorem for the static and spherically symmetric BH solutions has been extended to the case of the shift-symmetric beyond-Horndeski theories in Ref. [36]. The no-hair theorem in the shift-symmetric Horndeski theories for BH solutions has been generalized to the case of the stationary and axisymmetric BHs in Ref. [37].
For a scalar field with the linear dependence on time \(t\) of the form \(\phi=qt+\psi(r)\) with \(q\) being constant, which evades the hypothesis (i), there exist the stealth Schwarzschild solution [38; 39; 40] and the BH solutions with asymptotically (anti-)de Sitter [(A)dS] spacetimes [38; 41]. If the asymptotic flatness of spacetime is not imposed, which evades the hypothesis (ii), the linear quartic derivative coupling \(X\) in \(G_{4}\) gives rise to the exact hairy BH solutions with an asymptotic geometry mimicking the Schwarzschild-AdS spacetime [42; 43; 44; 45]. For the coupling \(G_{5}\propto\ln|X|\), which is equivalent to the linear coupling to the Gauss-Bonnet (GB) term \(\phi R_{\rm GB}^{2}\)[46], where
\[R_{\rm GB}^{2}:=R^{2}-4R_{\mu\nu}^{2}+R_{\mu\nu\alpha\beta}^{2}, \tag{1}\]
is the GB term, there exists the asymptotically-flat hairy BH solution whose metric components are corrected by the GB coupling [47; 48]. There also exists asymptotically-flat BH solution in the model where \(G_{4}(X)\supset(-X)^{1/2}\)[49]. These solutions arise from the violation of the hypothesis (v). We note that there also exist the hairy BH solutions for non-shift-symmetric GB couplings \(e^{-c\phi}R_{\rm GB}^{2}\) with \(c\) being constant [50; 51; 52] and for BH scalarization models which occur for \(Z_{2}\) symmetric coupling functions [53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68].
The linear stability analysis of the static and spherically symmetric BH solutions in the Horndeski theories have been performed in the literature, e.g., [69; 70; 71]. These linear stability conditions have been applied to various static and spherically symmetric BH solutions with the nontrivial profile of the scalar field in the Horndeski theories in Refs. [72; 73]. In generic Horndeski theories, static and spherically symmetric BH solutions with a non-vanishing constant kinetic term on the horizon \(X\neq 0\) inevitably suffer from a ghost or gradient instability [73] including the solutions discussed in Refs. [42; 43; 44; 45]. On the other hand, it was shown that within the perturbative regime the static and spherically symmetric BH solutions in ST theories with the power-law couplings to the GB invariant are free from the ghost or gradient instability, which include asymptotically-flat BH solutions in the shift-symmetric theory with the linear coupling to the GB invariant \(G_{5}(X)\propto\ln|X|\)[47; 48]. However, in the models where the scalar field linearly depends on time, e.g., the stealth Schwarzschild solution [38; 41], the standard linear perturbation analysis cannot be applied because the perturbations become infinitely strongly coupled [74; 75]. Thus, we will apply another way to see BH stability, that is BH thermodynamics. When we have two BH solutions, we can compare their entropies and then argue that the BH solution with smaller entropy is more unstable than the other BH solution with larger entropy. In this paper, we will focus on BH thermodynamics in the Horndeski theories. Although the Wald entropy formula [76] has been useful for computing the BH entropy in the covariant gravitational theories which contain the dependence on the Riemann tensor, this may not be directly applicable to the Horndeski theories because of the presence of the derivative interactions of the scalar field and the nonminimal derivative couplings of the scalar field to the spacetime curvature tensors [77; 78]. The terms which contain the spacetime curvature tensors may be replaced with the higher-derivatives of the scalar field with use of the properties of the Riemann tensor and the partial integration of the action. The apparent dependence of the action on the spacetime curvature tensors may be modified before and after a partial integration, although the action after the partial integration is equivalent to that before the partial integration under the assumption of no contribution from the boundaries. In this work, following the original formulation by Iyer and Wald [79], we will construct the BH thermodynamics in the Horndeski theories from the first principle. Since the Horndeski theories preserve the four-dimensional diffeomorphism invariance, there exists the associated Noether charge potential whose explicit form was obtained in Ref. [80]. Since the Noether charge potential is independent of the apparent modification of the action by the partial integration, this should be able to provide the unique description of the first-law of BH thermodynamics and the BH entropy. Iyer and Wald showed that the variation of the Hamiltonian is given by that of the Noether charge potential evaluated on the boundaries, i.e., in our case, the BH event horizon and spatial infinity [79]. The conservation of the total Hamiltonian of the BH system reproduces the first law of the BH thermodynamics. 
Our formulation can be applied to both the shift-symmetric and non-shift-symmetric subclasses of the Horndeski theories. Previously, for a subclass of the Horndeski theories with
the nonminimal coupling to the Einstein tensor \(G^{\mu\nu}\nabla_{\mu}\phi\nabla_{\nu}\phi\), the Iyer-Wald formulation has been applied to the BH solutions without the electric field in Refs. [77; 78] and with the electric field in Ref. [81]. The Iyer-Wald formulation has also been applied to the planar BH solutions in some classes of the Horndeski theories in arbitrary dimensions [82]. Our analysis will cover the whole of the Horndeski theories and can be applied to all the static and spherically symmetric BH solutions, including those with the linearly time-dependent scalar field in the shift-symmetric theories [38; 41].
The paper is organized as follows: In Sec. II, we apply the formulation by Iyer and Wald to the Horndeski theories. In Sec. III, we discuss the entropy and mass for the static and spherically symmetric BH solutions with the static scalar field in the Horndeski theories. In Sec. IV, we discuss the entropy and mass of the system for the BH solutions with the linearly time-dependent scalar field in the shift-symmetric Horndeski theories. In Sec. V, we investigate the thermodynamical stability of the stealth Schwarzschild BH solutions and the Schwarzschild-(A)dS BH solutions with the linear time dependence in the shift-symmetric Horndeski theories which are discussed in Sec. IV. In Sec. VI, we consider the Horndeski theories minimally coupled to the \(U(1)\)-invariant vector field where BH solutions contain the mass and the electric charge, and clarify the conditions under which the differential of the BH entropy is integrable in spite of the presence of the two independent charges. The last Sec. VII is devoted to a brief summary and conclusion.
## II Iyer-Wald formulation in the Horndeski theories
### The Horndeski theories
We consider the Horndeski theories [16; 17; 18] whose action is composed of the four independent parts
\[S = \int d^{4}x\sqrt{-g}\mathcal{L}=\int d^{4}x\sqrt{-g}\sum_{i=2}^{5} \mathcal{L}_{i}, \tag{2}\]
with the Lagrangian densities given by
\[\mathcal{L}_{2} := G_{2}(\phi,X), \tag{3}\] \[\mathcal{L}_{3} := -G_{3}(\phi,X)\Box\phi,\] (4) \[\mathcal{L}_{4} := G_{4}(\phi,X)R+G_{4X}(\phi,X)\left[\left(\Box\phi\right)^{2}- \left(\phi^{\alpha\beta}\phi_{\alpha\beta}\right)\right],\] (5) \[\mathcal{L}_{5} := G_{5}(\phi,X)G_{\mu\nu}\phi^{\mu\nu}-\frac{1}{6}G_{5X}(\phi,X) \left[(\Box\phi)^{3}-3\Box\phi\left(\phi^{\alpha\beta}\phi_{\alpha\beta} \right)+2\phi_{\alpha}{}^{\beta}\phi_{\beta}{}^{\rho}\phi_{\rho}{}^{\alpha} \right], \tag{6}\]
where \(g_{\mu\nu}\) is the spacetime metric, \(R\) and \(G_{\mu\nu}\) are the Ricci scalar and Einstein tensor associated with the metric \(g_{\mu\nu}\), respectively, \(\phi\) is the scalar field, \(\phi_{\mu}=\nabla_{\mu}\phi\), \(\phi_{\mu\nu}=\nabla_{\mu}\nabla_{\nu}\phi\), and so on are the short-hand notation for the covariant derivatives of the scalar field, with \(\nabla_{\mu}\) being the covariant derivative associated with the metric \(g_{\mu\nu}\). \(X\) represents the canonical kinetic term \(X:=-(1/2)g^{\mu\nu}\phi_{\mu}\phi_{\nu}\) with use of the short-hand notation, and \(G_{2,3,4,5}(\phi,X)\) are the free functions of \(\phi\) and \(X\). We also define \(\phi^{\mu}{}_{\nu}:=g^{\mu\alpha}\phi_{\alpha\nu}\), \(\phi^{\mu\nu}:=g^{\nu\alpha}\phi^{\mu}{}_{\alpha}\), and \(\Box\phi:=g^{\mu\nu}\phi_{\mu\nu}\).
### The Noether charge potential associated with the diffeomorphism invariance
The variation of the action (2) is given by
\[\delta S=\int d^{4}x\sqrt{-g}\left(E_{\mu\nu}\delta g^{\mu\nu}+E_{\phi}\delta \phi+\nabla_{\mu}J^{\mu}\right), \tag{7}\]
where the equations of motion of the metric and scalar field, \(E_{\mu\nu}=0\) and \(E_{\phi}=0\), respectively, can be found in Ref. [18] for instance, and the boundary current is given by [80]
\[J^{\mu}=\sum_{i=2}^{5}J^{\mu}_{(i)}, \tag{8}\]
which is composed of the parts from the Lagrangians (3)-(6)
\[J^{\mu}_{(2)} = -G_{2X}\phi^{\mu}\delta\phi, \tag{9}\]
\[J^{\mu}_{(3)} = -\frac{1}{2}G_{3}\left(\mathfrak{h}\phi^{\mu}-2\mathfrak{h}^{\mu\nu} \phi_{\nu}+2\nabla^{\mu}\delta\phi\right)+\delta\phi G_{3X}\square\phi\phi^{\mu}+ \delta\phi\nabla^{\mu}G_{3}, \tag{10}\] \[J^{\mu}_{(4)} = G_{4X}\square\phi\left(\mathfrak{h}\phi^{\mu}-2\mathfrak{h}^{\mu \nu}\phi_{\nu}\right)+2G_{4X}\square\phi\nabla^{\mu}(\delta\phi)-2\nabla^{\mu} \left(G_{4X}\square\phi\right)\delta\phi\] (11) \[+G_{4XX}\left(\phi^{\alpha\beta}\phi_{\alpha\beta}-(\square\phi)^ {2}\right)\phi^{\mu}\delta\phi+G_{4X}\left(2\phi^{\mu\rho}\phi^{\sigma}-\phi^{ \rho\sigma}\phi^{\mu}\right)\mathfrak{h}_{\rho\sigma}-2G_{4X}\phi^{\mu\nu} \nabla_{\nu}\delta\phi\] \[+2\nabla_{\nu}\left(G_{4X}\phi^{\mu\nu}\right)\delta\phi- \mathfrak{h}^{\mu\nu}\nabla_{\nu}G_{4}+G_{4}\nabla_{\nu}\mathfrak{h}^{\mu \nu}+\mathfrak{h}\nabla^{\mu}G_{4}-G_{4}\nabla^{\mu}\mathfrak{h}-G_{4X}R\phi^ {\mu}\delta\phi,\] \[J^{\mu}_{(5)} = \frac{1}{4}G_{5X}\phi^{\alpha\beta}\phi_{\alpha\beta}\left(\mathfrak {h}\phi^{\mu}-2\mathfrak{h}^{\mu\nu}\phi_{\nu}\right)-\frac{1}{2}\mathfrak{h} _{\rho\sigma}G_{5X}\square\phi\left(2\phi^{\sigma\mu}\phi^{\rho}-\phi^{\sigma \rho}\phi^{\mu}\right)\] (12) \[-\frac{1}{4}G_{5X}(\square\phi)^{2}\left(\mathfrak{h}\phi^{\mu}+2 \nabla^{\mu}\delta\phi-2\mathfrak{h}^{\mu\nu}\phi_{\nu}\right)+\frac{1}{2} \nabla^{\mu}\left[G_{5X}(\square\phi)^{2}\right]\delta\phi\] \[+\frac{1}{2}G_{5X}\phi^{\alpha\beta}\phi_{\alpha\beta}\nabla^{\mu }\delta\phi-\frac{1}{2}\delta\phi\nabla^{\mu}\left[G_{5X}\left(\phi_{\alpha \beta}\right)^{2}\right]+G_{5X}\square\phi\phi^{\mu\nu}\nabla_{\nu}\delta\phi\] \[-\delta\phi\nabla_{\nu}\left(G_{5X}\square\phi\phi^{\mu\nu}\right) -G_{5X}\phi^{\mu\sigma}\phi_{\nu\sigma}\nabla^{\nu}\delta\phi+\delta\phi \nabla^{\nu}\left(G_{5X}\phi^{\mu\sigma}\phi_{\nu\sigma}\right)\] \[+\frac{1}{6}G_{5XX}\left[\left(\square\phi\right)^{3}-3\square\phi (\phi_{\alpha\beta})^{2}+2(\phi_{\alpha\beta})^{3}\right]\phi^{\mu}\delta \phi+\frac{1}{2}G_{5X}\phi^{\nu}_{\sigma}\left(2\phi^{\sigma\mu}\phi^{\rho}- \phi^{\sigma\rho}\phi^{\mu}\right)\mathfrak{h}_{\rho\nu}\] \[-\mathfrak{h}_{\rho\sigma}\nabla^{\sigma}\left(G_{5}\phi^{\mu \rho}\right)+G_{5}\phi_{\sigma\rho}\nabla^{\sigma}\mathfrak{h}^{\rho\mu}- \frac{1}{2}G_{5}\phi_{\rho\sigma}\nabla^{\mu}\mathfrak{h}^{\rho\sigma}+\frac{ 1}{2}\mathfrak{h}^{\rho\sigma}\nabla^{\mu}\left(G_{5}\phi_{\rho\sigma}\right) -\frac{1}{2}G_{5}\phi^{\mu\nu}\nabla_{\nu}\mathfrak{h}+\frac{1}{2}\mathfrak{h} \nabla_{\nu}\left(G_{5}\phi^{\mu\nu}\right)\] \[-\frac{1}{2}G_{5}\square\phi\nabla_{\rho}\mathfrak{h}^{\rho\mu}+ \frac{1}{2}\mathfrak{h}^{\rho\mu}\nabla_{\rho}\left(G_{5}\square\phi\right)+ \frac{1}{2}G_{5}\square\phi\nabla^{\mu}\mathfrak{h}-\frac{1}{2}\mathfrak{h} \nabla^{\mu}\left(G_{5}\square\phi\right)-G_{5}\mathfrak{h}_{\rho\sigma}G^{ \mu\rho}\phi^{\sigma}+\frac{1}{2}G_{5}\mathfrak{h}_{\rho\sigma}G^{\rho\sigma} \phi^{\mu}\] \[-\delta\phi G^{\mu\nu}\nabla_{\nu}G_{5}+G_{5}G^{\mu\nu}\nabla_{\nu }\delta\phi-\delta\phi G_{5X}G^{\rho\sigma}\phi_{\rho\sigma}\phi^{\mu},\]
where we have defined the variation of the metric tensor with respect to the independent integration constants by
\[\mathfrak{h}_{\mu\nu}=\delta g_{\mu\nu},\qquad\mathfrak{h}^{\mu\nu}=g^{\mu\rho} g^{\nu\sigma}\mathfrak{h}_{\rho\sigma},\qquad\mathfrak{h}=g^{\rho\sigma} \mathfrak{h}_{\rho\sigma}. \tag{13}\]
We also define the dual 3-form to \(J^{\mu}\) by
\[\Theta_{\alpha\beta\gamma} := J^{\mu}\varepsilon_{\mu\alpha\beta\gamma}=\varepsilon_{\alpha \beta\mu\gamma}J^{\mu}=\sum_{i=2}^{5}\varepsilon_{\alpha\beta\mu\gamma}J^{\mu}_{ (i)}. \tag{14}\]
Since the Horndeski theories (2) with Eqs. (3)-(6) are invariant under the four-dimensional diffeomorphism transformation, \(x^{\mu}\to x^{\mu}+\xi^{\mu}(x^{\mu})\), there exists the associated Noether charge potential. Under the diffeomorphism transformation, the variations of the metric and scalar field are, respectively, given by
\[\mathfrak{h}^{(\xi)}_{\mu\nu} = \delta_{\xi}g_{\mu\nu}:=\mathcal{L}_{\xi}g_{\mu\nu}=2\nabla_{(\mu} \xi_{\nu)},\qquad\delta_{\xi}\phi:=\mathcal{L}_{\xi}\phi=\xi^{\mu}\phi_{\mu}, \tag{15}\]
and with use of the on-shell gravitational equations of motion \(E_{\mu\nu}=0\), \(J^{\mu}_{(\xi)}\) can be written in terms of the total derivative of the Noether charge potential \(K^{\mu\nu}_{(\xi)}\), i.e.,
\[J^{\mu}_{(\xi)}-\xi^{\mu}\mathcal{L}=2\nabla_{\nu}K^{[\nu\mu]}_{(\xi)}=2\sum_{i=2 }^{5}\nabla_{\nu}K^{[\nu\mu]}_{(i)(\xi)}, \tag{16}\]
where each individual contribution is given by
\[K^{\mu\nu}_{(2)(\xi)} = 0, \tag{17}\] \[K^{\mu\nu}_{(3)(\xi)} = -G_{3}\xi^{\mu}\phi^{\nu},\] (18) \[K^{\mu\nu}_{(4)(\xi)} = 2G_{4X}\left[\square\phi\xi^{\mu}\phi^{\nu}-\xi_{\sigma}\phi^{\sigma \mu}\phi^{\nu}\right]+2\xi^{\mu}\nabla^{\nu}G_{4}+G_{4}\nabla^{\mu}\xi^{\nu},\] (19) \[K^{\mu\nu}_{(5)(\xi)} = -\frac{1}{2}G_{5X}\left[\left(\square\phi^{2}-\phi^{\alpha\beta} \phi_{\alpha\beta}\right)\xi^{\mu}\phi^{\nu}+2\left(\xi^{\rho}\phi_{\rho\sigma}- \square\phi\xi_{\sigma}\right)\phi^{\sigma\mu}\phi^{\nu}\right]\] (20) \[+\xi^{\mu}\nabla_{\sigma}\left(\phi^{\nu\sigma}G_{5}\right)-\xi_{ \sigma}\nabla^{\mu}\left(\phi^{\nu\sigma}G_{5}\right)-\xi^{\mu}\nabla^{\nu} \left(G_{5}\square\phi\right)+\frac{1}{2}G_{5}\left(2\xi_{\sigma}G^{\sigma\mu} \phi^{\nu}-2(\nabla_{\sigma}\xi^{\mu})\phi^{\nu\sigma}-\square\phi\nabla^{\mu} \xi^{\nu}\right).\]
We then define the dual 2-form of the Noether charge potential \(K^{\mu\nu}_{(\xi)}\)[80]
\[Q_{(\xi)\alpha\beta}:=-\epsilon_{\alpha\beta\mu\nu}K^{\mu\nu}_{(\xi)}=\sum_{i=2}^{5}Q ^{(i)}_{(\xi)\alpha\beta}. \tag{21}\]
We also define the 2-form tensor in which the first index of \(\Theta_{\nu\alpha\beta}\) defined in Eq. (14) is contracted with the infinitesimal diffeomorphism transformation \(\xi^{\nu}\), by
\[i_{\xi}\Theta_{\alpha\beta}\ :=\ \xi^{\nu}\Theta_{\nu\alpha\beta}=-\varepsilon_{ \alpha\mu\beta\nu}J^{\mu}\xi^{\nu}=\varepsilon_{\alpha\beta\mu\nu}J^{\mu}\xi^ {\nu}. \tag{22}\]
We now consider the variation of the dual Noether charge potential with respect to the physical parameters subtracted by Eq. (22)
\[\delta Q_{(\xi)\alpha\beta}-i_{\xi}\Theta_{\alpha\beta}=-\left( \delta\left(\varepsilon_{\alpha\beta\mu\nu}K^{\mu\nu}_{(\xi)}\right)+ \varepsilon_{\alpha\beta\mu\nu}J^{\mu}\xi^{\nu}\right)=-\sum_{i=2}^{5}\left( \delta\left(\varepsilon_{\alpha\beta\mu\nu}K^{\mu\nu}_{(i)(\xi)}\right)+ \varepsilon_{\alpha\beta\mu\nu}J^{\mu}_{(i)}\xi^{\nu}\right). \tag{23}\]
The integration of Eq. (23) on the boundaries of the Cauchy surface gives rise to the variation of the Hamiltonian [76; 79].
### Static and spherically symmetric black hole solutions
We consider the static and spherically symmetric solutions whose metric is written by
\[ds^{2}\ =\ -h(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}\gamma_{ab}d\theta^{a}d \theta^{b}, \tag{24}\]
where \(t\) and \(r\) are the temporal and radial coordinates, and \(\gamma_{ab}d\theta^{a}d\theta^{b}:=d\theta^{2}+\sin^{2}\theta d\varphi^{2}\) represents the metric of the unit two-sphere. We assume that the spacetime contains the event horizon at \(r=r_{g}\) where
\[h(r_{g})=f(r_{g})=0,\qquad\lim_{r\to r_{g}}\frac{f(r)}{h(r)}=\text{const}. \tag{25}\]
In the case that \(h(r)\) and \(f(r)\) have several roots, we assume that \(r_{g}\) corresponds to the largest positive root, and in the entire domain outside the event horizon \(r_{g}<r<\infty\) the two metric functions \(f(r)\) and \(h(r)\) are regular and positive. For the scalar field, we will consider two ansätze: the static ansatz (31) and the ansatz with linear time dependence (35), where the latter can be applied only to the shift-symmetric Horndeski theories. In this background (24), we assume that \(\xi^{\mu}\) corresponds to the timelike Killing vector field, \(\xi^{\mu}=(1,0,0,0)\).
The variations of the metric and scalar field of a given BH solution can be written in terms of those of the integration constants
\[\mathfrak{h}_{tt}=-\delta h=-\sum_{j}\frac{\partial h}{\partial c _{j}}\delta c_{j},\quad\mathfrak{h}_{rr}=-\frac{\delta f}{f^{2}}=-\frac{1}{f^{ 2}}\sum_{j}\frac{\partial f}{\partial c_{j}}\delta c_{j},\quad\mathfrak{h}_{ ab}=0,\quad\delta\phi=\sum_{j}\frac{\partial\phi}{\partial c_{j}}\delta c_{j}, \tag{26}\]
where \(c_{j}\)'s are integration constants of the BH solutions, for instance, the position of the event horizon \(r_{g}\).
As shown in Refs. [76; 79], with use of Eq. (23), the variation of the Hamiltonian with respect to the integration constants in a specific solution is given by the contributions from the boundaries, i.e., the horizon \(r\to r_{g}\) and infinity \(r\to\infty\),
\[\delta\mathcal{H} :=\ \delta\mathcal{H}_{\infty}-\delta\mathcal{H}_{H} \tag{27}\] \[=\] \[= -\int d\Omega\sum_{i=2}^{5}\left(\delta\left(r^{2}\sqrt{\frac{h}{ f}}K^{[tr]}_{(i)(\xi)}\right)+r^{2}\sqrt{\frac{h}{f}}J^{[t}\xi^{r]}\right)\Big{|}_{r \to\infty}\] \[+\int d\Omega\sum_{i=2}^{5}\left(\delta\left(r^{2}\sqrt{\frac{h}{ f}}K^{[tr]}_{(i)(\xi)}\right)+r^{2}\sqrt{\frac{h}{f}}J^{[t}_{(i)}\xi^{r]} \right)\Big{|}_{r\to r_{g}},\]
where \(d\Omega:=\sin\theta d\theta d\varphi\) and the subscript '\(H\)' represents the quantities associated with the horizon. The variations of the Hamiltonian at infinity and on the horizon can be identified with the variations of the total mass of the system \(M_{\rm H}\) and of the BH entropy \(S_{\rm H}\) in the Horndeski theories, respectively, as
\[\delta\mathcal{H}_{\infty}=\delta M_{\rm H},\qquad\delta\mathcal{H}_{H}=T_{ \rm H(H)}\delta S_{\rm H}, \tag{28}\]
where \(T_{\mathsf{H}(\mathrm{H})}\) represents the Hawking temperature of the given BH solution
\[T_{\mathsf{H}(\mathrm{H})}:=\frac{\sqrt{h^{\prime}(r_{g})f^{\prime}(r_{g})}}{4 \pi}. \tag{29}\]
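In numerical applications, Eq. (29) amounts to the following one-line helper; this is a trivial illustrative sketch in which the near-horizon derivatives are assumed to be known.

```python
# Eq. (29): Hawking temperature from the near-horizon metric derivatives h'(r_g), f'(r_g).
import math

def hawking_temperature(hprime_rg, fprime_rg):
    """T_H = sqrt(h'(r_g) * f'(r_g)) / (4*pi)."""
    return math.sqrt(hprime_rg * fprime_rg) / (4.0 * math.pi)
```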
The conservation of the total Hamiltonian, \(\delta\mathcal{H}=0\), reproduces the first law of the BH thermodynamics in the Horndeski theories
\[T_{\mathsf{H}(\mathrm{H})}\delta S_{\mathrm{H}}=\delta M_{\mathrm{H}}. \tag{30}\]
We note that in some classes of the Horndeski theories GWs may propagate with speeds different from the speed of light. In such a case, it has been argued that the Hawking temperature should be evaluated on the horizon of the effective metric for GWs, which is disformally related to the original metric \(g_{\mu\nu}\) [78]. Here, we choose the surface gravity of the original metric \(g_{\mu\nu}\) as the Hawking temperature (29), as in the case of GR. The first reason is that photons and other massless particles produced in the Hawking evaporation would propagate along the light cones of the original metric \(g_{\mu\nu}\). The second reason is that there is no unique choice of the frame in which GWs travel with the speed of light. In particular, a conformal transformation does not modify the speeds of GWs, but red-shifts or blue-shifts the Hawking temperature.
#### ii.2.1 The case of the static scalar field
We now compute the integrand of Eq. (27). First, we focus on the solution with the static scalar field
\[\phi=\psi(r). \tag{31}\]
Under the variation (26) the integrand of the variation of the Hamiltonian (27) is given by
\[-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2 }\sqrt{\frac{h}{f}}J^{[t}\xi^{r]} \tag{32}\] \[= 2r^{2}\sqrt{\frac{h}{f}}\Big{\{}-\frac{1}{2}f\psi^{\prime}\delta \psi G_{2X}+f\psi^{\prime}\delta\psi G_{3\phi}+\frac{f\psi^{\prime 2}}{4rh} \left[f(4h+rh^{\prime})\delta\psi-rh(2f\delta\psi^{\prime}+\psi^{\prime} \delta f)\right]G_{3X}\] \[-\frac{\delta f}{r}G_{4}+\frac{1}{2}\left[f\left(-2\delta\psi^{ \prime}+\frac{h^{\prime}}{h}\delta\psi\right)-\psi^{\prime}\delta f\right]G_{ 4\phi}+\frac{f\psi^{\prime}}{r^{2}h}\left[\left(\left(f-1\right)h+rfh^{\prime }\right)\delta\psi-2rh(f\delta\psi^{\prime}+\psi^{\prime}\delta f)\right]G_{ 4X}\] \[-f\psi^{\prime}\delta\psi G_{4\phi\phi}+\frac{f\psi^{\prime 2}}{2rh} \left[-f(8h+rh^{\prime})\delta\psi+rh(2f\delta\psi^{\prime}+\psi^{\prime} \delta f)\right]G_{4\phi X}\] \[+\frac{f^{2}\psi^{\prime 3}}{r^{2}h}\left[-f(h+rh^{\prime}) \delta\psi+rh(2f\delta\psi^{\prime}+\psi^{\prime}\delta f)\right]G_{4XX}+ \frac{f\psi^{\prime}}{2r^{2}h}\left[-2\left(\left(f-1\right)h+rfh^{\prime} \right)\delta\psi+rh\left(4f\delta\psi^{\prime}+3\psi^{\prime}\delta f\right) \right]G_{5\phi}\] \[+\frac{f\psi^{\prime 2}}{4r^{2}h}\left[f^{2}(6h\delta\psi^{\prime}-3h^{ \prime}\delta\psi)-h\psi^{\prime}\delta f+f\left(h^{\prime}\delta\psi+h(-2 \delta\psi^{\prime}+5\psi^{\prime}\delta f)\right)\right]G_{5X}+\frac{f^{2} \psi^{\prime 2}\delta\psi}{r}G_{5\phi\phi}\] \[+\frac{f^{2}\psi^{\prime 3}}{2r^{2}h}\left[f(2h+rh^{\prime}) \delta\psi-rh(2f\delta\psi^{\prime}+\psi^{\prime}\delta f)\right]G_{5\phi X}- \frac{f^{3}\psi^{\prime 4}}{4r^{2}h}\left[f(2h\delta\psi^{\prime}-h^{\prime} \delta\psi)+h\psi^{\prime}\delta f\right]G_{5XX}\Big{\}}.\]
#### ii.2.2 The case of the scalar field with linear time dependence
Second, we consider the shift-symmetric Horndeski theories invariant under the constant shift \(\phi\rightarrow\phi+c\) with \(c\) being constant, which correspond to the theories without the dependence on \(\phi\) in the coupling functions:
\[G_{2}=G_{2}(X),\qquad G_{3}=G_{3}(X),\qquad G_{4}=G_{4}(X),\qquad G_{5}=G_{5}( X). \tag{33}\]
There is the Noether current associated with the shift symmetry
\[\mathcal{J}^{\mu}=\frac{1}{\sqrt{-g}}\left[\frac{\partial\mathcal{L}}{\partial \phi_{\mu}}-\nabla_{\nu}\left(\frac{\partial\mathcal{L}}{\partial\phi_{\mu\nu }}\right)\right]. \tag{34}\]
The theory (33) admits the static and spherically symmetric BH solutions with the linearly time-dependent scalar field [38; 39; 40; 41]1.
Footnote 1: Because of the linear time dependence, the ansatz for the scalar field (35) does not respect the symmetry of the spacetime, \(\mathcal{L}_{\xi}\phi\neq 0\), where \(\xi^{\mu}\) corresponds to the timelike Killing vector, while \(\mathcal{L}_{\xi}g_{\mu\nu}=0\). However, in deriving the variation of the Hamiltonian (27), the symmetry \(\mathcal{L}_{\xi}\phi=0\) is not imposed [79], and hence our formulation can be applied to the solutions with the scalar field of Eq. (35).
\[\phi=qt+\psi(r). \tag{35}\]
For the metric ansatz (24), the radial component of the Noether current associated with the shift symmetry is given by
\[\mathcal{J}^{r} = -f\psi^{\prime}G_{2X}+\frac{f}{2rh^{2}}\left[-q^{2}rh^{\prime}+fh \left(4h+rh^{\prime}\right)\psi^{\prime 2}\right]G_{3X} \tag{36}\] \[+\frac{2f\phi^{\prime}}{r^{2}h}\left[(f-1)h+rfh^{\prime}\right] G_{4X}+\frac{2f^{2}\psi^{\prime}}{r^{2}h^{2}}\left[q^{2}rh^{\prime}-fh\left(h+ rh^{\prime}\right)\psi^{\prime 2}\right]G_{4XX}\] \[+ \frac{fh^{\prime}}{2r^{2}h^{2}}\left[q^{2}(f-1)+(1-3f)fh\psi^{ \prime 2}\right]G_{5X}+\frac{f^{3}h^{\prime}\psi^{\prime 2}}{2r^{2}h^{2}}\left(-q^{2}+ fh\psi^{\prime 2}\right)G_{5XX}.\]
For the given ansatz of the metric and scalar field, Eqs. (24) and (35), we can show that the \((t,r)\)-component of the metric equations is proportional to \(\mathcal{J}^{r}\)[38; 39; 40], and hence we have to impose
\[\mathcal{J}^{r}=0. \tag{37}\]
The variation of the scalar field is given by
\[\delta\phi=\delta\psi(r). \tag{38}\]
We note that since \(q\) is not an integration constant but a constant appearing in the ansatz of the scalar field compatible with the shift symmetry, we do not need to take the variation of \(q\) into consideration. Under the variation (26), the integrand of the variation of the Hamiltonian (27) is given by
\[-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2} \sqrt{\frac{h}{f}}J^{[t}\xi^{r]} \tag{39}\] \[= 2r^{2}\sqrt{\frac{h}{f}}\Big{\{}-\frac{f\psi^{\prime}}{4h^{2}} \left[q^{2}\delta h+h^{2}\left(2f\psi^{\prime}\delta\psi^{\prime}+\psi^{ \prime 2}\delta f\right)\right]G_{3X}\] \[-\frac{\delta f}{r}G_{4}-\frac{2f\psi^{\prime}}{r}\left(f\delta \psi^{\prime}+\psi^{\prime}\delta f\right)G_{4X}+\frac{f^{2}\psi^{\prime 2}}{ rh^{2}}\left\{q^{2}\delta h+h^{2}\left(2f\psi^{\prime}\delta\psi^{\prime}+ \psi^{\prime 2}\delta f\right)\right\}G_{4XX}\] \[+\frac{f\psi^{\prime}}{4r^{2}h^{2}}\left\{q^{2}(f-1)\delta h+h^{ 2}\big{(}6f^{2}\psi^{\prime}\delta\psi^{\prime}-\psi^{\prime 2}\delta f+f\psi^{ \prime}(-2\delta\psi^{\prime}+5\psi^{\prime}\delta f)\big{)}\right\}G_{5X}\] \[-\frac{f^{3}\psi^{\prime 3}}{4r^{2}h^{2}}\left\{q^{2}\delta h+h^{2} \left(2f\psi^{\prime}\delta\psi^{\prime}+\psi^{\prime 2}\delta f\right)\right\}G_{5XX}+ \frac{1}{2}\delta\psi\mathcal{J}^{r}\Big{\}}.\]
We note that with the condition (37), the terms which are explicitly proportional to \(\delta\psi\) vanish.
## III Black holes with the static scalar field
In this section, we focus on several classes of the Horndeski theories giving rise to the BH solutions with the static scalar field (31).
### GR
For GR with the cosmological constant \(\Lambda\)
\[G_{2}=-\frac{1}{8\pi G}\Lambda,\qquad G_{4}=\frac{1}{16\pi G},\qquad G_{3}=G _{5}=0, \tag{40}\]
Eq. (32) reduces to
\[-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2}\sqrt{\frac{h}{f} }J^{[t}\xi^{r]} = -r^{2}\sqrt{\frac{h}{f}}\frac{\delta f}{8\pi Gr}. \tag{41}\]
In GR, the Schwarzschild-(A)dS solutions given by
\[f(r)=h(r)=1-\frac{r_{g}}{3r}\left(3-r_{g}^{2}\Lambda\right)-\frac{\Lambda}{3}r ^{2},\qquad\psi(r)=0. \tag{42}\]
are the unique static and spherically symmetric BH solutions. Since \(r_{g}\) is the only integration constant, using Eq. (26), \(\delta f=\frac{\partial f}{\partial r_{g}}\delta r_{g}\) and \(\delta h=\frac{\partial h}{\partial r_{g}}\delta r_{g}\). Evaluating Eq. (28) with use of Eq. (27), we obtain the first law of thermodynamics
\[T_{\mathsf{H}(\mathrm{GR})}\delta S_{\mathrm{GR}}=\delta M_{\mathrm{GR}}=\frac {1}{2G}\left(1-r_{g}^{2}\Lambda\right)\delta r_{g}, \tag{43}\]
where the Hawking temperature (29) is given by \(T_{\mathsf{H}(\mathrm{GR})}=T_{0}(1-r_{g}^{2}\Lambda)\). Here we introduce the Hawking temperature of Schwarzschild BH in GR defined by
\[T_{0}:=\frac{1}{4\pi r_{g}}. \tag{44}\]
We shall also use the mass and BH entropy of Schwarzschild BH in GR given by
\[M_{0} := \frac{r_{g}}{2G}\,, \tag{45}\] \[S_{0} := \frac{\pi r_{g}^{2}}{G}\,, \tag{46}\]
as reference.
Thus, \(\delta S_{\mathrm{GR}}=\frac{2\pi r_{g}}{G}\delta r_{g}=\frac{1}{4G}\delta A_ {H}\), where \(A_{H}:=4\pi r_{g}^{2}\) is the area of the BH event horizon, and hence by integrating it we recover the area law
\[S_{\mathrm{GR}}=S_{0}\,, \tag{47}\]
where we set the integration constant so that we have the vanishing BH entropy \(S_{\mathrm{GR}}\to 0\) in the limit of the vanishing horizon radius \(r_{g}\to 0\). The mass of the system is also given by
\[M_{\mathrm{GR}}=M_{0}\left(1-\frac{1}{3}\Lambda r_{g}^{2}\right), \tag{48}\]
which coincides with the total mass of the BH, where we set the integration constant so that we have the vanishing mass \(M_{\mathrm{GR}}\to 0\) in the limit of the vanishing horizon radius \(r_{g}\to 0\).
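The quantities above can be cross-checked symbolically; a minimal sympy sketch (illustrative only, with \(h=f\) as in Eq. (42)) verifying \(T_{\mathsf{H}(\mathrm{GR})}\,\partial_{r_{g}}S_{\mathrm{GR}}=\partial_{r_{g}}M_{\mathrm{GR}}\) reads:

```python
# Symbolic check that the Schwarzschild-(A)dS quantities satisfy the first law.
import sympy as sp

r, rg, G, Lam = sp.symbols('r r_g G Lambda', positive=True)

f = 1 - rg*(3 - rg**2*Lam)/(3*r) - Lam*r**2/3     # metric function, Eq. (42)
T_H = sp.diff(f, r).subs(r, rg)/(4*sp.pi)          # Hawking temperature, Eq. (29) with h = f
S = sp.pi*rg**2/G                                  # area law, Eq. (47)
M = rg/(2*G)*(1 - Lam*rg**2/3)                     # mass, Eq. (48)

print(sp.simplify(T_H*sp.diff(S, rg) - sp.diff(M, rg)))   # -> 0
```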
### Scalar-tensor theory with nonminimal coupling
As the next simplest example, we consider the ST theory with nonminimal coupling to the scalar curvature
\[\mathcal{L}=\omega(\phi)\left(R-2V(\phi)\right)+\eta X, \tag{49}\]
which is equivalent to the Horndeski theory with
\[G_{2} = \eta X-2\omega(\phi)V(\phi),\qquad G_{4}=\omega(\phi),\qquad G_{3} =G_{5}=0, \tag{50}\]
where \(\omega(\phi)\) and \(V(\phi)\) are the nonminimal coupling function and the potential of the scalar field, respectively. Eq. (32) reduces to
\[-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2}\sqrt{\frac{ h}{f}}J^{[t}\xi^{r]}\]
\[= 2r^{2}\sqrt{\frac{h}{f}}\Big{\{}\left(-\frac{\omega(\psi)}{r}-\frac{ \psi^{\prime}}{2}\omega^{(1)}(\psi)\right)\delta f-\frac{f}{2h}\left[h^{\prime} \omega^{(1)}(\psi)-h\psi^{\prime}(\eta+2\omega^{(2)}(\psi))\right]\delta\psi-f \omega^{(1)}(\psi)\delta\psi^{\prime}\Big{\}}, \tag{51}\]
where \(\omega^{(n)}(\phi)\) and \(V^{(n)}(\phi)\) denote the \(n(=1,2,\cdots)\)-th order derivatives of \(\omega(\phi)\) and \(V(\phi)\) with respect to \(\phi\). We assume that \(V(\phi)\) and \(\omega(\phi)\) have their local minima at \(\phi=0\), i.e., \(V^{(1)}(0)=0\) and \(\omega^{(1)}(0)=0\). Note that even if \(V^{(1)}(\phi_{0})=0\) and \(\omega^{(1)}(\phi_{0})=0\) for an arbitrary constant \(\phi_{0}\), we can always make \(\phi_{0}=0\) after a suitable shift of \(\phi\). There is the Schwarzschild-(A)dS solution with the trivial scalar field
\[f(r)=h(r)=1-\frac{r_{g}}{3r}\left(3-r_{g}^{2}V(0)\right)-\frac{V(0)}{3}r^{2}, \qquad\psi(r)=0. \tag{52}\]
Since \(r_{g}\) is the only integration constant, using Eq. (26), \(\delta f=\frac{\partial f}{\partial r_{g}}\delta r_{g}\) and \(\delta h=\frac{\partial h}{\partial r_{g}}\delta r_{g}\). Evaluating Eq. (28) with use of Eq. (27), we obtain the first law of BH thermodynamics
\[T_{\rm H(H)}\delta S_{\rm H} = \delta M_{\rm H}=\frac{1}{2G}\left(1-r_{g}^{2}V(0)\right)\delta r _{g}, \tag{53}\]
where \(\omega(0)=1/(16\pi G)\) with \(G\) being the gravitational constant, and the Hawking temperature (29) is given by \(T_{\rm H(H)}=T_{0}(1-r_{g}^{2}V(0))\). Thus, \(\delta S_{\rm H}=\frac{2\pi r_{g}}{G}\delta r_{g}=\frac{1}{4G}\delta A_{H}\), where \(A_{H}=4\pi r_{g}^{2}\) is the area of the BH event horizon, and hence by integrating it we recover the area law (47). The mass of the system is given by
\[M_{\rm H}=M_{0}\left(1-\frac{1}{3}V(0)r_{g}^{2}\right), \tag{54}\]
which coincides with the mass of the BH.
### The Einstein scalar-Gauss-Bonnet theory
As one of the nontrivial examples, we consider the Einstein-scalar-GB (EsGB) theory
\[{\cal L}=\frac{1}{16\pi G}R+\eta X+k(\phi)\left(R^{2}-4R^{\alpha\beta}R_{ \alpha\beta}+R^{\alpha\beta\mu\nu}R_{\alpha\beta\mu\nu}\right), \tag{55}\]
which is equivalent to the class of the Horndeski theories with
\[G_{2} = \eta X+8k^{(4)}(\phi)X^{2}\left(3-\ln X\right),\quad G_{3}=4k^{( 3)}(\phi)X\left(7-3\ln X\right),\] \[G_{4} = \frac{1}{16\pi G}+4k^{(2)}(\phi)X(2-\ln X),\quad G_{5}=-4k^{(1)}( \phi)\ln X, \tag{56}\]
where \(k(\phi)\) is the coupling function, and \(k^{(n)}(\phi)\) denotes the \(n\)\((=1,2,\cdots)\)-th order derivative of \(k(\phi)\) with respect to \(\phi\). This theory has been applied, for instance, to the models of spontaneous scalarization of BHs [53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68]. Eq. (32) reduces to
\[-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2} \sqrt{\frac{h}{f}}J^{[t}\xi^{r]} \tag{57}\] \[= r^{2}\sqrt{\frac{h}{f}}\Big{\{}-\frac{r+32\pi G(1-3f)k^{(1)}(\psi )\psi^{\prime}}{8\pi Gr^{2}}\delta f\] \[-\frac{f}{r^{2}h}\left[4(f-1)h^{\prime}k^{(1)}(\psi)+h\psi^{ \prime}\left(r^{2}\eta-8(f-1)k^{(2)}(\psi)\right)\right]\delta\psi+\frac{8f(f -1)}{r^{2}}k^{(1)}(\psi)\delta\psi^{\prime}\Big{\}}.\]
In the case that the scalar field is regular at the event horizon \(r=r_{g}\) and the solutions can be expanded in the vicinity of \(r=r_{g}\) as
\[h(r) = h_{1}(r_{g})\left(r-r_{g}\right)+h_{2}(r_{g})\left(r-r_{g}\right) ^{2}+{\cal O}\left((r-r_{g})^{3}\right), \tag{58}\] \[f(r) = f_{1}(r_{g})\left(r-r_{g}\right)+f_{2}(r_{g})\left(r-r_{g}\right) ^{2}+{\cal O}\left((r-r_{g})^{3}\right),\] (59) \[\psi(r) = \psi_{H}(r_{g})+\psi_{1}(r_{g})\left(r-r_{g}\right)+\psi_{2}(r_{g} )\left(r-r_{g}\right)^{2}+{\cal O}\left((r-r_{g})^{3}\right), \tag{60}\]
where the coefficients \(h_{i}(r_{g})\), \(f_{i}(r_{g})\), and \(\psi_{i}(r_{g})\) (\(i=1,2,3\cdots\)) are in general functions of \(r_{g}\), and \(\psi_{H}(r_{g})\) represents the amplitude at the horizon which is also a function of \(r_{g}\), on the horizon \(r=r_{g}\) Eq. (57) reduces to
\[\left(-\delta\left(r^{2}\sqrt{\frac{h}{f}}K_{(\xi)}^{[tr]}\right) -r^{2}\sqrt{\frac{h}{f}}J^{[t}\xi^{r]}\right)_{r\to r_{g}}=\frac{\sqrt{f_{1}(r _{g})h_{1}(r_{g})}}{8\pi G}\left(r_{g}+32\pi Gk^{(1)}[\psi_{H}(r_{g})]\frac{ \partial\psi_{H}(r_{g})}{\partial r_{g}}\right)\delta r_{g}. \tag{61}\]
Since the Hawking temperature (29) is given by \(T_{\rm H(H)}=\frac{\sqrt{h^{\prime}(r_{g})f^{\prime}(r_{g})}}{4\pi}=\frac{ \sqrt{h_{1}(r_{g})f_{1}(r_{g})}}{4\pi}\), the differential of the BH entropy is given by
\[T_{\rm H(H)}\delta S_{\rm H} \tag{62}\] \[= \int d\Omega\left(-\delta\left(r^{2}\sqrt{\frac{h}{f}}K_{(\xi)}^ {[tr]}\right)-r^{2}\sqrt{\frac{h}{f}}J^{[t}\xi^{r]}\right)_{r\to r_{g}}= \frac{\sqrt{f_{1}(r_{g})h_{1}(r_{g})}}{2G}\left(r_{g}+32\pi Gk^{(1)}[\psi_{H}( r_{g})]\frac{\partial\psi_{H}(r_{g})}{\partial r_{g}}\right)\delta r_{g},\]
and hence
\[\delta S_{\rm H}=\frac{2\pi}{G}\left(r_{g}+32\pi Gk^{(1)}[\psi_{H}(r_{g})] \frac{\partial\psi_{H}(r_{g})}{\partial r_{g}}\right)\delta r_{g}. \tag{63}\]
By integrating it, we obtain the BH entropy
\[S_{\rm H}=\frac{\pi}{G}\left(r_{g}^{2}+64\pi Gk[\psi_{H}(r_{g})]\right)=S_{0} \left(1+\frac{64\pi G}{r_{g}^{2}}k[\psi_{H}(r_{g})]\right), \tag{64}\]
which agrees with the result obtained by applying the standard Wald entropy formula (see e.g., Refs. [63; 54]). Here, we fix the integration constant such that \(\lim_{r_{g}\to 0}k[\psi_{H}(r_{g})]=0\). We should note that there is an ambiguity in the definition of the coupling function \(k(\phi)\) up to an arbitrary additive constant. We can use this freedom to satisfy the above condition.
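As a consistency check, the following sympy sketch verifies that differentiating Eq. (64) with respect to \(r_{g}\) reproduces the differential (63); the quadratic coupling and the horizon profile \(\psi_{H}(r_{g})\) chosen below are arbitrary illustrative assumptions.

```python
# Check that d S_H / d r_g from Eq. (64) reproduces the differential (63).
import sympy as sp

rg, G, alpha = sp.symbols('r_g G alpha', positive=True)

psi_H = rg**2/(1 + rg**2)        # illustrative horizon value psi_H(r_g)
k = alpha*psi_H**2               # illustrative coupling k(phi) = alpha*phi**2
k_prime = 2*alpha*psi_H          # k^{(1)} evaluated at psi_H(r_g)

S_H = sp.pi/G*(rg**2 + 64*sp.pi*G*k)                                    # Eq. (64)
dS_expected = 2*sp.pi/G*(rg + 32*sp.pi*G*k_prime*sp.diff(psi_H, rg))    # Eq. (63)

print(sp.simplify(sp.diff(S_H, rg) - dS_expected))   # -> 0
```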
Thus, the thermodynamic properties of scalarized BHs also remain the same as those argued in the literature [63; 54]. We emphasize that although the actions (55) and (56) are equivalent up to the difference in total derivative terms, the dependence of the actions on the spacetime curvature appears to be different. Nevertheless, the results here indicate that even though the higher-derivative interactions of the scalar field and the nonminimal derivative couplings to the spacetime curvature are present in a description of the theory, by following the original approach of Iyer and Wald and computing the Noether charge potential associated with the diffeomorphism invariance, we can reproduce results that are independent of the apparent difference in the action by total derivative terms. We would like to emphasize that not only in the case of the EsGB theories but also in the case of other classes of the Horndeski theories, we should obtain the same value of the BH entropy from two different descriptions of the same theory whose actions differ by total derivative terms. For instance, we have explicitly confirmed that the same entropy of a BH solution is obtained in the two different descriptions of the same class of the Horndeski theories given by \((G_{4},G_{5})=\left(\frac{1}{16\pi G}+c^{\prime}X,0\right)\) and \((G_{4},G_{5})=\left(\frac{1}{16\pi G},-c^{\prime}\phi\right)\) with \(c^{\prime}\) being constant, which are equivalent to each other up to the total derivative terms and also equivalent to the scalar-tensor model with the nonminimal derivative coupling to the Einstein tensor \(c^{\prime}G^{\mu\nu}\phi_{\mu}\phi_{\nu}\).
Moreover, since the original action (55) does not include the higher-derivative interactions and the nonminimal derivative couplings of the scalar field to the spacetime curvature, this description may be regarded as the 'minimal' one. Thus, in general we expect that the thermodynamic properties obtained by applying the standard Wald entropy formula to a 'minimal' description could be obtained from an equivalent nonminimal description of the same theory including the higher-derivative interactions of the scalar field and/or the nonminimal derivative couplings to the spacetime curvature, by applying the general scheme employed in this work, originally developed by Iyer and Wald.
#### iv.2.1 non-shift-symmetric EsGB theory
By solving the set of the equations of motion near the horizon \(r=r_{g}\), \(\psi_{1}\) in Eq. (60) can be found as [50; 51; 52]
\[\psi_{1}=\frac{1}{64\pi Gr_{g}k^{(1)}\left[\psi_{H}(r_{g})\right]}\left[-r_{g} ^{2}+\sqrt{r_{g}^{4}-\frac{1536\pi Gk^{(1)}\left[\psi_{H}(r_{g})\right]^{2}}{ \eta}}\right], \tag{65}\]
where we choose the branch which recovers the Schwarzschild solution in the limit of \(k^{(1)}\left[\psi_{H}(r_{g})\right]\to 0\). Thus, in order for a nontrivial BH solution to exist, we have to impose
\[r_{g}^{4}\geq\frac{1536\pi Gk^{(1)}\left[\psi_{H}(r_{g})\right]^{2}}{\eta}. \tag{66}\]
Let us consider the limit of the absence of the BH horizon \(r_{g}\to 0\). Assuming the regularity of \(\frac{\partial\psi_{H}(r_{g})}{\partial r_{g}}\) in the limit of \(r_{g}\to 0\), i.e., \(\psi_{H}(r_{g})\) does not blow up as \(r_{g}\to 0\), in the same limit the second term in the differential (63) vanishes faster than the first term. Hence, we obtain the vanishing entropy as the usual area law, by choosing the integration constant so that \(S_{\rm H}\to 0\) in the limit of \(r_{g}\to 0\).
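A short numerical sketch of the near-horizon coefficient (65) and the existence bound (66) is given below; the parameter values are arbitrary illustrations and are not taken from the paper.

```python
# Numerical sketch of Eq. (65) and the existence bound Eq. (66); values are illustrative.
import math

G, eta, rg = 1.0, 1.0, 2.0    # gravitational constant, kinetic coefficient, horizon radius
k1 = 0.01                     # assumed value of k^{(1)}[psi_H(r_g)] on the horizon

bound = rg**4 - 1536*math.pi*G*k1**2/eta     # must be non-negative, Eq. (66)
if bound >= 0:
    psi1 = (-rg**2 + math.sqrt(bound))/(64*math.pi*G*rg*k1)   # Eq. (65)
    print(f"psi_1 = {psi1:.6f}")
else:
    print("no regular BH solution for these parameters")

# In the weak-coupling limit k^{(1)} -> 0 this branch behaves as -12 k^{(1)}/(eta r_g^3),
# smoothly recovering the Schwarzschild solution.
print(f"leading-order estimate = {-12*k1/(eta*rg**3):.6f}")
```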
In the large distance regions \(r\rightarrow\infty\), the general vacuum solution in the EsGB theory (55) can be expanded as
\[h(r) = 1-\frac{2\mathcal{M}(r_{g})}{r}+\frac{4\pi G\eta\mathcal{M}(r_{ g})\mathcal{Q}(r_{g})^{2}}{3r^{3}}+\mathcal{O}\left(\frac{1}{r^{4}}\right), \tag{67}\] \[f(r) = 1-\frac{2\mathcal{M}(r_{g})}{r}+\frac{4\pi G\mathcal{Q}(r_{g})^ {2}}{r^{2}}+\frac{4\pi G\eta\mathcal{M}(r_{g})\mathcal{Q}(r_{g})^{2}}{r^{3}}+ \mathcal{O}\left(\frac{1}{r^{4}}\right),\] (68) \[\psi(r) = \psi_{\infty}(r_{g})+\frac{\mathcal{Q}(r_{g})}{r}+\frac{\mathcal{ M}(r_{g})\mathcal{Q}(r_{g})}{r^{2}}-\frac{\mathcal{Q}(r_{g})\left[-4\mathcal{M}(r_{ g})^{2}+2\pi G\eta\mathcal{Q}(r_{g})^{2}\right]}{3r^{3}}+\mathcal{O}\left(\frac{1}{r^{4}} \right), \tag{69}\]
where we assume that the asymptotic amplitude \(\psi_{\infty}(r_{g})\), the Arnowitt-Deser-Misner (ADM) mass \(\mathcal{M}(r_{g})\), and the scalar charge \(\mathcal{Q}(r_{g})\) are the pure functions of the horizon radius \(r_{g}\). From Eq. (57), we obtain the differential of the energy
\[\delta M=\int d\Omega\left(-\delta\left(r^{2}\sqrt{\frac{h}{f}}K_{(\xi)}^{[tr ]}\right)-r^{2}\sqrt{\frac{h}{f}}J^{[t}\xi^{r]}\right)_{r\rightarrow\infty}= \frac{1}{G}\left(\mathcal{M}^{\prime}(r_{g})+4\pi G\eta\mathcal{Q}(r_{g})\psi _{\infty}^{\prime}(r_{g})\right)\delta r_{g}. \tag{70}\]
In addition to a non-trivial hairy BH solution, we find a trivial Schwarzschild BH solution with \(\phi=\phi_{0}\) (constant) when the coupling function \(k(\phi)\) in the EsGB theory allows the existence of \(\phi_{0}\) such that \(k^{(1)}(\phi_{0})=0\). On the other hand, if \(k^{(1)}(\phi)\neq 0\) for any value of the scalar field \(\phi\), a trivial Schwarzschild spacetime is no longer a solution in the EsGB theory, and we find only a non-trivial hairy BH solution.
#### iv.3.2 shift-symmetric EsGB theory
In the shift-symmetric EsGB theories \(k(\phi)=\alpha\phi\), where \(\alpha\) is the constant, Eq. (64) reduces to
\[S_{\rm H}=\frac{\pi}{G}\left(r_{g}^{2}+64\pi G\alpha\psi_{H}\right)=S_{0} \left(1+\frac{64\pi\alpha G}{r_{g}^{2}}\psi_{H}\right). \tag{71}\]
Although even in the shift-symmetric theories the general BH solution could be expanded in the vicinity of the event horizon \(r=r_{g}\) as Eqs. (58)-(60), \(\psi_{H}\) does not have any physical meaning and hence is not a function of \(r_{g}\). Thus, we may set the second term in Eq. (71) to zero, by requiring that \(S_{\rm H}\to 0\) in the limit of \(r_{g}\to 0\). We then recover the area law \(S_{\rm H}=S_{0}\) given by Eq. (47). We note that in the shift-symmetric 4D scalar-tensor Einstein-GB theories, it was recently argued that the BH entropy is also given by the area law [83].
In the large distance regions \(r\rightarrow\infty\), in the expansion (67)-(69), \(\psi_{\infty}\) also has no physical dependence on \(r_{g}\) in the shift-symmetric theories, and hence Eq. (70) reduces to \(\delta M=\frac{\mathcal{M}^{\prime}(r_{g})}{G}\delta r_{g}\). By integrating this, \(M=\frac{\mathcal{M}(r_{g})}{G}\), namely the thermodynamic energy coincides with the ADM mass.
### The irrational coupling model
Finally, we consider the irrational coupling model
\[G_{2} = \eta X-\frac{\Lambda}{8\pi G},\qquad G_{4}=\frac{1}{16\pi G}+ \alpha(-X)^{\frac{1}{2}},\qquad G_{3}=G_{5}=0, \tag{72}\]
under which Eq. (32) reduces to
\[-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2}\sqrt{\frac{h}{f} }J^{[t}\xi^{r]}\ =\ r^{2}\sqrt{\frac{h}{f}}\Big{\{}-\frac{1}{8\pi Gr}\delta f+\mathcal{J}^{r} \delta\psi\Big{\}}. \tag{73}\]
Requiring that the radial component of the Noether current associated with the shift symmetry vanishes,
\[\mathcal{J}^{r}=-\eta f(r)\psi^{\prime}(r)+\frac{\sqrt{2f(r)}\alpha}{r^{2}}=0, \tag{74}\]
the term proportional to \(\delta\psi\) in Eq. (73) vanishes, and hence Eq. (73) reduces to \(-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2}\sqrt{\frac{h }{f}}J^{[t}\xi^{r]}=-\frac{r}{8\pi G}\delta f\). As the vacuum solution satisfying Eq. (74), there exists the exact BH solution [49]
\[f(r)=h(r)=1-\frac{8\pi G\alpha^{2}}{r^{2}\eta}-\frac{\Lambda}{3}r^{2}-\frac{1 }{r}\left(r_{g}-\frac{8\pi G\alpha^{2}}{\eta r_{g}}-\frac{\Lambda r_{g}^{3}}{ 3}\right),\qquad\psi^{\prime}(r)=\frac{\sqrt{2}\alpha}{r^{2}\eta\sqrt{f(r)}}. \tag{75}\]
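As a quick consistency check of the expressions above (our own addition, not part of the original derivation), one can verify with a few lines of computer algebra that the quoted \(\psi^{\prime}(r)\) makes the current (74) vanish and that \(f(r)\) indeed has a root at \(r=r_{g}\); a minimal sympy sketch, assuming Eqs. (74)-(75) exactly as written:

```python
# Hedged sketch (ours): check that the exact solution (75) satisfies J^r = 0 of Eq. (74)
# and that the horizon sits at r = r_g.
import sympy as sp

r, rg, G, alpha, eta = sp.symbols('r r_g G alpha eta', positive=True)
Lam = sp.symbols('Lambda', real=True)

f = 1 - 8*sp.pi*G*alpha**2/(r**2*eta) - Lam*r**2/3 \
    - (rg - 8*sp.pi*G*alpha**2/(eta*rg) - Lam*rg**3/3)/r
psi_prime = sp.sqrt(2)*alpha/(r**2*eta*sp.sqrt(f))          # psi'(r) of Eq. (75)

# Eq. (74); sqrt(2 f) is written as sqrt(2)*sqrt(f), which is valid for f > 0 outside the horizon
J_r = -eta*f*psi_prime + sp.sqrt(2)*sp.sqrt(f)*alpha/r**2
print(sp.simplify(J_r))              # -> 0
print(sp.simplify(f.subs(r, rg)))    # -> 0, i.e. f(r_g) = 0
```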
Following similar steps and evaluating Eq. (28) with use of Eq. (27), we obtain the first law of BH thermodynamics
\[T_{\rm H(H)}\delta S_{\rm H}=\delta M_{\rm H}=\frac{\delta r_{g}}{2G}\left(1-r _{g}^{2}\Lambda+\frac{8\pi G\alpha^{2}}{\eta r_{g}^{2}}\right), \tag{76}\]
where the Hawking temperature (29) of the Horndeski BHs is given by
\[T_{\rm H(H)}=T_{0}\left(1-r_{g}^{2}\Lambda+\frac{8\pi G\alpha^{2}}{\eta r_{g}^ {2}}\right). \tag{77}\]
By integrating \(\delta S_{\rm H}=2\pi\delta r_{g}/G\), we obtain the area law Eq. (47), where we set the integration constant so that we have \(S_{\rm H}\to 0\) in the limit of the vanishing horizon radius \(r_{g}\to 0\). The mass of the system is given by
\[M_{\rm H}=M_{0}\left(1-\frac{8\pi G\alpha^{2}}{\eta r_{g}^{2}}-\frac{\Lambda r _{g}^{2}}{3}\right), \tag{78}\]
which coincides with the total mass of the BH. We note that the contribution of the scalar field to the ADM mass, which corresponds to the second term in Eq. (78), is always negative, as long as the kinetic term of the scalar field has the correct sign \(\eta>0\). This indicates the onset of the ghost instability, which has been observed in the linear stability analysis of the solution (75) performed in Ref. [72].
Since the Schwarzschild-(A)dS metric with the trivial scalar field is not a solution in the theory (72), there is no counterpart with which to compare the thermodynamic quantities.
## IV Black holes with linearly time-dependent scalar field
### Shift- and reflection-symmetric theories without cosmological constant
We first focus on the subclass of the shift- and reflection-symmetric Horndeski theories, which is invariant under the transformations \(\phi\rightarrow\phi+c\) with \(c\) being a constant and \(\phi\rightarrow-\phi\), and which is explicitly given by
\[G_{2}=G_{2}(X),\qquad G_{4}=G_{4}(X),\qquad G_{3}=G_{5}=0. \tag{79}\]
We assume the static and spherically symmetric spacetime (24) and the linearly time-dependent scalar field (35). In this case, Eq. (39) reduces to
\[-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2 }\sqrt{\frac{h}{f}}J^{[t}\xi^{r]} = r^{2}\sqrt{\frac{h}{f}}\Big{\{}-\frac{2\delta f}{r}G_{4}-\frac{ 4f\psi^{\prime}}{r}\left(f\delta\psi^{\prime}+\psi^{\prime}\delta f\right)G_ {4X} \tag{80}\] \[+\frac{2f^{2}\psi^{\prime 2}}{rh^{2}}\left\{q^{2}\delta h+h^{2} \left(2f\psi^{\prime}\delta\psi^{\prime}+\psi^{\prime 2}\delta f\right)\right\}G_{4XX} \Big{\}}.\]
We focus on the stealth Schwarzschild solution [38; 39], given by
\[f=h=1-\frac{r_{g}}{r},\qquad X=\frac{q^{2}}{2},\qquad\psi(r)=2q\sqrt{r_{g}}\left[ \sqrt{r}-\sqrt{r_{g}}\mathrm{arctanh}\left(\sqrt{\frac{r_{g}}{r}}\right)\right], \tag{81}\]
which exists under the conditions
\[G_{2}\left(\frac{q^{2}}{2}\right)=G_{2X}\left(\frac{q^{2}}{2}\right)=0. \tag{82}\]
Evaluating Eq. (28) with use of Eq. (27), we obtain the first law of thermodynamics
\[T_{\mathsf{H}(\mathrm{H})}\delta S_{\mathrm{H}}=\delta M_{\mathrm{H}}=8\pi \left(G_{4}\left(\frac{q^{2}}{2}\right)-q^{2}G_{4X}\left(\frac{q^{2}}{2}\right) \right)\delta r_{g}. \tag{83}\]
Since the Hawking temperature (29) is given by \(T_{\mathsf{H}(\mathrm{H})}=T_{0}=\frac{1}{4\pi r_{g}}\), by integrating \(\delta S_{\mathrm{H}}\) with respect to \(r_{g}\), we obtain the BH entropy
\[S_{\mathrm{H}}=16\pi GS_{0}\left(G_{4}\left(\frac{q^{2}}{2}\right)-q^{2}G_{4X }\left(\frac{q^{2}}{2}\right)\right) \tag{84}\]
where \(S_{0}\) is defined in Eq. (46), and we set the integration constant so that we have the vanishing BH entropy \(S_{\mathrm{H}}\to 0\) in the limit of the vanishing horizon radius \(r_{g}\to 0\). On the other hand, by integrating \(\delta M_{\mathrm{H}}\) with respect to \(r_{g}\), we obtain the mass of the system
\[M_{\mathrm{H}}=16\pi GM_{0}\left(G_{4}\left(\frac{q^{2}}{2}\right)-q^{2}G_{4X }\left(\frac{q^{2}}{2}\right)\right). \tag{85}\]
where \(M_{0}\) is defined in Eq. (45), and we set the integration constant so that we have the vanishing mass \(M_{\mathrm{H}}\to 0\) in the limit of the vanishing horizon radius \(r_{g}\to 0\).
For a more explicit comparison, we consider the specific model [38],
\[G_{2}(X)=0,\qquad G_{4}(X)=\frac{1}{16\pi G}+\beta X, \tag{86}\]
which trivially satisfies the conditions (82). The BH entropy and total mass of the system of the stealth Schwarzschild solutions, Eqs. (84) and (85) respectively, reduce to
\[S_{\mathrm{H}}=S_{0}\left(1-8\pi Gq^{2}\beta\right),\qquad M_{\mathrm{H}}=M_{ 0}\left(1-8\pi Gq^{2}\beta\right). \tag{87}\]
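The reduction from Eqs. (84)-(85) to Eq. (87) is elementary; as a hedged side check (ours, not in the original text), it can be reproduced symbolically:

```python
# Hedged sketch: plug G_4(X) = 1/(16*pi*G) + beta*X of Eq. (86) into Eqs. (84)-(85)
# and confirm the reduction to Eq. (87).
import sympy as sp

q, G, beta, rg, X = sp.symbols('q G beta r_g X', positive=True)

G4 = 1/(16*sp.pi*G) + beta*X                 # Eq. (86)
G4X = sp.diff(G4, X)
combo = (G4 - q**2*G4X).subs(X, q**2/2)      # the bracket appearing in Eqs. (84)-(85)

S0, M0 = sp.pi*rg**2/G, rg/(2*G)
expected = 1 - 8*sp.pi*G*q**2*beta
print(sp.simplify(16*sp.pi*G*S0*combo - S0*expected))   # -> 0, i.e. S_H = S_0 (1 - 8 pi G q^2 beta)
print(sp.simplify(16*sp.pi*G*M0*combo - M0*expected))   # -> 0, i.e. M_H = M_0 (1 - 8 pi G q^2 beta)
```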
In the same theory (86), there is also the GR Schwarzschild solution with the trivial scalar field, with the BH entropy and mass
\[S_{\mathrm{GR}}=S_{0},\qquad M_{\mathrm{GR}}=M_{0}, \tag{88}\]
respectively. We will discuss the thermodynamic properties of the stealth Schwarzschild solutions in Sec. V.1.
### Shift- and reflection-symmetric theories with cosmological constant (\(\Lambda\neq 0\))
We focus on the specific shift- and reflection-symmetric subclass of the Horndeski theories (79) such that
\[G_{2}(X)=\eta X-\frac{\Lambda}{8\pi G},\qquad G_{4}(X)=\frac{1}{16\pi G}+ \beta X, \tag{89}\]
where \(\Lambda\) is the cosmological constant and \(\beta\) is the coupling constant. We assume the static and spherically symmetric spacetime (24) and the linearly time-dependent scalar field (35). Then, Eq. (39) reduces to
\[-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2}\sqrt{\frac{ h}{f}}J^{[t}\xi^{r]} = r^{2}\sqrt{\frac{h}{f}}\frac{1}{r^{2}h}\Big{\{}-4r\beta f^{2}h\psi^{\prime} \delta\psi^{\prime}-r\left[q^{2}\beta+h\left(\frac{1}{8\pi G}+3\beta f\psi^{ \prime 2}\right)\right]\delta f\Big{\}}. \tag{90}\]
There exist the Schwarzschild-(A)dS solutions
\[f(r)=h(r)=1-\frac{\bar{\Lambda}}{3}r^{2}-\frac{r_{g}}{r}\left(1-\frac{\bar{ \Lambda}}{3}r_{g}^{2}\right),\qquad\psi^{\prime}(r)=q\frac{\sqrt{1-h(r)}}{\sqrt {f(r)h(r)}}, \tag{91}\]
with the conditions
\[\bar{\Lambda}:=-\frac{\eta}{2\beta},\qquad q=\sqrt{\frac{\eta+2\beta\Lambda}{1 6\pi G\beta\eta}}. \tag{92}\]
Thus, for \(\beta>0\) (\(\beta<0\)), we obtain the Schwarzschild-AdS (dS) solutions.
Assuming that \(\eta>0\) and \(G>0\), for non-negativity inside the square root of \(q\), we require
\[\Lambda\geq\bar{\Lambda}. \tag{93}\]
Evaluating Eq. (28) with use of Eq. (27), we obtain the first law of BH thermodynamics
\[T_{\text{H(H)}}\delta S_{\text{H}}=\delta M_{\text{H}}=-\frac{(2\beta+r_{g}^{2 }\eta)(-\eta+2\beta\Lambda)}{8G\beta\eta}\delta r_{g}, \tag{94}\]
where the Hawking temperature (29) is given by
\[T_{\text{H(H)}}=\frac{2\beta+r_{g}^{2}\eta}{8\pi r_{g}\beta}=T_{0}\left(1+ \frac{\eta}{2\beta}r_{g}^{2}\right). \tag{95}\]
Thus, the BH entropy for the Schwarzschild-(A)dS solutions in the Horndeski theory is given by
\[S_{\text{H}}=\frac{\pi r_{g}^{2}}{2\eta G}\left(\eta-2\beta\Lambda\right)= \frac{S_{0}}{2}\left(1-\frac{2\beta}{\eta}\Lambda\right), \tag{96}\]
where we set the integration constant so that we have \(S_{\text{H}}\to 0\) in the limit of the vanishing horizon radius \(r_{g}\to 0\). In this theory (89), there is also the GR Schwarzschild-(A)dS solution,
\[f_{\text{GR}}(r)=h_{\text{GR}}(r)=1-\frac{\Lambda}{3}r^{2}-\frac{r_{g}}{r} \left(1-\frac{\Lambda}{3}r_{g}^{2}\right),\qquad\psi^{\prime}(r)=0, \tag{97}\]
which exists irrespective of \(\beta\) and \(\eta\), and whose BH entropy is given by \(S_{\text{GR}}=S_{0}\). In the limit of \(\Lambda=\bar{\Lambda}\), where \(q=0\) and the scalar field is trivial, we recover the Schwarzschild-(A)dS solutions in GR and obtain the area law (47).
On the other hand, the mass of the system is given by
\[M_{\text{H}}=\frac{r_{g}\left(\eta-2\beta\Lambda\right)\left(r_{g}^{2}\eta+6 \beta\right)}{24G\beta\eta}=\frac{M_{0}}{2}\left(1-\frac{2\beta}{\eta}\Lambda \right)\left(1+\frac{\eta}{6\beta}r_{g}^{2}\right), \tag{98}\]
where we set the integration constant so that we have \(M_{\text{H}}\to 0\) in the limit of the vanishing horizon radius \(r_{g}\to 0\), which disagrees with the total mass of the stealth BH given by
\[M_{\text{BH}}=\frac{r_{g}}{12\beta G}\left(r_{g}^{2}\eta+6\beta\right)=M_{0} \left(1-\frac{\bar{\Lambda}}{3}r_{g}^{2}\right), \tag{99}\]
except for \(\Lambda=\bar{\Lambda}\). For the GR Schwarzschild-(A)dS BHs (97), we obtain \(M_{\text{GR}}=M_{0}\left(1-\frac{\Lambda}{3}r_{g}^{2}\right)\). We will discuss the thermodynamic properties of the Schwarzschild-(A)dS solutions in Sec. V.2.
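As an independent cross-check (our addition, not in the original text), the quantities (95), (96), and (98) can be verified to satisfy the first law (94), \(\delta M_{\rm H}=T_{\rm H(H)}\delta S_{\rm H}\), symbolically:

```python
# Hedged sketch (ours): check dM_H/dr_g = T_H(H) * dS_H/dr_g for Eqs. (95), (96), (98).
import sympy as sp

rg, G, eta = sp.symbols('r_g G eta', positive=True)
beta, Lam = sp.symbols('beta Lambda', real=True, nonzero=True)

T_H = (2*beta + rg**2*eta)/(8*sp.pi*rg*beta)                      # Eq. (95)
S_H = sp.pi*rg**2*(eta - 2*beta*Lam)/(2*eta*G)                    # Eq. (96)
M_H = rg*(eta - 2*beta*Lam)*(rg**2*eta + 6*beta)/(24*G*beta*eta)  # Eq. (98)

print(sp.simplify(sp.diff(M_H, rg) - T_H*sp.diff(S_H, rg)))       # -> 0
```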
### Shift-symmetric theories with the coincident speeds of GWs with the speed of light
Finally, we focus on the subclass of the shift-symmetric Horndeski theories satisfying the requirement that the propagation speed of GWs is equal to the speed of light, i.e., \(c_{\text{gw}}=c\), whose Lagrangian density is given by
\[\mathcal{L}=\frac{1}{16\pi G}R+G_{2}(X)-G_{3}(X)\square\phi. \tag{100}\]
We assume the static and spherically symmetric spacetime (24) and the linearly time-dependent scalar field (35). In this case, Eq. (39) reduces to
\[-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2} \sqrt{\frac{h}{f}}J^{[t}\xi^{r]} = r^{2}\sqrt{\frac{h}{f}}\Big{\{}\Big{(}-\frac{1}{8\pi Gr}-\frac{f}{2}G_{3X} \psi^{\prime 3}\Big{)}\delta f-\frac{q^{2}f\psi^{\prime}}{2h^{2}}G_{3X}\delta h -f^{2}G_{3X}\psi^{\prime 2}\delta\psi^{\prime}\Big{\}}. \tag{101}\]
We focus on the stealth Schwarzschild solution (81) which exists under the conditions
\[G_{2}\left(\frac{q^{2}}{2}\right)=G_{2X}\left(\frac{q^{2}}{2} \right)=G_{3X}\left(\frac{q^{2}}{2}\right)=0, \tag{102}\]
where Eq. (101) further reduces to
\[-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2 }\sqrt{\frac{h}{f}}J^{[t}\xi^{r]}=r^{2}\sqrt{\frac{h}{f}}\frac{1}{8\pi Gr} \delta f. \tag{103}\]
Evaluating Eq. (28) with use of Eq. (27), we obtain the first law of thermodynamics \(T_{\rm H(H)}\delta S_{\rm H}=\delta M_{\rm H}=\frac{\delta r_{g}}{2G}\), where \(T_{\rm H(H)}=T_{0}\) from Eq. (29). By integrating them, the mass and entropy are, respectively, given by
\[M_{\rm H}=M_{0},\qquad S_{\rm H}=S_{0}, \tag{104}\]
where we set the integration constant so that we have \(S_{\rm H}\to 0\) and \(M_{\rm H}\to 0\) in the limit of \(r_{g}\to 0\). Hence we conclude that both BH solutions are equally stable from the viewpoint of BH thermodynamics.
## V Thermodynamical instability of black holes with linearly time-dependent scalar field
As an application of the results presented in Sec. IV, we discuss thermodynamical instability by use of the BH entropy. When some gravitational theory contains two (or more) BH solutions, the comparison of the BH entropies will tell us which BH solution is thermodynamically favored. In GR, the uniqueness of the Kerr(-Newman) BH solution does not hold when we include non-Abelian fields and/or other fields. In this case, there exist hairy BH solutions such as colored BHs [84; 85; 86; 87]. We can then study their stability by a perturbation analysis, whose result is consistent with the simple argument from the thermodynamical analysis, that is, if the entropy of the first BH solution is smaller than that of the second BH solution, at least the first black hole is thermodynamically unstable [88; 89; 90; 91; 92].
In the present Horndeski theories, we may discuss thermodynamical instability when there exist two or more BH solutions. As we discussed in Sec. IV, there are BH solutions with a linearly time-dependent scalar field. In this section, we discuss the thermodynamical stability of those BHs. In particular, for the BH solutions discussed in Sec. IV.1 and Sec. IV.2, it has been argued that perturbations around them are infinitely strongly coupled, and the linear perturbation theory cannot be trusted at arbitrarily low energy scales [74; 75]. Thus, the stability of the stealth solution is unclear at the level of the linearized analysis. However, through the analysis presented in this section, we will point out thermodynamical instabilities of these solutions.
### Shift- and reflection-symmetric theories without cosmological constant (\(\Lambda=0\))
In this subsection, we consider the Horndeski theory given by Eq. (86). When we assume a linearly time-dependent scalar field, there exist two Schwarzschild solutions; one is the GR Schwarzschild BH with the mass and entropy given by Eq. (88), and the other is the stealth Schwarzschild BH (the Horndeski Schwarzschild BH) with the mass and entropy given by Eq. (87). When we compare these two entropies (\(S_{\rm GR}\,,S_{\rm H}\)) at the same mass value \(M_{\rm GR}=M_{\rm H}\), we can easily find
\[S_{\rm H}=\frac{S_{\rm GR}}{1-8\pi Gq^{2}\beta}. \tag{105}\]
Hence we obtain the following results:
\[\left\{\begin{array}{ll}S_{\rm H}&>S_{\rm GR}\qquad\mbox{when} \qquad\beta>0,\\ S_{\rm H}&<S_{\rm GR}\qquad\mbox{when}\qquad\beta<0.\end{array}\right. \tag{106}\]
As a result, we conclude that the Horndeski Schwarzschild BH is thermodynamically more stable than the GR Schwarzschild BH when \(\beta>0\), while the result turns out to be the opposite if \(\beta<0\).
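A small symbolic check of Eq. (105) (our addition, assuming Eqs. (87) and (88) as quoted): equating the two masses at generally different horizon radii and comparing the corresponding entropies gives the stated ratio.

```python
# Hedged sketch (ours): derive Eq. (105) from Eqs. (87)-(88) at equal mass.
import sympy as sp

r1, r2, G, q = sp.symbols('r_1 r_2 G q', positive=True)
beta = sp.symbols('beta', real=True)
kappa = 1 - 8*sp.pi*G*q**2*beta                    # common factor in Eq. (87)

M_GR, S_GR = r1/(2*G), sp.pi*r1**2/G               # Eq. (88)
M_H,  S_H  = kappa*r2/(2*G), kappa*sp.pi*r2**2/G   # Eq. (87)

r1_sol = sp.solve(sp.Eq(M_GR, M_H), r1)[0]         # same mass: r_1 = kappa*r_2
ratio = sp.simplify((S_H/S_GR).subs(r1, r1_sol))
print(ratio)   # equals 1/(1 - 8*pi*G*q**2*beta), i.e. Eq. (105)
```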
### Shift- and reflection-symmetric theories with cosmological constant (\(\Lambda\neq 0\))
Here we discuss the Horndeski theory given by Eq. (89). When we assume a linearly time-dependent scalar field, there exist two Schwarzschild-(A)dS solutions; one is the Schwarzschild-(A)dS solution with the cosmological constant \(\Lambda\), and the other is that with the effective cosmological constant \(\bar{\Lambda}=-\eta/(2\beta)\) given by Eq. (92). We have two different Schwarzschild-(A)dS solutions as discussed before, and summarize the thermodynamical variables for the two BH solutions as follows:
* Schwarzschild-(A)dS solution with \(\Lambda\) (GR Schwarzschild-(A)dS BH) \[\mathrm{mass}:M_{\mathrm{GR}}=M_{0}\left(1-\frac{\Lambda r_{g}^{2}}{3}\right) \,,\,\,\,\,\mathrm{entropy}:S_{\mathrm{GR}}=S_{0}\,,\,\,\,\,\mathrm{ temperature}:T_{\mathsf{H}(\mathrm{GR})}=\left(1-\Lambda r_{g}^{2}\right)T_{0}\,,\] (107) where \(M_{0}\) and \(S_{0}\) are defined in Eqs. (45) and (46), and \(T_{0}:=\frac{1}{4\pi r_{g}}\) represents the Hawking temperature in the Schwarzschild background with the horizon radius \(r_{g}\).
* Schwarzschild-(A)dS solution with \(\bar{\Lambda}\) (Horndeski Schwarzschild-(A)dS BH) \[\mathrm{mass}:M_{\mathrm{H}}=M_{0}\left(1-\frac{\bar{\Lambda}r_{g}^{2}}{3}\right)\frac{\Lambda+\bar{\Lambda}}{2\bar{\Lambda}}\,,\,\,\,\mathrm{entropy}:S_{\mathrm{H}}=S_{0}\frac{\Lambda+\bar{\Lambda}}{2\bar{\Lambda}}\,,\,\,\,\mathrm{temperature}:T_{\mathsf{H}(\mathrm{H})}=\left(1-\bar{\Lambda}r_{g}^{2}\right)T_{0}\,.\] (108) Since \(\Lambda\geq\bar{\Lambda}\), we can classify the solutions into three cases: (1) \(\Lambda\geq\bar{\Lambda}>0\) (\(\beta<0\)), (2) \(\Lambda>0\,,\bar{\Lambda}<0\) (\(\beta>0\)), (3) \(0>\Lambda\geq\bar{\Lambda}\) (\(\beta>0\)). In the case (1), the two BH solutions are Schwarzschild-dS solutions, while in the case (3), we find two Schwarzschild-AdS solutions. We shall discuss their thermodynamical instabilities below. For the case (2), since one is a Schwarzschild-dS solution and the other is a Schwarzschild-AdS solution, the boundary conditions are completely different. We may not expect any phase transition between them. We introduce the curvature radii \(\ell\) and \(\bar{\ell}\), which are defined by \(\ell=\sqrt{3/\epsilon_{\Lambda}\Lambda}\) and \(\bar{\ell}=\sqrt{3/\epsilon_{\bar{\Lambda}}\bar{\Lambda}}\), where \(\epsilon_{\Lambda}\) and \(\epsilon_{\bar{\Lambda}}\) are the signs of \(\Lambda\) and \(\bar{\Lambda}\), respectively.
#### v.2.1 \(\Lambda\geq\bar{\Lambda}>0\)
In this case, \(\ell\leq\bar{\ell}\), and the thermodynamical variables are given by
\[M_{\mathrm{GR}}=M_{0}\left(1-\frac{r_{g}^{2}}{\ell^{2}}\right)\,,\,\,\,S_{\mathrm{GR}}=S_{0}\,,\,\,\,\,T_{\mathsf{H}(\mathrm{GR})}=\left(1-\frac{3r_{g}^{2}}{\ell^{2}}\right)T_{0}\,, \tag{109}\] \[M_{\mathrm{H}}=M_{0}\left(1-\frac{r_{g}^{2}}{\bar{\ell}^{2}}\right)\frac{\ell^{2}+\bar{\ell}^{2}}{2\ell^{2}}\,,\,\,\,S_{\mathrm{H}}=S_{0}\frac{\ell^{2}+\bar{\ell}^{2}}{2\ell^{2}}\,,\,\,\,T_{\mathsf{H}(\mathrm{H})}=\left(1-\frac{3r_{g}^{2}}{\bar{\ell}^{2}}\right)T_{0}\,. \tag{110}\]
In order to discuss thermodynamical stability, we plot the mass-entropy diagram, which is given in Fig. 1 for the case of \(\bar{\ell}/\ell=1.1\).
For a given mass \(M\), the entropy of the GR Schwarzschild-dS BH with \(\Lambda\) is always larger than that of the Horndeski Schwarzschild-dS BH with \(\bar{\Lambda}\). It means that the Horndeski Schwarzschild-dS BH is thermodynamically less stable than the GR Schwarzschild-dS BH. We therefore expect a thermodynamical phase transition from the Horndeski Schwarzschild-dS BH to the GR Schwarzschild-dS BH. Since there exists a scalar field \(\phi\) outside the Horndeski Schwarzschild-dS BH, the scalar field propagates away to infinity when the transition occurs. If the entropy is conserved, the mass energy decreases by the emission of the scalar field. In general, we expect that the entropy increases while the mass energy decreases, so that the Horndeski Schwarzschild-dS BH transits to the GR Schwarzschild-dS BH in the upper-left direction in the diagram.
#### v.2.2 \(\bar{\Lambda}\leq\Lambda<0\)
In this case, both BHs are described by the Schwarzschild-AdS solutions with \(\ell\geq\bar{\ell}\), and the thermodynamical variables are given by
\[M_{\mathrm{GR}}=M_{0}\left(1+\frac{r_{g}^{2}}{\ell^{2}}\right) \,,\,\,\,S_{\mathrm{GR}}=S_{0}\,,\,\,\,\,T_{\mathsf{H}(\mathrm{GR})}=\left(1 +\frac{3r_{g}^{2}}{\ell^{2}}\right)T_{0}\,, \tag{111}\]
\[M_{\rm H}=M_{0}\left(1+\frac{r_{g}^{2}}{\bar{\ell}^{2}}\right)\frac{\ell^{2}+\bar{\ell}^{2}}{2\ell^{2}}\,,\ \ S_{\rm H}=S_{0}\frac{\ell^{2}+\bar{\ell}^{2}}{2\ell^{2}}\,,\ \ T_{\rm H(H)}=\left(1+\frac{3r_{g}^{2}}{\bar{\ell}^{2}}\right)T_{0}\,. \tag{112}\]
We plot the mass-entropy diagram, which is given in Fig. 2 for the case of \(\bar{\ell}/\ell=0.9\) (a) and \(0.1\) (b), (c).
In this case, the two curves \(S_{\rm GR}(M)\) and \(S_{\rm H}(M)\) intersect at some critical mass \(M_{\rm GR\text{-}H}\), beyond which \(S_{\rm GR}>S_{\rm H}\). In an asymptotically AdS spacetime, there exists another critical mass \(M_{\rm HP}\), below which the Schwarzschild-AdS BH evaporates into thermal radiation in AdS space via the Hawking-Page transition [93]. In the present case, since there are two Schwarzschild-AdS BH solutions, we find two critical masses, \(M_{\rm HP(GR)}\) and \(M_{\rm HP(H)}\) corresponding to the GR Schwarzschild-AdS BH and the Horndeski Schwarzschild-AdS BH, respectively. We find that \(M_{\rm HP(GR)}>M_{\rm HP(H)}\). In the limit of \(\bar{\ell}\to\ell\), the critical horizon radius \(r_{g\rm GR\text{-}H}\) becomes \(\ell/\sqrt{5}\), which is smaller than the HP transition radius \(r_{g\rm HP(GR)}=\ell/\sqrt{3}\). We then find \(M_{\rm GR\text{-}H}<M_{\rm HP(GR)}\). As a result, we can classify into two cases; (1) \(M_{\rm HP(GR)}>M_{\rm HP(H)}>M_{\rm GR\text{-}H}\), and (2) \(M_{\rm HP(GR)}>M_{\rm GR\text{-}H}>M_{\rm HP(H)}\). If \(\mathfrak{r}_{\rm cr}<\bar{\ell}/\ell<1\), we find the case (1), while when \(0<\bar{\ell}/\ell<\mathfrak{r}_{\rm cr}\), we obtain the case (2). The critical value \(\mathfrak{r}_{\rm cr}\) is given by the root of the equation \(\mathfrak{r}_{\rm cr}^{6}+3\mathfrak{r}_{\rm cr}^{4}+16\mathfrak{r}_{\rm cr}^{2}-4=0\), i.e. \(\mathfrak{r}_{\rm cr}\approx 0.48835\). We then find the following various evolution scenarios depending on the coupling constants:
* Case (1) \(\mathfrak{r}_{\rm cr}<\bar{\ell}/\ell<1\)
Figure 1: The entropy of the GR Schwarzschild-dS BH (the red curve) and that of the Horndeski Schwarzschild-dS BH (the blue curve) in terms of the mass.
Figure 2: The entropy of the GR Schwarzschild-AdS BH (the red curve) and that of the Horndeski Schwarzschild-AdS BH (the blue curve) in terms of the mass ((a) is the case of \(\bar{\ell}/\ell=0.9\), while (b) is for \(\bar{\ell}/\ell=0.1\). (c) is the enlarged version of (b)). The dotted curves are thermal AdS phases via Hawking-Page transition, while the solid curves denote ”large” Schwarzschild-AdS BH phases.
Below \(M_{\text{GR-H}}\), we find only thermal radiation in AdS space, in which the effective cosmological constant is fixed by \(\bar{\Lambda}\), while in the range of \(M_{\text{GR-H}}<M<M_{\text{HP(H)}}\), it is also thermal radiation in AdS space but with the cosmological constant \(\Lambda\). In the range of \(M_{\text{HP(GR)}}>M>M_{\text{HP(H)}}\), the Horndeski Schwarzschild-AdS BH will evaporate via the Hawking-Page transition, finding thermal radiation in AdS space but with the cosmological constant \(\Lambda\). When \(M>M_{\text{HP(GR)}}\), the Horndeski Schwarzschild-AdS BH will evolve into the GR Schwarzschild-AdS BH via thermal phase transition.
* Case (2) \(0<\bar{\ell}/\ell<\mathfrak{r}_{\text{cr}}\) Below \(M_{\text{HP(H)}}\) we find only thermal radiation in AdS space, in which the effective cosmological constant is fixed by \(\bar{\Lambda}\) just as Case (1). In the range of \(M_{\text{HP(H)}}<M<M_{\text{GR-H}}\), we find the transition from thermal radiation in AdS space with \(\Lambda\) into the stable Horndeski Schwarzschild-AdS BH. In the range of \(M_{\text{HP(GR)}}>M>M_{\text{HP(H)}}\), the Horndeski Schwarzschild-AdS BH will evaporate into thermal radiation in AdS space with \(\Lambda\). When \(M>M_{\text{HP(GR)}}\), the Horndeski Schwarzschild-AdS BH will evolve into the GR Schwarzschild-AdS BH via thermal phase transition just as Case (1).
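As a side check (ours, not part of the original text), the quoted value of \(\mathfrak{r}_{\rm cr}\) can be reproduced numerically as the positive root of \(\mathfrak{r}_{\rm cr}^{6}+3\mathfrak{r}_{\rm cr}^{4}+16\mathfrak{r}_{\rm cr}^{2}-4=0\):

```python
# Hedged sketch (ours): numerical root of the critical-ratio equation quoted above.
import sympy as sp

r = sp.symbols('r', positive=True)
r_cr = sp.nsolve(r**6 + 3*r**4 + 16*r**2 - 4, r, 0.5)
print(r_cr)   # ~ 0.48835
```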
### Relation between the cases of \(\Lambda=0\) (Sec. V.1) and of \(\Lambda\neq 0\) (Sec. V.2)
In order to discuss the relation between Schwarzschild BH and Schwarzschild (A)dS BH discussed in the two previous subsections, we rewrite the mass and entropy by use of \(\beta\) and \(q\). From Eq. (92), we find
\[\frac{\bar{\ell}^{2}+\ell^{2}}{2\ell^{2}}=\frac{\bar{\Lambda}+ \Lambda}{2\bar{\Lambda}}=1-8\pi Gq^{2}\beta \tag{113}\]
Using this relation and Eqs. (109),(110), (111), and (112), we obtain the relation between the masses \(M_{\text{GR}}\) and \(M_{\text{H}}\) and that of the entropies \(S_{\text{GR}}\) and \(S_{\text{H}}\) as
\[M_{\text{GR}}=M_{0}\left(1\mp\frac{r_{g}^{2}}{\ell^{2}}\right)\,,\ \ S_{\text{GR}}=S_{0} \tag{114}\] \[M_{\text{H}}=M_{0}\left(1\mp\frac{r_{g}^{2}}{\bar{\ell}^{2}}\right)\left(1-8\pi Gq^{2}\beta\right),\ \ S_{\text{H}}=S_{0}\left(1-8\pi Gq^{2}\beta\right)\,, \tag{115}\]
where \(\mp\) correspond to the Schwarzschild-dS BH and Schwarzschild-AdS BH, respectively. When we take the limit of \(\Lambda,\bar{\Lambda}\to 0\) (\(\ell,\bar{\ell}\rightarrow\infty\)), we find the same relation (87) of the masses and entropies of the stealth Schwarzschild BH and GR Schwarzschild BH.
As discussed in Sec. V.2, for the Schwarzschild-AdS BH (\(\beta>0\)), \(S_{\text{H}}>S_{\text{GR}}\) in the small mass limit, while for the Schwarzschild-dS BH (\(\beta<0\)) the relation becomes the opposite, which is consistent with the thermodynamical instability of the Schwarzschild BH discussed in Sec. V.1. We note that the Schwarzschild BH does not show the Hawking-Page transition, and then the BH with larger entropy is more stable than the other.
## VI Black hole thermodynamics of charged black holes
BH solutions discussed so far contained only one independent charge, i.e., the mass of the BH, or equivalently the radius of the BH event horizon. In order to discuss BH thermodynamics with two or more independent charges, in this section we will focus on the Horndeski theories minimally coupled to the \(U(1)\)-invariant vector field. In such theories, BH solutions could contain at least two independent charges, the mass and electric (and/or magnetic) charges. Here, we will focus on the electrically charged BH solution as an extension of the earlier work [81].
We will extend the general formulation presented in Sec. II and derive the Noether charge potential associated with the diffeomorphism invariance, including the contribution of the general \(U(1)\)-invariant vector field. We will then apply our formulation to static and spherically symmetric charged BH solutions and obtain the variations of the Hamiltonian evaluated at the horizon and at spatial infinity. As a concrete example of charged BH solutions, we will consider the extension of the irrational coupling model discussed in Sec. III.4 minimally coupled to the ordinary Maxwell field. We show that in this model the differential of the BH entropy is integrable and the first law of the BH thermodynamics is recovered.
We will then consider the general reflection- and shift-symmetric class of the Horndeski theories minimally coupled to the Maxwell field, and clarify the general conditions under which the differential of the BH entropy is integrable in the presence of the two independent charges.
### The Noether charge potential with the \(U(1)\)-invariant vector field
We consider the Horndeski theory minimally coupled to a \(U(1)\)-invariant vector field
\[S = \int d^{4}x\sqrt{-g}\mathcal{L}=\int d^{4}x\sqrt{-g}\left(\sum_{i=2}^{5} \mathcal{L}_{i}+G_{A}(\mathcal{F})\right), \tag{116}\]
where \(\mathcal{L}_{i}\) (\(i=2,3,4,5\)) are given by Eqs. (3)-(6), and \(G_{A}(\mathcal{F})\) is the general \(U(1)\)-invariant Lagrangian density for the vector field \(A_{\mu}\) given as the general function of
\[\mathcal{F}:=-\frac{1}{4}g^{\alpha\beta}g^{\mu\nu}F_{\alpha\mu}F_{\beta\nu}, \tag{117}\]
with \(F_{\mu\nu}:=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) being the electromagnetic field strength. The variation of the action (116) is given by
\[\delta S=\int d^{4}x\sqrt{-g}\left(E_{\mu\nu}\delta g^{\mu\nu}+E_{\phi}\delta \phi+\nabla_{\nu}\left(G_{A,\mathcal{F}}F^{\nu}{}_{\mu}\right)\delta A^{\mu}+ \nabla_{\mu}J^{\mu}\right), \tag{118}\]
where the equation of motion of the vector field is given by \(\nabla_{\nu}\left(G_{A,\mathcal{F}}F^{\nu}{}_{\mu}\right)=0\) with \(G_{A,\mathcal{F}}:=\frac{\partial G_{A}}{\partial\mathcal{F}}\), and the boundary current is given by
\[J^{\mu} = \sum_{i=2}^{5}J^{\mu}_{(i)}+J^{\mu}_{A}, \tag{119}\]
where \(J^{\mu}_{(i)}\) (\(i=2,3,4,5\)) are given by Eqs. (9)-(12), and
\[J^{\mu}_{A}:=-G_{A,\mathcal{F}}F^{\mu\nu}\delta A_{\nu}. \tag{120}\]
We also define the dual 3-form to \(J^{\mu}\) as in Eq. (14).
Under the diffeomorphism transformation, \(x^{\mu}\to x^{\mu}+\xi^{\mu}(x^{\mu})\), the variations of the metric and scalar field are given by Eq. (15), and that of the vector field is given by \(\delta_{\xi}A_{\mu}=\xi^{\sigma}\nabla_{\sigma}A_{\mu}+A_{\sigma}\nabla_{\mu}\xi^{\sigma}\). Using the background equations of motion, under the diffeomorphism transformation, we obtain
\[J^{\mu}_{(\xi)}-\xi^{\mu}\mathcal{L}=2\nabla_{\nu}K^{[\nu\mu]}_{(\xi)}=2 \nabla_{\nu}\left(\sum_{i=2}^{5}K^{[\nu\mu]}_{(i)(\xi)}+K^{[\nu\mu]}_{A(\xi)} \right), \tag{121}\]
where each individual contribution from the Horndeski theories is given by Eqs. (17)-(20), and the Noether charge potential for the vector field is given by
\[K^{\mu\nu}_{(\xi)A} = \frac{1}{2}G_{A,\mathcal{F}}F^{\mu\nu}A_{\sigma}\xi^{\sigma}, \tag{122}\]
respectively. We then define the dual 2-form of the Noether charge potential \(K^{\mu\nu}_{(\xi)}\) as Eq. (21), and the 2-form tensor where the first index of \(\Theta_{\nu\alpha\beta}\) defined in Eq. (14) is contracted with the infinitesimal diffeomorphism transformation \(\xi^{\nu}\), by Eq. (22). We then consider the variation of the dual Noether charge potential with respect to the physical parameters subtracted by Eq. (22) as Eq. (23). The integration of Eq. (23) on the boundaries of the Cauchy surface gives rise to the variation of the Hamiltonian [76; 79].
As the background, we consider the static and spherically symmetric solutions whose metric is written by Eq. (24), where the functions \(h(r)\) and \(f(r)\) contain the common largest root at \(r=r_{g}>0\) corresponding to the position of the BH event horizon, and \(f(r)>0\) and \(h(r)>0\) for \(r>r_{g}\). For the scalar field, we focus on the static ansatz (31) for simplicity. For the \(U(1)\)-invariant vector field, we assume the following ansatz
\[A_{\mu}=\left(A_{0}(r),0,0,0\right), \tag{123}\]
which only gives rise to the electric field \(F_{rt}=A_{0}^{\prime}(r)\). We choose the gauge such that the value of \(A_{0}(r)\) vanishes on the horizon \(r=r_{g}\), i.e., \(A_{0}(r_{g})=0\). We assume that \(\xi^{\mu}\) corresponds to the timelike Killing vector field, \(\xi^{\mu}=(1,0,0,0)\).
The variations in terms of the integration constants can be written as Eq. (26) and \(\delta A_{0}(r)=\sum_{j}\frac{\partial A_{0}(r)}{\partial c_{j}}\delta c_{j}\), where \(c_{j}\)'s are integration constants of the BH solutions, which include the position of the event horizon \(r_{g}\) and the electric charge \(Q\). With use of Eq. (23), the variation of the Hamiltonian with respect to the integration constants is given by the contributions from the horizon \(r\to r_{g}\) and infinity \(r\to\infty\), \(\delta\mathcal{H}=\delta\mathcal{H}_{\infty}-\delta\mathcal{H}_{H}\), where \(\delta\mathcal{H}_{\infty}\) and \(\delta\mathcal{H}_{H}\) are given by Eq. (27). The conservation of the Hamiltonian \(\mathcal{H}=0\) yields
\[\delta\mathcal{H}_{\infty}=\delta\mathcal{H}_{H}. \tag{124}\]
### The Einstein-Maxwell theory
As the simplest example without the dynamical scalar field \(\phi=0\), we consider the Einstein-Maxwell theory with the cosmological constant \(\Lambda\)
\[G_{2}=-\frac{\Lambda}{8\pi G}\qquad G_{4}=\frac{1}{16\pi G},\qquad G_{3}=G_{5}=0,\qquad G_{A}=\mathcal{F}, \tag{125}\]
under which Eq. (32) reduces to
\[-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2} \sqrt{\frac{h}{f}}J^{[t}\xi^{r]} \tag{126}\] \[= r^{2}\sqrt{\frac{h}{f}}\Big{\{}-\left[\frac{1}{8\pi Gr}+\frac{A _{0}(r)A_{0}^{\prime}(r)}{2h(r)}\right]\delta f(r)+\frac{f(r)A_{0}(r)A_{0}^{ \prime}(r)}{2h(r)^{2}}\delta h(r)-\frac{A_{0}(r)f(r)}{h(r)}\delta A_{0}^{\prime }(r)\Big{\}}.\]
In the theory (125), there exists the Reissner-Nordstrom-de Sitter solution
\[f(r)=h(r)\ =\ 1-\frac{\Lambda}{3}r^{2}+\frac{1}{r}\left(\frac{r_{g}}{3} \left(-3+r_{g}^{2}\Lambda\right)-\frac{4\pi GQ^{2}}{r_{g}}\right)+\frac{4\pi GQ ^{2}}{r^{2}},\quad A_{0}(r)=Q\left(\frac{1}{r}-\frac{1}{r_{g}}\right), \tag{127}\]
which satisfies the gauge condition \(A_{0}(r_{g})=0\).
The variation of the Hamiltonian on the horizon \(r=r_{g}\) yields
\[\delta\mathcal{H}_{H}=T_{\rm H(H)}\delta S_{\rm H}=\frac{\delta r_{g}}{2G} \left[1-r_{g}^{2}\Lambda-\frac{4\pi GQ^{2}}{r_{g}^{2}}\right], \tag{128}\]
where the Hawking temperature (29) is given by \(T_{\rm H(H)}=T_{0}\left(1-r_{g}^{2}\Lambda-\frac{4\pi GQ^{2}}{r_{g}^{2}}\right)\). Thus, we obtain the integrable relation \(\delta S_{\rm H}=\left(2\pi r_{g}/G\right)\delta r_{g}\), as the proportionality coefficient \(\left(2\pi r_{g}/G\right)\) does not depend on the electric charge \(Q\). As a consequence, we obtain the area law Eq. (47), where we set the integration constant so that we have \(S_{\rm H}\to 0\) in the limit of the vanishing horizon radius \(r_{g}\to 0\).
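A brief consistency check (our addition, not in the paper): applying the standard surface-gravity formula \(T=f^{\prime}(r_{g})/(4\pi)\), valid for \(f=h\), to the solution (127) reproduces the Hawking temperature quoted above.

```python
# Hedged sketch (ours): Hawking temperature of the Reissner-Nordstrom-de Sitter solution (127).
import sympy as sp

r, rg, G, Q = sp.symbols('r r_g G Q', positive=True)
Lam = sp.symbols('Lambda', real=True)

f = 1 - Lam*r**2/3 + (rg*(-3 + rg**2*Lam)/3 - 4*sp.pi*G*Q**2/rg)/r + 4*sp.pi*G*Q**2/r**2
T = sp.diff(f, r).subs(r, rg)/(4*sp.pi)                       # T = f'(r_g)/(4*pi) for f = h
T_quoted = (1 - rg**2*Lam - 4*sp.pi*G*Q**2/rg**2)/(4*sp.pi*rg)
print(sp.simplify(T - T_quoted))                              # -> 0
```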
The variation of the Hamiltonian at the infinity \(r\rightarrow\infty\) yields
\[\delta\mathcal{H}_{\infty}=\frac{\partial M_{H}}{\partial r_{g}}\delta r_{g}= \delta M_{H}-\frac{\partial M_{H}}{\partial Q}\delta Q=\delta M_{H}-\Phi_{H} \delta Q, \tag{129}\]
where \(\Phi_{H}:=-4\pi\left(A_{0}(r\rightarrow\infty)-A_{0}(r_{g})\right)=-4\pi A_{0}(r\rightarrow\infty)\) describes the difference in the electric potential between the infinity \(r\rightarrow\infty\) and the horizon \(r=r_{g}\) (see footnote 2), and the mass of the total system is given by
Footnote 2: If we choose another gauge condition for \(A_{0}(r)\) such that \(A_{0}(r_{g})\neq 0\), the variation of the Hamiltonian at \(r=r_{g}\), \(\delta\mathcal{H}_{H}\), includes a term proportional to \(\delta Q\) as well as the right-hand side of Eq. (128). However, the same term proportional to \(\delta Q\) will also appear in the variation of the Hamiltonian in the limit of \(r\rightarrow\infty\), \(\delta\mathcal{H}_{\infty}\), as well as the terms in Eq. (129). Thus, in the conservation of the Hamiltonian Eq. (124) this gauge-dependent term proportional to \(\delta Q\) cancels, leading to the first law of BH thermodynamics as Eq. (131), as expected.
\[M_{\rm H}=M_{0}\left(1+\frac{4\pi GQ^{2}}{r_{g}^{2}}-\frac{\Lambda r_{g}^{2}}{3 }\right), \tag{130}\]
which coincides with the total ADM mass of the BH spacetime. The conservation of the Hamiltonian \(\mathcal{H}=0\), Eq. (124) yields the first law of the thermodynamics for the electrically charged BH
\[T_{\rm H(H)}\delta S_{\rm H}=\delta M_{H}-\Phi_{H}\delta Q. \tag{131}\]
### The irrational coupling model with the \(U(1)\)-invariant vector field
We then consider the irrational coupling model with the minimally coupled \(U(1)\)-invariant vector field
\[G_{2}=\eta X-\frac{\Lambda}{8\pi G},\qquad G_{4}=\frac{1}{16\pi G}+\alpha(-X)^{ \frac{1}{2}},\qquad G_{3}=G_{5}=0,\qquad G_{A}=\mathcal{F}, \tag{132}\]
under which Eq. (32) reduces to
\[-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2} \sqrt{\frac{h}{f}}J^{[t}\xi^{r]} \tag{133}\] \[= r^{2}\sqrt{\frac{h}{f}}\Big{\{}-\left[\frac{1}{8\pi Gr}+\frac{A_ {0}(r)A_{0}^{\prime}(r)}{2h(r)}\right]\delta f(r)+\frac{f(r)A_{0}(r)A_{0}^{ \prime}(r)}{2h(r)^{2}}\delta h(r)-\frac{f(r)A_{0}(r)}{h(r)}\delta A_{0}^{\prime }(r)+\mathcal{J}^{r}\delta\psi(r)\Big{\}},\]
where the radial component of the Noether current associated with the shift symmetry \(\mathcal{J}^{r}\) is given by Eq. (74). We note that the model (132) corresponds to an extension of Eq. (72) with the Maxwell field, discussed in Sec. III.4. There exists the exact BH solution with the electric charge \(Q\)
\[f(r)=h(r) = 1-\frac{\Lambda}{3}r^{2}+\frac{1}{r}\left(\frac{r_{g}}{3}\left( -3+r_{g}^{2}\Lambda\right)-\frac{4\pi G}{r_{g}}\left(Q^{2}-\frac{2\alpha^{2}} {\eta}\right)\right)+\frac{4\pi G}{r^{2}}\left(Q^{2}-\frac{2\alpha^{2}}{\eta} \right),\] \[\psi^{\prime}(r) = \frac{\sqrt{2}\alpha}{r^{2}\eta\sqrt{f(r)}},\qquad A_{0}(r)=Q \left(\frac{1}{r}-\frac{1}{r_{g}}\right), \tag{134}\]
which satisfies \(A_{0}(r_{g})=0\).
The variation of the Hamiltonian on the horizon \(r=r_{g}\) yields
\[\delta\mathcal{H}_{H}=T_{\rm H(H)}\delta S_{\rm H}=\frac{\delta r_{g}}{2G} \left[1-r_{g}^{2}\Lambda-\frac{4\pi G}{r_{g}^{2}}\left(Q^{2}-\frac{2\alpha^{2 }}{\eta}\right)\right], \tag{135}\]
where the Hawking temperature (29) is given by \(T_{\rm H(H)}=T_{0}\left(1-r_{g}^{2}\Lambda-\frac{4\pi G}{r_{g}^{2}}\left(Q^{2}-\frac{2\alpha^{2}}{\eta}\right)\right)\). As in the case of the Einstein-Maxwell theory, in Eq. (135), the terms proportional to the variation \(\delta Q\) vanish. We obtain the integrable relation \(\delta S_{\rm H}=\left(2\pi r_{g}/G\right)\delta r_{g}\), whose proportionality coefficient \((2\pi r_{g}/G)\) does not depend on the electric charge \(Q\); as a consequence, we obtain the area law Eq. (47) in spite of the existence of the two independent charges.
The variation of the Hamiltonian at the infinity \(r\rightarrow\infty\) yields Eq. (129), where \(\Phi_{H}=-4\pi A_{0}(r\rightarrow\infty)\) in our gauge condition describes the difference in the electric potential between the infinity \(r\rightarrow\infty\) and the horizon \(r=r_{g}\), and the mass of the system is given by
\[M_{\rm H}=M_{0}\left(1+\frac{4\pi G}{r_{g}^{2}}\left(Q^{2}-\frac{2\alpha^{2}} {\eta}\right)-\frac{\Lambda r_{g}^{2}}{3}\right), \tag{136}\]
which coincides with the total ADM mass of the BH spacetime. The conservation of the Hamiltonian (124) yields the first law of the thermodynamics for the charged BH (131) as in the Einstein-Maxwell theory.
### The reflection- and shift-symmetric model with the \(U(1)\)-invariant vector field
Finally, we consider the general reflection- and shift-symmetric class of the Horndeski theories with the minimally coupled \(U(1)\)-invariant vector field
\[G_{2} = g_{2}(X),\qquad G_{4}=\frac{1}{16\pi G}+g_{4}(X)\qquad G_{3}=G_{5}= 0,\qquad G_{A}=\mathcal{F}, \tag{137}\]
where \(g_{2}(X)\) and \(g_{4}(X)\) are general functions of the kinetic term \(X\), under which Eq. (32) reduces to
\[-\delta\left(r^{2}\sqrt{\frac{h}{f}}K^{[tr]}_{(\xi)}\right)-r^{2 }\sqrt{\frac{h}{f}}J^{[t}\xi^{r]} \tag{138}\] \[= r^{2}\sqrt{\frac{h}{f}}\Big{\{}-\Big{[}\frac{1}{8\pi Gr}+\frac{A _{0}(r)A_{0}^{\prime}(r)}{2h(r)}+2\frac{g_{4}\left(X_{0}(r)\right)+2f(r)\psi^ {\prime}(r)^{2}g_{4,X}\left(X_{0}(r)\right)-f(r)^{2}\psi^{\prime}(r)^{4}g_{4, XX}\left(X_{0}(r)\right)}{r}\Big{]}\delta f(r)\] \[+\frac{f(r)A_{0}(r)A_{0}^{\prime}(r)}{2h(r)^{2}}\delta h(r)-\frac {f(r)A_{0}(r)}{h(r)}\delta A_{0}^{\prime}(r)+\mathcal{J}^{r}\delta\psi(r)\] \[+\frac{4f(r)^{2}\psi^{\prime}(r)}{r}\left[-g_{4,X}\left(X_{0}(r) \right)+f(r)\psi^{\prime}(r)^{2}g_{4,XX}\left(X_{0}(r)\right)\right]\delta\psi^ {\prime}(r)\Big{\}},\]
where \(X_{0}(r):=-\left(f(r)/2\right)\psi^{\prime}(r)^{2}\) is the background value of the kinetic term \(X\) and the nontrivial radial component of the Noether current associated with the shift symmetry is given by
\[\mathcal{J}^{r} = -f(r)\psi^{\prime}(r)g_{2,X}\left(X_{0}(r)\right)+\frac{2f(r)}{r^{ 2}h(r)}\left[\left(-1+f(r)\right)h(r)+rf(r)h^{\prime}(r)\right]g_{4,X}\left(X_ {0}(r)\right) \tag{139}\] \[-\frac{2f(r)^{3}\left(h(r)+rh(r)^{\prime}\right)}{r^{2}h(r)}g_{4, XX}\left(X_{0}(r)\right).\]
We note that the model (132) corresponds to a particular case of the general model (137). Requiring that the background solution satisfies \(f(r)=h(r)\), the equation of motion for \(A_{0}(r)\) can be analytically integrated as
\[A_{0}(r)=Q\left(\frac{1}{r}-\frac{1}{r_{g}}\right), \tag{140}\]
where the integration constant \(Q\) represents the electric charge and the remaining integration constant is chosen to satisfy the gauge condition \(A_{0}(r_{g})=0\). We assume the existence of the charged BHs that can be expanded near the event horizon \(r=r_{g}\) as
\[f(r)=h(r) = h_{1}(r_{g},Q)(r-r_{g})+h_{2}(r_{g},Q)(r-r_{g})^{2}+\mathcal{O} \left[(r-r_{g})^{3}\right],\] \[\psi(r) = \psi_{1/2}(r_{g},Q)\sqrt{r-r_{g}}+\psi_{3/2}(r_{g},Q)(r-r_{g})^{ \frac{3}{2}}+\mathcal{O}\left[(r-r_{g})^{\frac{5}{2}}\right], \tag{141}\]
where the coefficients \(h_{i}(r_{g},Q)\) (\(i=1,2,\cdots\)) and \(\psi_{j}(r_{g},Q)\) (\(j=1/2,3/2,\cdots\)) are in general the functions of \(r_{g}\) and \(Q\), so that \(X\) takes a nonzero constant value at the horizon
\[X_{0}(r)=X_{0,0}+\mathcal{O}[r-r_{g}]:=-\frac{1}{8}h_{1}\psi_{1/2}^{2}+ \mathcal{O}[r-r_{g}]. \tag{142}\]
We require that the background solution satisfies \(\mathcal{J}^{r}=0\), so that the norm of the Noether current \(\mathcal{J}^{\mu}\mathcal{J}_{\mu}=(\mathcal{J}^{r})^{2}/h(r)\) remains finite in the horizon limit \(r\to r_{g}\), and then obtain at the leading order
\[2r_{g}^{2}g_{2,X}(X_{0,0})+4\left(1-r_{g}h_{1}\right)g_{4,X}(X_{0,0})+r_{g}h_ {1}^{2}\psi_{1/2}^{2}g_{4,XX}(X_{0,0})=0. \tag{143}\]
The variation of the Hamiltonian on the horizon \(r=r_{g}\) yields
\[\delta\mathcal{H}_{H}=T_{\text{H(H)}}\delta S_{\text{H}}\ =\ \frac{r_{g}h_{1}}{2G}\left[1+16\pi Gg_{4}(X_{0,0})+4\pi Gh_{1}\psi_{1/2}^{2}g_{4,X}(X_{0,0})\right]\delta r_{g}, \tag{144}\]
where the Hawking temperature (29) is given by \(T_{\text{H(H)}}=h_{1}/(4\pi)\), and hence
\[\delta S_{\text{H}}=\frac{2\pi r_{g}}{G}\left[1+16\pi Gg_{4}(X_{0,0})+4\pi Gh_{1}\psi_{1/2}^{2}g_{4,X}(X_{0,0})\right]\delta r_{g}. \tag{145}\]
Since the proportionality coefficient in Eq. (145) can depend on \(Q\) in general, the differential (145) may not be integrable [81]. However, since
\[\frac{\partial}{\partial Q}\left[\frac{\delta S_{\text{H}}}{\delta r_{g}} \right]=-32\pi^{2}r_{g}\left[g_{4,X}(X_{0,0})+2X_{0,0}g_{4,XX}(X_{0,0})\right] \frac{\partial X_{0,0}}{\partial Q}, \tag{146}\]
there are two cases where the differential of the entropy is integrable. The first case to satisfy the integrability condition \(\frac{\partial}{\partial Q}\left[\delta S_{\text{H}}/\delta r_{g}\right]=0\) is that
\[g_{4,X}(X_{0,0})+2X_{0,0}g_{4,XX}(X_{0,0})=0. \tag{147}\]
The second case to satisfy \(\frac{\partial}{\partial Q}\left[\delta S_{\text{H}}/\delta r_{g}\right]=0\) is given by
\[\frac{\partial X_{0,0}}{\partial Q}=0, \tag{148}\]
namely, the kinetic term evaluated on the horizon \(r=r_{g}\) does not depend on \(Q\). The condition (148) is essentially an extension of the result found for a particular choice of the \(g_{4}(X)\) function, namely \(g_{4}(X)=c^{\prime}X\) with \(c^{\prime}\) being a constant, discussed in Ref. [81]. We note that the model discussed in Subsection VI.3 with \(g_{4}(X)=\alpha\sqrt{-X}\) satisfies both the conditions (147) and (148), since from Eq. (134) we find that \(X=-\frac{\alpha^{2}}{\eta^{2}r^{4}}\) does not depend on \(Q\).
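As a hedged side check (ours, not part of the paper), both statements can be verified symbolically: the choice \(g_{4}(X)=\alpha\sqrt{-X}\) satisfies Eq. (147), and the kinetic term evaluated on the background (134) is independent of \(Q\).

```python
# Hedged sketch (ours): (i) g_4(X) = alpha*sqrt(-X) satisfies Eq. (147);
# (ii) X = -(f/2) psi'^2 on the background (134) is Q-independent.
import sympy as sp

alpha = sp.symbols('alpha', positive=True)

# (i) integrability condition (147)
X = sp.symbols('X', negative=True)
g4 = alpha*sp.sqrt(-X)
print(sp.simplify(sp.diff(g4, X) + 2*X*sp.diff(g4, X, 2)))   # -> 0

# (ii) f may depend on Q, but it cancels out of the kinetic term
r, eta, f = sp.symbols('r eta f', positive=True)
psi_prime = sp.sqrt(2)*alpha/(r**2*eta*sp.sqrt(f))           # psi'(r) of Eq. (134)
print(sp.simplify(-f*psi_prime**2/2))                        # -> -alpha**2/(eta**2*r**4)
```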
## VII Summary and conclusions
We have investigated thermodynamics of static and spherically symmetric BHs in the Horndeski theories. Although the Wald entropy formula has been useful for computing the BH entropy in the covariant gravitational theories which contain the dependence only on the Riemann tensors as the higher-derivative terms, this may not be directly applicable to the Horndeski theories because of the presence of the derivative interactions and the nonminimal derivative couplings of the scalar field to the spacetime curvature tensors. The terms which contain the spacetime curvature tensors may be eliminated with use of the properties of the Riemann tensor, and the apparent dependence of the action on the spacetime curvatures may be modified before and after a partial integration. Thus, following the original formulation by Iyer and Wald, we have employed the Noether charge potential associated with the diffeomorphism invariance. The variation of the Noether charge potential on the boundaries is related to the variation of the Hamiltonian. The variations of the Hamiltonian on the BH event horizon and at the spatial infinity, respectively, give rise to the differentials of the entropy of the BH and the total mass of the system, and the conservation of the total Hamiltonian leads to the first law of the BH thermodynamics. Our formulation could be applied to the whole of the Horndeski theories including the EsGB theories and the shift-symmetric theories which provide the stealth Schwarzschild BH solutions with the linearly time-dependent scalar field. In the case of the EsGB theories, our formulation has recovered the standard Wald entropy formula, although the description of the EsGB theory in the context of the Horndeski theories appears to be different from the original action by the difference in the total derivative terms.
We have divided our analysis into two parts. The first part is about the static and spherically symmetric BH solutions with the static scalar field in the Horndeski theories which may not be shift symmetric. The second part is about those with the linearly time-dependent scalar field in the shift-symmetric Horndeski theories. In the latter case, in order to satisfy the radial-temporal component of the gravitational equations, the radial component of the Noether current associated with the shift symmetry has to vanish. Taking this into consideration, we showed that the variation of the Noether charge potential associated with the diffeomorphism invariance does not depend on time, even if the scalar field has a linear time dependence. This reflects the fact that in such static and spherically symmetric BH solutions there is no radial heat flux onto the BH horizon.
The results in the former part are summarized in Table 1.
Besides GR and the conventional ST theory with the trivial scalar field, we evaluated the BH entropy and the total mass of the system for the static and spherically symmetric BHs with nontrivial profile of the scalar field in the shift-symmetric EsGB theory and in the shift-symmetric theory where the function \(G_{4}(X)\) contains the term proportional to \(\sqrt{-X}\). In both cases, we showed that the BH entropy was given by the area law despite the existence of the nontrivial profile of the scalar field.
The results in the latter part are summarized in Table 2. We have studied the BH entropy and the mass in the stealth Schwarzschild solution and the Schwarzschild-(A)dS solution with the linearly time dependent scalar field. In both cases, we have found that the BH entropy does not obey the area law and the total mass of the system does not coincide with the BH mass from the metric.
In Theory V and Theory VI in Table 2, there exists a trivial Schwarzschild solution without scalar field. Then
\begin{table}
Table 1. Summary of the temperature \(T_{\mathsf{H}}\), mass \(M\), and entropy \(S\) of the static and spherically symmetric BH solutions with a static scalar field, covering (I) GR without and with \(\Lambda\) (with \(T_{0}:=\frac{1}{4\pi r_{g}}\), \(M_{0}:=\frac{r_{g}}{2G}\), \(S_{0}:=\frac{\pi r_{g}^{2}}{G}\)), (II) the conventional scalar-tensor theories, and (III) the non-shift-symmetric EsGB theory with an asymptotically-flat hairy BH.
\end{table}
we have discussed the thermodynamic stability of the stealth Schwarzschild BHs. We have shown that their stability depends on the sign of the nonminimal derivative coupling to the spacetime curvature. In the case of the Schwarzschild-dS BH, we have shown that the Horndeski Schwarzschild-dS BHs are always thermodynamically unstable and transit to the GR Schwarzschild-dS BH. In the case of the Schwarzschild-AdS BHs, we have found that the thermodynamical phase diagram becomes more complicated than the previous case, because of the existence of the Hawking-Page phase transition, and crucially depends on the ratio of the (effective) cosmological constants, i.e., the ratio of the Horndeski AdS radius \(\bar{\ell}\) and the GR AdS radius \(\ell\), where we always have \(\bar{\ell}<\ell\). We have shown that in the case that \(\bar{\ell}\) is not much less than \(\ell\), the Horndeski Schwarzschild-AdS BH is always thermodynamically unstable and decays into either the GR Schwarzschild-AdS BH or the AdS spacetime filled with thermal radiation. On the other hand, in the case that the ratio of \(\bar{\ell}\) to \(\ell\) is smaller than a critical value, there is a certain range of the BH mass where the Horndeski Schwarzschild-AdS BH is thermodynamically more stable than the GR Schwarzschild-AdS BH, while for the BH mass larger than this range the Horndeski Schwarzschild-AdS BH decays into either the GR Schwarzschild-AdS BH or the AdS spacetime with thermal radiation.
While BH solutions discussed so far contain only one independent charge, i.e., the mass or equivalently the horizon radius, in Section VI we have briefly discussed thermodynamics of the BHs with two independent charges in the Horndeski theories. More concretely, we have focused on the Horndeski theories minimally coupled to the \(U(1)\)-invariant vector field, where BH solutions contain the two independent charges, the mass and the electric charge. By extending the general formulation presented in Section II, we have derived the Noether charge potential associated with the diffeomorphism invariance, including the contribution of the \(U(1)\)-invariant vector field with the nonlinear kinetic term. As a concrete example of charged BH solutions in the Horndeski theories, we have considered the extension of the irrational coupling model discussed in Sec. III.4 minimally coupled to the Maxwell field, and showed that in spite of the presence of the two independent charges the differential of the entropy is integrable and the ordinary area law is recovered. Finally, in the general reflection- and shift-symmetric class of the Horndeski theories with the minimally coupled \(U(1)\)-invariant vector field, we have clarified the general conditions under which the differential of the BH entropy is integrable in the presence of the two independent charges. We have shown that in the case that the kinetic term of the scalar field evaluated on the horizon does not depend on the electric charge, the differential of the BH entropy is integrable.
There would be various extensions of our present work, which include the cases of the stationary and axisymmetric BHs in the Horndeski theories and the nontrivial BHs in the healthy ST theories beyond the Horndeski theories [19; 20; 21]. We hope to come back to these cases in our future work.
###### Acknowledgements.
M.M. was supported by the Portuguese national fund through the Fundacao para a Ciencia e a Tecnologia in the scope of the framework of the Decree-Law 57/2016 of August 29, changed by Law 57/2017 of July 19, and the Centro de Astrofisica e Gravitacao through the Project No. UIDB/00099/2020. K.M. would like to acknowledge the Yukawa Institute for Theoretical Physics at Kyoto University, where the present work was begun during the Visitors Program
of FY2021. He would also like to thank CENTRA/Instituto Superior Tecnico, where some of this work was performed during his stay. This work was supported in part by JSPS KAKENHI Grant Numbers JP17H06359, JP19K03857.
| We investigate thermodynamic properties of static and spherically symmetric black holes (BHs) in Horndeski theories. Because of higher-order derivative interactions and nonminimal derivative couplings of the scalar field, the standard Wald entropy formula may not be directly applicable. Therefore, following the original formulation by Iyer and Wald, we obtain the differentials of the BH entropy and the total mass of the system in the Horndeski theories, which lead to the first law of thermodynamics via the conservation of the Hamiltonian. Our formulation covers the case of the static and spherically symmetric BH solutions with the static scalar field and those with the linearly time-dependent scalar field in the shift-symmetric Horndeski theories. We then apply our results to explicit BH solutions in the Horndeski theories. In the case of the conventional scalar-tensor theories and the Einstein-scalar-Gauss-Bonnet theories, we recover the BH entropy obtained by the Wald entropy formula. In the shift-symmetric theories, in the case of the BH |
2306.06241 | Almost paratopological groups | A class of almost paratopological groups is introduced, which (1) contains
paratopological groups and Hausdorff quasitopological groups; (2) is closed
under products; (3) subgroups. Almost paratopological $T_1$ groups $G$ are
characterized by the fact that $\{(x,y)\in G^2: xy=e\}$ is closed in $G^2$. A
compact almost paratopological group is topological. A regular $\Sigma$-space
with a countable extend and a separately continuous Mal'tsev operation is
$\omega$-cellular (and ccc). A $\sigma$-compact regular almost paratopological
group is ccc. In particular, a $\sigma$-compact regular quasitopological group
is ccc. | Evgenii Reznichenko | 2023-06-09T20:27:33 | http://arxiv.org/abs/2306.06241v2 | # Almost paratopological groups
###### Abstract
A class of almost paratopological groups is introduced, which (1) contains paratopological groups and Hausdorff quasitopological groups; (2) is closed under products; (3) subgroups. Almost paratopological \(T_{1}\) groups \(G\) are characterized by the fact that \(\{(x,y)\in G^{2}\,:\,xy=e\}\) is closed in \(G^{2}\). A compact almost paratopological group is topological. A regular \(\Sigma\)-space with countable extent and a separately continuous Mal'tsev operation is \(\omega\)-cellular (and ccc). A \(\sigma\)-compact regular almost paratopological group is ccc. In particular, a \(\sigma\)-compact regular quasitopological group is ccc.
keywords: almost paratopological groups, topological groups, paratopological groups, semitopological groups, quasitopological groups, Mal'tsev spaces, ccc spaces, \(\omega\)-cellular spaces
## 1 Introduction
One of the most important concepts in mathematics is the concept of a topological group. A topological group is a group with a topology with respect to which the operations of product and inverse are continuous. Other groups with a topology are also important. A group \(G\) with a topology is called
* _paratopological_ if multiplication in \(G\) is continuous;
* _semitopological_ if multiplication in \(G\) is separately continuous;
* _quasitopological_ if \(G\) is a semitopological group and the operation taking the inverse element \(x\mapsto x^{-1}\) is continuous.
In the book [1] and the survey [9] one can find further information about the enumerated classes of groups. This article introduces the concept of an almost paratopological group, a class of semitopological groups that contains paratopological groups and Hausdorff quasitopological groups. This class of semitopological groups is closed under products and subgroups (Theorem 5). For them,
assertions are proved that were previously known for paratopological groups. A compact almost paratopological group is topological (Theorem 6). In particular, a compact paratopological group is topological [5, Lemma 6].
A mapping \(M:X^{3}\to X\) is called a _Mal'tsev operation_ if
\[M(x,y,y)=M(y,y,x)=x\]
for all \(x,y\in X\). There is a natural Mal'tsev operation on groups:
\[M_{G}(x,y,z)=xy^{-1}z\]
for \(x,y,z\in G\). On retracts of sets with the Mal'tsev operation, there is a Mal'tsev operation: if \(r:X\to X\) is a retraction, then we set \(M^{\prime}(x,y,z)=r(M(x,y,z))\) for \(x,y,z\in r(X)\). A topological space with a continuous Mal'tsev operation is called a _Mal'tsev space_. On topological groups, the Mal'tsev operation \(M_{G}\) is continuous, and on quasitopological groups \(M_{G}\) is separately continuous.
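Indeed, \(M_{G}\) satisfies the Mal'tsev identities:

\[M_{G}(x,y,y)=xy^{-1}y=x,\qquad M_{G}(y,y,x)=yy^{-1}x=x\qquad\text{for all }x,y\in G.\]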
In [6] it was proved that if \(X\) is a Lindelof \(\Sigma\) or a countably compact Tychonoff space with a separately continuous Mal'tsev operation, then \(X\) is an \(\omega\)-cellular space (in particular, \(X\) is ccc). It was also noted there that this assertion can be proved for Tychonoff \(\Sigma\)-spaces with countable extent. This article proves that if \(X\) is a regular \(\Sigma\)-space with countable extent and a separately continuous Mal'tsev operation, then \(X\) is an \(\omega\)-cellular space (Theorem 7).
The above facts imply that a regular quasitopological \(\Sigma\) group with countable extent is \(\omega\)-cellular. Based on this fact, it is proved that a Lindelof \(\Sigma\) almost paratopological group is \(\omega\)-cellular (Theorem 10). In particular, a \(\sigma\)-compact regular quasitopological or almost paratopological group is ccc.
## 2 Definitions and notation
The identity in the group \(G\) will be denoted as \(e\), the family of open neighborhoods \(e\) as \(\mathcal{N}_{e}\). Denote the multiplication in the group
\[\mathfrak{m}:G\times G\to G,\ (x,y)\mapsto xy\]
and operation taking the inverse element
\[\mathfrak{i}:G\to G,\ x\mapsto x^{-1}.\]
Let \(X\) be a topological space. Then two points \(x\) and \(y\) in \(X\) are _topologically indistinguishable_ if they have exactly the same neighborhoods; that is, \(x\in\overline{\{y\}}\) and \(y\in\overline{\{x\}}\). Points \(x\) and \(y\) in \(X\) are _topologically distinguishable_ if they are not topologically indistinguishable.
The space \(X\) has _countable extent_ if every closed discrete subset is at most countable.
A space \(X\) is _ccc_, or has the _Suslin property_, if every disjoint family of nonempty open sets in \(X\) is at most countable. Let us say that a space \(X\) is \(\omega\)_-cellular_ if for any family \(\lambda\) of \(G_{\delta}\)-subsets of \(X\) there exists a countable subfamily \(\mu\subset\lambda\) such that \(\overline{\bigcup\lambda}=\overline{\bigcup\mu}\). Clearly \(\omega\)-cellular spaces are ccc.
A space \(X\) is called a _\(\Sigma\)-space_ if there exist a \(\sigma\)-locally finite family \(\mathcal{S}\) and a cover \(\mathcal{C}\) of \(X\) by closed countably compact sets such that if \(C\in\mathcal{C}\) and \(U\supset C\) is open, then \(C\subset S\subset U\) for some \(S\in\mathcal{S}\). Lindelof \(\Sigma\)-spaces are exactly the regular continuous images of perfect inverse images of metrizable separable spaces. \(\sigma\)-countably compact spaces are \(\Sigma\)-spaces. Regular \(\sigma\)-compact spaces are Lindelof \(\Sigma\)-spaces.
**Statement 1**.: _A space \(X\) is a \(\Sigma\)-space with countable extent if and only if there are a countable family \(\mathcal{S}\) and a cover \(\mathcal{C}\) of \(X\) by closed countably compact sets such that if \(C\in\mathcal{C}\) and \(U\supset C\) is open, then \(C\subset S\subset U\) for some \(S\in\mathcal{S}\)._
Proof.: Let \(\mathcal{S}\) be a \(\sigma\)-discrete family and \(\mathcal{C}\) a cover of \(X\) by closed countably compact sets such that if \(C\in\mathcal{C}\) and \(U\supset C\) is open, then \(C\subset S\subset U\) for some \(S\in\mathcal{S}\). Since \(X\) has countable extent, then \(\mathcal{S}\) is countable. Conversely, it suffices to note that a countable family is a \(\sigma\)-discrete family.
A space is _\(\sigma\)-countably compact_ if it is the union of a countable family of countably compact subspaces.
Let us call a space _closely \(\sigma\)-countably compact_ if it is the union of a countable family of closed countably compact subspaces. Any closely \(\sigma\)-countably compact space is a \(\Sigma\)-space with countable extent.
## 3 \(R_{0}\) spaces and groups
Let \(X\) and \(Y\) be topological spaces. We say that a map \(f:X\to Y\)_preserves the topology_ (\(f\) is _topology-preserving_) if
* the mapping \(f\) is surjective, continuous, open and closed, and the set \(U\subset X\) is open if and only if \(U=f^{-1}(f(U))\) and \(f(U)\) is open.
It is easy to verify the following assertion.
**Proposition 1**.: _Let \((X,\mathcal{T}_{X})\) and \((Y,\mathcal{T}_{Y})\) be topological spaces, \(f:X\to Y\) be a surjective mapping. The following conditions are equivalent:_
1. \(f\) _preserves the topology;_
2. _mapping_ \[f^{\#}:\mathcal{T}_{Y}\to\mathcal{T}_{X},\ U\mapsto f^{-1}(U)\] _is a bijection;_
3. \(f\) _is quotient and the subspace_ \(f^{-1}(y)\) _is antidiscrete for every_ \(y\in Y\)_._
The relation '\(x\) and \(y\) are topologically indistinguishable' on \(X\) is an equivalence relation, denote this equivalence as \(\sim_{T}\): \(x\!\sim_{T}y\) if and only if \(x\in\overline{\{y\}}\) and \(y\in\overline{\{x\}}\).
The space quotient by this equivalence relation is denoted as \(X_{T_{0}}\) and \(\pi_{T_{0}}:X\to X_{T_{0}}\) is the quotient mapping. The space \(X_{T_{0}}\) is a \(T_{0}\) space and the mapping \(f=\pi_{T_{0}}:X\to Y=X_{T_{0}}\) preserves the topology. If \(X\) is a \(T_{0}\) space, then \(\pi_{T_{0}}\) is a homeomorphism. Recall the axiom of separation of \(R_{0}\) topological spaces.
The space \(X\) is \(R_{0}\), or _symmetric_, if \(x\in\overline{\{y\}}\) implies \(y\in\overline{\{x\}}\) (that is, \(x\!\sim_{T}y\)) for any \(x,y\in X\)[7].
The space \(X\) is \(R_{0}\) if and only if \(X_{T_{0}}\) is \(T_{1}\). See Section 16.2 in [7].
### Arkhangel'skii-Motorov theorem
In 1984 D.B. Motorov constructed a famous example of a compact space that is not a retract of any homogeneous compact space. This is the closure on the plane of the graph of the function \(\sin\frac{1}{x}\) with domain \((0,1]\) ([4]). A little later, A.V. Arkhangel'skii published improvements of Motorov's results [2]. We need the following two statements from their papers.
**Theorem 1** (Arkhangel'skii [2, Corollary 1]).: _Let \(X\) be a homogeneous space and \(\overline{\{x\}}\) is compact for \(x\in X\). Then \(X\) is a \(R_{0}\) space._
**Theorem 2** (Motorov).: _If \(X\) is a homogeneous compact space, then \(X\) is an \(R_{0}\) space._
### Topology-preserving group homomorphisms
**Proposition 2**.: _Let \(G\) be a semitopological group, \(H\) a group with topology, and \(\varphi:G\to H\) a surjective quotient homomorphism. Then \(\varphi\) is open and \(H\) is a semitopological group._
Proof.: Let \(U\) be an open subset of \(G\). Because
\[\varphi^{-1}(\varphi(U))=U\ker\varphi\]
and right shifts are homeomorphisms, then \(\varphi^{-1}(\varphi(U))\) and hence \(\varphi(U)\) are open.
Let \(V\) be an open subset of \(H\), \(h\in H\) and \(g\in\varphi^{-1}(h)\). Because
\[\varphi^{-1}(hV)=g\varphi^{-1}(V)\qquad\qquad\text{and}\qquad\quad\varphi^{-1 }(Vh)=\varphi^{-1}(V)g,\]
then the sets \(hV\) and \(Vh\) are open.
Propositions 1 and 2 imply the following assertion.
**Proposition 3**.: _Let \(G\) be a semitopological group, \(H\) a group with topology, and \(\varphi:G\to H\) a surjective homomorphism. The following conditions are equivalent:_
1. \(\varphi\) _preserves the topology;_
2. _the mapping_ \(\varphi\) _is quotient and_ \(\ker\varphi\) _is antidiscrete._
_If \(\varphi\) preserves the topology, then \(H\) is a semitopological group._
The following theorem is practically the same as [8, Theorem 3.1].
**Theorem 3**.: _Let \(G\) be a semitopological group, \(H=\overline{\{e\}}\cap\overline{\{e\}}^{-1}\). Then \(H\) is a normal antidiscrete subgroup, the quotient group \(G/H\) coincides with \(G_{T_{0}}\), \(G/H\) is a semitopological \(T_{0}\) group, and the quotient mapping \(G\to G/H\) is topology-preserving._
Proof.: Let \(x\in G\). Then \(x\in H\Leftrightarrow x\in\overline{\{e\}}\) and \(x^{-1}\in\overline{\{e\}}\Leftrightarrow x\in\overline{\{e\}}\) and \(e\in\overline{\{x\}}\Leftrightarrow e\sim_{T}x\). Hence \(H=\pi_{T_{0}}^{-1}(\pi_{T_{0}}(e))\).
The equivalence relation \(\sim_{T}\) is invariant under right and left shifts, so the quotient set by this equivalence relation coincides with the right (and left) cosets with respect to the normal subgroup \(H\). It remains to apply Proposition 3.
Theorems 3 and 1 imply the following assertion.
**Theorem 4**.: _Let \(G\) be a semitopological group, \(H=\overline{\{e\}}\) be compact. Then \(G\) is the \(R_{0}\) space, \(H\) is a normal closed antidiscrete subgroup, the quotient group \(G/H\) coincides with \(G_{T_{0}}\), \(G/H\) is a semitopological \(T_{1}\) group, and the quotient mapping \(G\to G/H\) is topology-preserving._
## 4 Almost paratopological groups
**Definition 1**.: A semitopological group \(G\) is called _almost paratopological_, if for any \(g\in G\) such that \(e\notin\overline{\{g\}}\) there exists a neighborhood \(U\) of \(e\) such that \(g\notin U^{2}\).
**Theorem 5**.: _Let \(G\) be a semitopological group. If any of the following conditions is satisfied, then \(G\) is almost paratopological._
1. \(G\) _is a subgroup of an almost paratopological group;_
2. \(G\) _is the product of almost paratopological groups;_
3. \(G\) _is a paratopological group;_
4. \(G\) _is a Hausdorff quasitopological group;_
5. _there exists a continuous isomorphism of_ \(G\) _onto a_ \(T_{1}\) _almost paratopological group._
Proof.: Conditions (1) and (3) are obvious.
(2) Let \(G=\prod_{\alpha\in A}G_{\alpha}\), where \(G_{\alpha}\) is an almost paratopological group for all \(\alpha\in A\). Let \(g=(g_{\alpha})_{\alpha\in A}\in G\) and \(e=(e_{\alpha})_{\alpha\in A}\notin\overline{\{g\}}\). Then \(e_{\alpha}\notin\overline{\{g_{\alpha}\}}\) for some \(\alpha\in A\). There is a neighborhood \(U_{\alpha}\) of \(e_{\alpha}\) such that \(g_{\alpha}\notin U_{\alpha}^{2}\). Let \(U\) be the inverse image of \(U_{\alpha}\) under the projection of \(G\) onto \(G_{\alpha}\). Then \(g\notin U^{2}\).
(4) Let \(g\in G\setminus\{e\}\). There is \(U\in\mathcal{N}_{e}\) for which \(U=U^{-1}\) and \(U\cap gU=\varnothing\). Then \(g\notin U^{2}\).
(5) Let \(\varphi:G\to H\) be a continuous isomorphism of the group \(G\) onto a \(T_{1}\) almost paratopological group \(H\). Let \(g\in G\setminus\{e\}\). There is a neighborhood \(V\subset H\) of \(\varphi(e)\) such that \(\varphi(g)\notin V^{2}\). Then \(g\notin U^{2}\), where \(U=\varphi^{-1}(V)\).
**Example 1**.: Let \(G\) be the group of integers with the topology consisting of the cofinite sets (together with the empty set). The group \(G\) is a quasitopological compact \(T_{1}\) group which is not almost paratopological.
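To see why \(G\) is not almost paratopological: the cofinite topology is \(T_{1}\), so \(e\notin\overline{\{g\}}\) for every \(g\neq e\). On the other hand, writing the group operation additively, for every neighborhood \(U\) of \(e\) (a cofinite set) and every \(g\in G\) the set

\[\{x\in G\,:\,x\in U\ \text{and}\ g-x\in U\}\]

is the intersection of two cofinite sets and hence nonempty, so \(g\in U+U=U^{2}\). Thus no neighborhood of \(e\) witnesses the condition of Definition 1.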
**Proposition 4**.: _Let \(G\) be a semitopological group, \(M\subset G\). Then_
\[\overline{M}=\bigcap_{U\in\mathcal{N}_{e}}MU^{-1}=\bigcap_{U\in\mathcal{N}_{e }}U^{-1}M.\]
Proof.: Let \(g\in\overline{M}\) and \(U\in\mathcal{N}_{e}\). Then \(gU\cap M\neq\varnothing\) and \(g\in MU^{-1}\). Hence \(\overline{M}\subset\bigcap_{U\in\mathcal{N}_{e}}MU^{-1}\). Let \(g\in\bigcap_{U\in\mathcal{N}_{e}}MU^{-1}\). Then \(gU\cap M\neq\varnothing\) for any \(U\in\mathcal{N}_{e}\). Hence \(\bigcap_{U\in\mathcal{N}_{e}}MU^{-1}\subset\overline{M}\). The equality \(\overline{M}=\bigcap_{U\in\mathcal{N}_{e}}U^{-1}M\) is proved similarly.
For a group \(G\) we denote
\[S_{G}=\{(x,y)\in G\times G\,:\,xy=e\},\qquad\qquad E_{G}=\bigcap_{U\in\mathcal{N}_ {e}}\overline{U^{-1}}.\]
**Proposition 5**.: _Let \(G\) be a semitopological group. Then_
1. \[\overline{\{e\}}\subset E_{G}=\bigcap_{U\in\mathcal{N}_{e}}\left(U^{-1}\right) ^{2};\]
2. _the group_ \(G\) _is almost paratopological if and only if_ \(E_{G}=\overline{\{e\}}\)_;_
3. \(\overline{S_{G}}=\mathfrak{m}^{-1}(E_{G})\)_;_
4. _the following conditions are equivalent:_ 1. \(G\) _is an almost paratopological_ \(T_{1}\) _group;_ 2. \(E_{G}=\{e\}\)_;_ 3. \(S_{G}\) _is closed in_ \(G^{2}\)_._
Proof.: Denote \(Q=\bigcap_{U\in\mathcal{N}_{e}}\left(U^{-1}\right)^{2}\).
(1). Proposition 4 implies that \(\overline{\{e\}}\subset E_{G}\subset Q\). Let \(x\in G\setminus E_{G}\). Then \(xU\cap U^{-1}=\varnothing\) for some \(U\in\mathcal{N}_{e}\). Hence \(x\notin(U^{-1})^{2}\supset Q\). We get \(Q\subset E_{G}\).
(2). Assume that \(G\) is an almost paratopological group. It follows from (1) that \(\overline{\{e\}}\subset E_{G}\). Let \(g\in G\setminus\overline{\{e\}}\). Then \(e\notin\overline{\{g^{-1}\}}\). Since \(G\) is almost paratopological, it follows that \(g^{-1}\notin U^{2}\) for some \(U\in\mathcal{N}_{e}\). Then \(g\notin(U^{-1})^{2}\supset E_{G}\). Hence \(E_{G}\subset\overline{\{e\}}\).
Suppose \(E_{G}=\overline{\{e\}}\). Let \(g\in G\) and \(e\notin\overline{\{g\}}\). Then \(g^{-1}\notin\overline{\{e\}}=E_{G}\). It follows from (1) that \(g^{-1}\notin(U^{-1})^{2}\) for some \(U\in\mathcal{N}_{e}\). We get, \(g\notin U^{2}\).
(3). Let \((x,y)\in G\times G\). Then \((x,y)\in\overline{S_{G}}\Leftrightarrow(Ux\times Uy)\cap S_{G}\neq\varnothing\) for all \(U\in\mathcal{N}_{e}\Leftrightarrow e\in UxUy\) for all \(U\in\mathcal{N}_{e}\Leftrightarrow e\in UUxy\) for all \(U\in\mathcal{N}_{e}\Leftrightarrow xy\in(U^{-1})^{2}\) for all \(U\in\mathcal{N}_{e}\Leftrightarrow xy\in Q\). It follows from (1) that \((x,y)\in\overline{S_{G}}\Leftrightarrow xy\in E_{G}\).
(4). From (2) follows (a)\(\Leftrightarrow\)(b). Since \(S_{G}=\mathfrak{m}^{-1}(e)\), then (3) implies (b)\(\Leftrightarrow\)(c).
For a group with the topology \((G,\mathcal{T})\), we denote
\[\mathcal{T}_{Sym}=\{U\cap V^{-1}\,:\,U,V\in\mathcal{T}\}\]
and \(\operatorname{Sym}G=(G,\mathcal{T}_{Sym})\). Obviously, \(\mathcal{T}_{Sym}\) is a topology that is stronger than \(\mathcal{T}\).
**Proposition 6**.: _Let \(G\) be a group with a topology. Then_
1. _if_ \(G\) _is a semitopological group, then_ \(\operatorname{Sym}G\) _is a quasitopological;_
2. _if_ \(G\) _is a paratopological group, then_ \(\operatorname{Sym}G\) _is topological;_
3. \(\operatorname{Sym}G\) _is homeomorphic to_ \(S_{G}\)_;_
4. _if_ \(G\) _is an almost paratopological_ \(T_{1}\) _group, then_ 1. \(\operatorname{Sym}G\) _embeds closed in_ \(G^{2}\)_;_ 2. \(\operatorname{Sym}G\) _is a Hausdorff quasitopological group._
Proof.: Let \(\mathcal{T}\) be the topology of \(G\) and \(\mathcal{T}_{Sym}\) be the topology of \(\operatorname{Sym}G\).
(1). If \(g\in G\), \(U,V\in\mathcal{T}\) and \(W=U\cap V^{-1}\in\mathcal{T}_{Sym}\) then
\[gW =gU\cap(Vg^{-1})^{-1}\in\mathcal{T}_{Sym},\] \[Wg =Ug\cap(g^{-1}V)^{-1}\in\mathcal{T}_{Sym},\] \[W^{-1} =V\cap U^{-1}\in\mathcal{T}_{Sym}.\]
Hence \(\operatorname{Sym}G\) is quasitopological.
(2). Follows from (1).
(3). Let's put
\[\mathfrak{j}:\operatorname{Sym}G\to S_{G},\ x\mapsto(x,x^{-1}).\]
The mapping \(\mathfrak{j}\) is a homeomorphism since
\[\mathfrak{j}^{-1}(S_{G}\cap(U\times V))=U\cap V^{-1}.\]
for \(U,V\in\mathcal{T}\).
(4). From (3) and Proposition 5(4) follows (a). It follows from (1) that \(\operatorname{Sym}G\) is a quasitopological group. Let us show that \(\operatorname{Sym}G\) is a Hausdorff space. Let \(e\neq g\in G\). Since \(G\) is an almost paratopological \(T_{1}\) group, then \(g\notin U^{2}\) for some \(U\in\mathcal{N}_{e}\). Let \(S=U\cap U^{-1}\). Then \(S\) is an open neighborhood of the identity in \(\operatorname{Sym}G\) and \(eS\cap gS=\varnothing\).
**Proposition 7**.: _Let \(G\) and \(H\) be groups with topology and \(\varphi:G\to H\) be a topology-preserving homomorphism. Let \(\mathcal{P}\) be one of the enumerated classes of groups: semitopological; quasitopological; paratopological; almost paratopological; compact. Then \(G\in\mathcal{P}\) if and only if \(H\in\mathcal{P}\)._
**Theorem 6**.: _A compact almost paratopological group is a topological group._
Proof.: Let \(Q\) be a compact almost paratopological group. We set \(H=\overline{\{e\}}\). Theorem 4 implies that \(H\) is a normal closed antidiscrete subgroup, the quotient group \(G=Q/H\) is a \(T_{1}\) compact semitopological group, and the quotient mapping \(\varphi:Q\to G\) is a topology-preserving homomorphism. The Proposition 7 implies that \(G\) is a compact almost paratopological \(T_{1}\) group and \(G\) is a topological group if and only if \(Q\) is a topological group. Therefore, to prove the theorem, it suffices to prove that \(G\) is a topological group.
It follows from the Proposition 6(1) that \(\operatorname{Sym}G\) is quasitopological. The Proposition 6(4)(a) implies that \(\operatorname{Sym}G\) embeds in \(G^{2}\) in a closed manner, and hence the group \(\operatorname{Sym}G\) is compact. It follows from the Proposition 6(4)(b) that \(\operatorname{Sym}G\) is Hausdorff. Hence \(\operatorname{Sym}G\) is a compact Hausdorff semitopological group. It follows from the Ellis theorem [3, Theorem 2] that \(\operatorname{Sym}G\) is a topological group.
Let \(\mathcal{T}\) be the topology of \(G\) and \(\mathcal{T}_{Sym}\) be the topology of \(\operatorname{Sym}G\).
Let us show that \(G\) is a Hausdorff space. Let \(e\neq g\in G\). Since \(G\) is an almost paratopological \(T_{1}\) group, it follows from Proposition 5(4) that \(E_{G}=\{e\}\) and hence \(g\notin\overline{U^{-1}}\) for some \(U\in\mathcal{N}_{e}\).
**Claim**.: \(e\in\operatorname{Int}\overline{U^{-1}}\)_._
Proof.: Since \(\operatorname{Sym}G\) is a topological group and \(U^{-1}\in\mathcal{T}_{Sym}\), then \(S^{2}\subset U^{-1}\) for some \(S\in\mathcal{T}_{Sym}\) for which \(e\in S=S^{-1}\). Since \(\operatorname{Sym}G\) is compact, then \(G=\bigcup_{i=1}^{n}x_{i}S\) for some \(x_{1},x_{2},...,x_{n}\in G\). A topological space cannot be the union of a finite number of nowhere dense sets, so \(\operatorname{Int}\overline{x_{i}S}\neq\varnothing\) for some \(i\). Hence \(\operatorname{Int}\overline{S}\neq\varnothing\). Let \(q\in S\cap\operatorname{Int}\overline{S}\). Then \(e\in\operatorname{Int}\overline{q^{-1}S}\subset\overline{S^{2}}\subset \overline{U^{-1}}\). Hence \(e\in\operatorname{Int}\overline{U^{-1}}\).
Set \(U_{g}=G\setminus\overline{U^{-1}}\) and \(U_{e}=\operatorname{Int}\overline{U^{-1}}\). Then \(g\in U_{g}\in\mathcal{T}\), \(e\in U_{e}\in\mathcal{T}\) and \(U_{g}\cap U_{e}=\varnothing\). Thus, the space \(G\) is Hausdorff. The group \(G\) is a Hausdorff compact semitopological group. It follows from Ellis [3, Theorem 2] that \(G\) is a topological group.
Theorems 5 and 6 imply the following assertion.
**Corollary 1** ([5, Lemma 6]).: _A compact paratopological group is a topological group._
## 5 Spaces with separately continuous Mal'tsev operation
For the space \(X\) we define the following property:
1. Let \(\{x_{\alpha}\,:\,\alpha<\omega_{1}\}\subset X\) and for \(\alpha<\omega_{1}\) let \(\mathcal{F}_{\alpha}\) be an at most countable family of closed subsets of \(X\). Then there exists \(\beta<\omega_{1}\) for which the following condition is satisfied: (*) there exists \(y\in\overline{\{x_{\alpha}\,:\,\alpha<\beta\}}\) such that if \(\gamma<\beta\), \(F\in\mathcal{F}_{\gamma}\) and \(x_{\beta}\in F\), then \(y\in F\).
The (U\({}_{\text{s}}\)) property is a strengthening of the (P) property from [6, 10] in the class of Tychonoff spaces. The (U\({}_{\text{s}}\)) property can be used for regular spaces.
**Proposition 8**.: _Let \(X\) be a regular \(\Sigma\)-space with countable extend. Then (U\({}_{\text{s}}\)) is true for \(X\)._
Proof.: Let \(\mathcal{S}\) and \(\mathcal{C}\) be as in Statement 1. We can assume that the family \(\mathcal{S}\) is closed under finite intersections. Let \(\{x_{\alpha}\,:\,\alpha<\omega_{1}\}\subset X\) and for \(\alpha<\omega_{1}\) let \(\mathcal{F}_{\alpha}\) be an at most countable family of closed subsets of \(X\). Denote
\[\mathcal{F}_{\gamma}^{*} =\{\bigcap\mathcal{F}\,:\,\mathcal{F}\subset\bigcup_{\alpha<\gamma}\mathcal{F}_{\alpha},\ |\mathcal{F}|<\omega,\ \bigcap\mathcal{F}\neq\varnothing\},\] \[X_{\gamma} =\{x_{\alpha}\,:\,\alpha<\gamma\}\]
for \(\gamma\leq\omega_{1}\). By induction on \(n\in\omega\) we construct an increasing sequence of countable ordinals \((\beta_{n})_{n\in\omega}\) such that for \(n>0\) the following condition is satisfied:
1. if \(x_{\gamma}\in S\cap F\) for \(\gamma<\omega_{1}\), \(S\in\mathcal{S}\) and \(F\in\mathcal{F}_{\beta_{n-1}}^{*}\), then there exists \(y\in\overline{X_{\beta_{n}}}\) such that \(y\in S\cap F\).
Let \(\beta_{0}=\omega\). Suppose that \(n>0\) and \(\beta_{0}<\beta_{1}<...<\beta_{n-1}<\omega_{1}\) are constructed. For \(S\in\mathcal{S}\) and \(F\in\mathcal{F}_{\beta_{n-1}}^{*}\) we denote
\[A_{S,F}=\{\alpha<\omega_{1}\,:\,x_{\alpha}\in S\cap F\}.\]
If \(A_{S,F}\neq\varnothing\), then we denote \(\alpha_{S,F}=\min A_{S,F}\). Let us put
\[\beta_{n}=\sup\{\alpha_{S,F}\,:\,S\in\mathcal{S},F\in\mathcal{F}_{\beta_{n-1}} ^{*}\text{ and }A_{S,F}\neq\varnothing\}+1.\]
The sequence \((\beta_{n})_{n\in\omega}\) is constructed. Let \(\beta=\sup\{\beta_{n}\,:\,n\in\omega\}\). Let us check \((*)\) in \(\mathrm{(U_{s})}\) definition. There is \(C\in\mathcal{C}\) for which \(x_{\beta}\in C\). Let us put
\[\mathcal{S}^{\prime} =\{S\in\mathcal{S}\,:\,C\subset S\}=\{S_{n}^{\prime}\,:\,n\in \omega\},\] \[\mathcal{F}^{\prime} =\{F\in\mathcal{F}_{\beta}^{*}\,:\,x_{\beta}\in F\}=\{F_{n}^{ \prime}\,:\,n\in\omega\}.\]
Let \(n\in\omega\). Let us put
\[S_{n} =\bigcap_{i=0}^{n}S_{i}^{\prime}, F_{n} =\bigcap_{i=0}^{n}F_{i}^{\prime},\] \[\alpha_{n} =\alpha_{S_{n},F_{n}}, y_{n} =x_{\alpha_{n}}.\]
Since \(\mathcal{F}_{\beta}^{*}=\bigcup_{m\in\omega}\mathcal{F}_{\beta_{m}}^{*}\), then \(F_{n}\in\mathcal{F}_{\beta_{m}}^{*}\) for some \(m\in\omega\). Since \(\beta\in A_{S_{n},F_{n}}\neq\varnothing\), then \(\alpha_{n}=\alpha_{S_{n},F_{n}}\leq\beta_{m+1}<\beta\). Hence \(y_{n}\in X_{\beta}\).
It follows from the definition of the families \(\mathcal{S}\) and \(\mathcal{C}\) that the sequence \((y_{n})_{n\in\omega}\) accumulates to some point \(y\in C\cap\bigcap_{n\in\omega}F_{n}\). Since \((y_{n})_{n\in\omega}\subset X_{\beta}\), then \(y\in\overline{X_{\beta}}\).
Let \(F\in\mathcal{F}_{\gamma}\) for \(\gamma<\beta\) and \(x_{\beta}\in F\). Then \(F\in\mathcal{F}^{\prime}\) and \(F=F_{m}^{\prime}\supset F_{m}\) for some \(m\in\omega\). Hence,
\[y\in\bigcap_{n\in\omega}F_{n}\subset F_{m}\subset F.\]
**Proposition 9**.: _Let \(X\) be a regular space with a separately continuous Mal'tsev operation and let \(X\) satisfy \(\mathrm{(U_{s})}\). Then \(X\) is an \(\omega\)-cellular space._
Proof.: Let us assume the opposite. Then there is a family \(\{K_{\alpha}^{\prime}\,:\,\alpha<\omega_{1}\}\) of non-empty sets of type \(G_{\delta}\), such that
\[K_{\beta}^{\prime}\not\subset\overline{\bigcup_{\alpha<\beta}K_{\alpha}^{ \prime}}\]
for \(\beta<\omega_{1}\). Let us choose
\[x_{\beta}\in K_{\beta}^{\prime}\setminus\overline{\bigcup_{\alpha<\beta}K_{ \alpha}^{\prime}}\]
and a sequence \((U_{\beta,n})_{n\in\omega}\) of open subsets of \(X\), such that
\[x_{\beta}\in U_{\beta,n+1}\subset\overline{U_{\beta,n+1}}\subset U_{\beta,n}\]
for \(n\in\omega\). Then
\[x_{\beta}\in K_{\beta}=\bigcap_{n\in\omega}U_{\beta,n}\subset K^{\prime}_{\beta}\]
and
\[x_{\beta}\notin\overline{\bigcup_{\alpha<\beta}K_{\alpha}}. \tag{1}\]
For \(\alpha,\gamma<\omega_{1}\) we put
\[h_{\alpha,\gamma}:X\to X,\ x\mapsto M(x,x_{\alpha},x_{\gamma}).\]
Note that
\[h_{\alpha,\gamma}(x_{\alpha})=x_{\gamma}, h_{\alpha,\alpha}=\operatorname{id}_{X}.\]
Let us put
\[\mathcal{P}_{\beta} =\{h_{\alpha,\gamma}^{-1}(X\setminus U_{\gamma,n})\,:\,\alpha, \gamma<\beta\text{ and }n<\omega\},\] \[\mathcal{F}_{\beta} =\mathcal{P}_{\beta+1}.\]
Since the condition (\(\operatorname{U_{s}}\)) is satisfied for \(X\), then there exists \(\beta<\omega_{1}\) and
\[y\in\overline{\{x_{\alpha}\,:\,\alpha<\beta\}},\]
such that if \(\gamma<\beta\), \(F\in\mathcal{F}_{\gamma}\) and \(x_{\beta}\in F\), then \(y\in F\). Then
\[\text{if }x_{\beta}\in F\in\mathcal{P}_{\beta},\text{ then }y\in F. \tag{2}\]
Let us put
\[y_{\gamma}=M(x_{\beta},y,x_{\gamma})\]
for \(\gamma<\beta\).
**Claim.**\(y_{\gamma}\in K_{\gamma}\).
Proof.: Suppose \(y_{\gamma}\notin K_{\gamma}\). Then \(y_{\gamma}\notin U_{\gamma,n}\) for some \(n\in\omega\). Let us put
\[U_{1} =\{x\in X\,:\,M(y,x,x_{\gamma})\in U_{\gamma,n+1}\},\] \[U_{2} =\{x\in X\,:\,M(x_{\beta},x,x_{\gamma})\in X\setminus\overline{U _{\gamma,n+1}}\}.\]
The sets \(U_{1}\) and \(U_{2}\) are open. Because
\[x_{\gamma}=M(y,y,x_{\gamma})\in U_{\gamma,n+1},\] \[y_{\gamma}=M(x_{\beta},y,x_{\gamma})\notin U_{\gamma,n}\supset \overline{U_{\gamma,n+1}},\]
then \(y\in U_{1}\cap U_{2}\). The set \(U_{1}\cap U_{2}\) is an open neighborhood of \(y\). Since \(y\in\overline{\{x_{\alpha}\,:\,\alpha<\beta\}}\), then \(x_{\alpha}\in U_{1}\cap U_{2}\) for some \(\alpha<\beta\). Then
\[h_{\alpha,\gamma}(y) =M(y,x_{\alpha},x_{\gamma})\in U_{\gamma,n+1},\] \[h_{\alpha,\gamma}(x_{\beta}) =M(x_{\beta},x_{\alpha},x_{\gamma})\in X\setminus\overline{U_{ \gamma,n+1}}\subset X\setminus U_{\gamma,n+1}.\]
Hence
\[x_{\beta} \in h_{\alpha,\gamma}^{-1}(X\setminus U_{\gamma,n+1})=F\in\mathcal{P }_{\beta},\] \[y \in h_{\alpha,\gamma}^{-1}(U_{\gamma,n+1})=X\setminus F.\]
Contradiction with (2).
Let us put
\[h:X\to X,\ x\mapsto M(x_{\beta},y,x).\]
Since \(h\) is continuous, \(y_{\gamma}=h(x_{\gamma})\) for \(\gamma<\beta\) and \(y\in\overline{\{x_{\gamma}\,:\,\gamma<\beta\}}\), then
\[x_{\beta}=M(x_{\beta},y,y)=h(y)\in\overline{h(\{x_{\gamma}\,:\,\gamma<\beta\} )}=\overline{\{y_{\gamma}\,:\,\gamma<\beta\}}.\]
It follows from the claim that
\[x_{\beta}\in\overline{\bigcup_{\gamma<\beta}K_{\gamma}}.\]
Contradiction with (1).
Propositions 8 and 9 imply the following assertion.
**Theorem 7**.: _Let \(X\) be a regular \(\Sigma\)-space with countable extent and a separately continuous Mal'tsev operation. Then \(X\) is an \(\omega\)-cellular space._
**Corollary 2**.: _Let \(X\) be a regular (closely \(\sigma\)-)countably compact space with separately continuous Mal'tsev operation. Then \(X\) is an \(\omega\)-cellular space._
## 6 ccc in groups
Since the Mal'tsev operation \(M_{G}(x,y,z)=xy^{-1}z\) on a quasitopological group is separately continuous, then Theorem 7 implies the following assertion.
**Theorem 8**.: _Let \(G\) be a regular \(\Sigma\) quasitopological group with countable extent. Then \(G\) is an \(\omega\)-cellular space._
**Corollary 3**.: _Let \(G\) be a regular quasitopological group. If any of the following conditions is satisfied for \(G\), then \(G\) is an \(\omega\)-cellular space:_
1. \(G\) _is closely_ \(\sigma\)_-countably compact;_
2. \(G\) _is a Lindelof_ \(\Sigma\)_-space;_
3. \(G\) _is_ \(\sigma\)_-compact._
**Theorem 9**.: _Let \(G\) be a regular almost paratopological group and \(G^{2}\) be a \(\Sigma\)-space with countable extent. Then \(G\) is an \(\omega\)-cellular space._
Proof.: The Proposition 6 implies that \(\operatorname{Sym}G\) embeds closed in \(G^{2}\) and is a quasitopological group. Hence \(\operatorname{Sym}G\) is a regular \(\Sigma\) quasitopological group with countable extent. Theorem 8 implies that \(\operatorname{Sym}G\) is \(\omega\)-cellular. Since \(G\) is a continuous image of \(\operatorname{Sym}G\), then \(G\) is \(\omega\)-cellular.
Since the square of a Lindelof \(\Sigma\)-space is a Lindelof \(\Sigma\)-space, then Theorem 9 implies the following assertion.
**Theorem 10**.: _Let \(G\) be a regular Lindelof \(\Sigma\) almost paratopological group. Then \(G\) is an \(\omega\)-cellular space._
**Corollary 4**.: _Let \(G\) be a regular almost paratopological group. If any of the following conditions is satisfied for \(G\), then \(G\) is an \(\omega\)-cellular space:_
1. \(G^{2}\) _is closely_ \(\sigma\)_-countably compact;_
2. \(G\) _is a Lindelof_ \(\Sigma\)_-space;_
3. \(G\) _is_ \(\sigma\)_-compact._
**Question 1**.: Let \(G\) be a semitopological group. Which of the following conditions imply that \(G\) is an \(\omega\)-cellular space (\(G\) is ccc)?
1. \(G\) is a \(\sigma\)-countably compact (regular) (almost paratopological, paratopological, quasitopological) group;
2. \(G\) is a closely \(\sigma\)-countably compact regular (almost paratopological, paratopological) group;
3. \(G\) is a (closely \(\sigma\)-)countably compact (almost paratopological, paratopological) group;
4. \(G\) is a \(\sigma\)-compact (almost paratopological, paratopological, quasitopological) group;
5. \(G\) is a Lindelof \(\Sigma\) group;
6. \(G\) is a \(\sigma\)-compact regular group.
The author thanks the referee for useful comments.
| A class of almost paratopological groups is introduced, which (1) contains paratopological groups and Hausdorff quasitopological groups, (2) is closed under products, and (3) is closed under subgroups. Almost paratopological $T_1$ groups $G$ are characterized by the fact that $\{(x,y)\in G^2: xy=e\}$ is closed in $G^2$. A compact almost paratopological group is topological. A regular $\Sigma$-space with countable extent and a separately continuous Mal'tsev operation is $\omega$-cellular (and ccc). A $\sigma$-compact regular almost paratopological group is ccc. In particular, a $\sigma$-compact regular quasitopological group is ccc. |
2301.12293 | ACL-Fig: A Dataset for Scientific Figure Classification | Most existing large-scale academic search engines are built to retrieve
text-based information. However, there are no large-scale retrieval services
for scientific figures and tables. One challenge for such services is
understanding scientific figures' semantics, such as their types and purposes.
A key obstacle is the need for datasets containing annotated scientific figures
and tables, which can then be used for classification, question-answering, and
auto-captioning. Here, we develop a pipeline that extracts figures and tables
from the scientific literature and a deep-learning-based framework that
classifies scientific figures using visual features. Using this pipeline, we
built the first large-scale automatically annotated corpus, ACL-Fig, consisting
of 112,052 scientific figures extracted from ~56K research papers in the ACL
Anthology. The ACL-Fig-Pilot dataset contains 1,671 manually labeled scientific
figures belonging to 19 categories. The dataset is accessible at
https://huggingface.co/datasets/citeseerx/ACL-fig under a CC BY-NC license. | Zeba Karishma, Shaurya Rohatgi, Kavya Shrinivas Puranik, Jian Wu, C. Lee Giles | 2023-01-28T20:27:35 | http://arxiv.org/abs/2301.12293v1 | # ACL-Fig: A Dataset for Scientific Figure Classification
###### Abstract
Most existing large-scale academic search engines are built to retrieve text-based information. However, there are no large-scale retrieval services for scientific figures and tables. One challenge for such services is understanding scientific figures' semantics, such as their types and purposes. A key obstacle is the need for datasets containing annotated scientific figures and tables, which can then be used for classification, question-answering, and auto-captioning. Here, we develop a pipeline that extracts figures and tables from the scientific literature and a deep-learning-based framework that classifies scientific figures using visual features. Using this pipeline, we built the first large-scale automatically annotated corpus, ACL-Fig consisting of 112,052 scientific figures extracted from \(\approx 56\)K research papers in the ACL Anthology. The ACL-Fig-pilot dataset contains 1,671 manually labeled scientific figures belonging to 19 categories. The dataset is accessible at [https://huggingface.co/datasets/citeseerx/ACL-fig](https://huggingface.co/datasets/citeseerx/ACL-fig) under a CC BY-NC license.
## 1 Introduction
Figures are ubiquitous in scientific papers, illustrating experimental and analytical results. We refer to these figures as _scientific figures_ to distinguish them from natural images, which usually contain richer colors and gradients. Scientific figures provide a compact way to present numerical and categorical data, often facilitating researchers in drawing insights and conclusions. Machine understanding of scientific figures can assist in developing effective retrieval systems for the hundreds of millions of scientific papers readily available on the Web [1]. State-of-the-art machine learning models can parse captions and shallow semantics for specific categories of scientific figures [2]. However, the task of reliably classifying general scientific figures based on their visual features remains a challenge.
Here, we propose a pipeline to build categorized and contextualized scientific figure datasets. Applying the pipeline on 55,760 papers in the ACL Anthology (downloaded from [https://aclanthology.org/](https://aclanthology.org/) in mid-2021), we built two datasets: ACL-Fig and ACL-Fig-pilot. ACL-Fig consists of 112,052 scientific figures, their captions, inline references, and metadata. ACL-Fig-pilot (Figure 1) is a subset of unlabeled ACL-Fig, consisting of 1671 scientific figures, which were manually labeled into 19 categories. The ACL-Fig-pilot dataset was used as a benchmark for scientific figure classification. The pipeline is open-source and configurable, enabling others to expand the datasets from other scholarly datasets with pre-defined or new labels.
## 2 Related Work
Scientific Figures Extraction. Automatically extracting figures from scientific papers is essential for many downstream tasks, and many frameworks have been developed. A multi-entity extraction framework called PDFMEF incorporating a figure extraction module was proposed [3]. Shared tasks such as ImageCLEF [4] drew attention to compound figure detection and separation. [5] proposed a framework called PDFFigures that extracted figures and
captions in research papers. The authors extended their work and built a more robust framework called PDFFigures2 [6]. DeepFigures was later proposed to incorporate deep neural network models [2].
Scientific Figure Classification. Scientific figure classification [7; 8] aids machines in understanding figures. Early work used a visual bag-of-words representation with a support vector machine classifier [7]. [9] applied Hough transforms to recognize bar charts in document images. [10] used handcrafted features to classify charts in scientific documents. [11] combined convolutional neural networks (CNNs) and deep belief networks, which showed improved performance compared with feature-based classifiers.
Figure Classification Datasets. There are several existing datasets for figure classification, such as DocFigure [12], FigureSeer [10], Revision [7], and the datasets presented by [13] (Table 1). FigureQA is a public dataset that is similar to ours, consisting of over one million question-answer pairs grounded in over 100,000 synthesized scientific images with five styles [14]. Our dataset is different from FigureQA because the figures were directly extracted from research papers. In particular, the training data of DeepFigures come from arXiv and PubMed, are labeled with only "figure" and "table", and do not include fine-grained labels. Our dataset contains fine-grained labels and inline context, and is compiled from a different domain.
| Dataset | Labels | #Figures | Image Source |
| --- | --- | --- | --- |
| DeepChart | 5 | 5,000 | Web images |
| FigureSeer^1 | 5 | 30,600 | Web images |
| Prasad et al. | 5 | 653 | Web images |
| Revision | 10 | 2,000 | Web images |
| FigureQA^3 | 5 | 100,000 | Synthetic figures |
| DeepFigures | 2 | 1,718,000 | Scientific papers |
| DocFigure^2 | 28 | 33,000 | Scientific papers |
| ACL-Fig-pilot | **19** | **1,671** | Scientific papers |
| ACL-Fig (inferred)^4 | - | **112,052** | Scientific papers |

^1 Only 1,000 images are public. ^2 Not publicly available. ^3 Scientific-style synthesized data. ^4 ACL-Fig (inferred) does not contain human-assigned labels.

Table 1: Scientific figure classification datasets.

Figure 1: Example figures of each type in ACL-Fig-pilot.
## 3 Data Mining Methodology
The ACL Anthology is a sizable, well-maintained PDF corpus with clean metadata covering papers in computational linguistics with freely available full-text. Previous work on figure classification used a set of pre-defined categories (e.g., [14]), which may cover only some figure types. We use an unsupervised method to determine figure categories to overcome this limitation. After the category label is assigned, each figure is automatically annotated with metadata, captions, and inline references. The pipeline includes 3 steps: figure extraction, clustering, and automatic annotation (Figure 2).
### Figure Extraction
To mitigate the potential bias of a single figure extractor, we extracted figures using pdffigures2 [6] and deepfigures [2], which work in different ways. PDFFigures2 first identifies captions and the body text because they can be identified relatively accurately. Regions containing figures can then be located by identifying rectangular bounding boxes adjacent to captions that do not overlap with the body text. DeepFigures uses distant supervision to induce labels of figures from a large collection of scientific documents in LaTeX and XML format. The model is based on TensorFlow, applying the OverFeat detection architecture to image embeddings generated using ResNet-101 [2]. We utilized the publicly available model weights1 trained on 4M induced figures and 1M induced tables for extraction. The model outputs the bounding boxes of figures and tables. Unless otherwise stated, we collectively refer to figures and tables together as "figures". We used multi-processing to process PDFs. Each process extracts figures following the steps below; a minimal code sketch of this loop is given after the list. The system processed, on average, 200 papers per minute on a Linux server with 24 cores.
Footnote 1: [https://github.com/allenai/deepfigures-open](https://github.com/allenai/deepfigures-open)
1. Retrieve a paper identifier from the job queue.
2. Pull the paper from the file system.
3. Extract figures and captions from the paper.
4. Crop the figures out of the rendered PDFs using detected bounding boxes.
5. Save cropped figures in PNG format and the metadata in JSON format.
Figure 2: Overview of the data generation pipeline.
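A minimal sketch of the per-process extraction loop described above. The helper names (`run_extractor`, `crop_and_save`) and the example paper identifiers are illustrative placeholders rather than the actual interfaces of pdffigures2 or deepfigures; only the overall flow (job queue, PDF, bounding boxes, cropped PNGs plus JSON metadata) mirrors the steps listed above.

```python
import json
import multiprocessing as mp
from pathlib import Path

def run_extractor(pdf_path):
    # Placeholder for the real extractors: in the pipeline this is where
    # pdffigures2 / deepfigures are invoked to obtain figure and caption
    # bounding boxes. Here it simply returns an empty list.
    return []

def crop_and_save(pdf_path, regions, out_dir):
    # In the real pipeline each bounding box is cropped out of the rendered
    # page and written as a PNG; here we only write the JSON metadata.
    out_dir.mkdir(parents=True, exist_ok=True)
    meta = {"source": str(pdf_path), "figures": regions}
    (out_dir / (Path(pdf_path).stem + ".json")).write_text(json.dumps(meta))

def worker(job_queue, out_dir):
    while True:
        paper_id = job_queue.get()                     # step 1: get a paper identifier
        if paper_id is None:                           # sentinel: no more work
            break
        pdf_path = Path("papers") / f"{paper_id}.pdf"  # step 2: locate the PDF
        regions = run_extractor(pdf_path)              # step 3: figures and captions
        crop_and_save(pdf_path, regions, out_dir)      # steps 4-5: crop and save

if __name__ == "__main__":
    n_workers = 4
    queue = mp.Queue()
    for pid in ["P19-1001", "P19-1002"]:               # illustrative ACL Anthology IDs
        queue.put(pid)
    for _ in range(n_workers):
        queue.put(None)
    procs = [mp.Process(target=worker, args=(queue, Path("out"))) for _ in range(n_workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```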
### Clustering Methods
Next, we use an unsupervised method to label extracted figures automatically. We extract visual features using VGG16 [15], pretrained on ImageNet [16]. All input figures are scaled to a dimension of \(224\times 224\) to be compatible with the input requirement of VGG16. The features were extracted from the second-to-last hidden (dense) layer, consisting of 4096 features. Principal Component Analysis was adopted to reduce the dimension to 1000.
Next, we cluster figures represented by the 1000-dimension vectors using \(k\)-means clustering. We compare two heuristic methods to determine the optimal number of clusters, namely the Elbow method and Silhouette Analysis [17]. The Elbow method examines the _explained variation_, a measure that quantifies the ratio of the between-group variance to the total variance, as a function of the number of clusters. The pivot point (elbow) of the curve determines the number of clusters.
Silhouette Analysis determines the number of clusters by measuring the distance between clusters. It considers multiple factors such as variance, skewness, and high-low differences and is usually preferred to the Elbow method. The Silhouette plot displays how close each point in one cluster is to points in the neighboring clusters, allowing us to assess the cluster number visually.
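A compact sketch of this step using Keras and scikit-learn. The batch handling, `n_init`, random seeds, and the silhouette sample size below are illustrative assumptions rather than the authors' exact settings; the layer name `fc2` is the 4096-unit second-to-last dense layer in the stock Keras VGG16.

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# 4096-d features from VGG16's second-to-last dense layer.
base = VGG16(weights="imagenet")
feature_model = Model(inputs=base.input, outputs=base.get_layer("fc2").output)

def embed(images):
    # images: float array of shape (n, 224, 224, 3) holding the rescaled figures
    return feature_model.predict(preprocess_input(images.astype("float32")))

def cluster_figures(features, k_range=range(2, 21)):
    reduced = PCA(n_components=1000).fit_transform(features)  # needs >= 1000 samples
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(reduced)
        scores[k] = silhouette_score(reduced, labels, sample_size=10000, random_state=0)
    best_k = max(scores, key=scores.get)                       # silhouette maximum
    final_labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(reduced)
    return final_labels, best_k
```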
### Linking Figures to Metadata
This module associates figures to metadata, including captions, inline reference, figure type, figure boundary coordinates, caption boundary coordinates, and figure text (text appearing on figures, only available for results from PDFFigures2). The figure type is determined in the clustering step above. The inline references are obtained using GROBID (see below). The other metadata fields were output by figure extractors. PDFFigures2 and DeepFigures extract the same metadata fields except for "image text" and "regionless captions" (captions for which no figure regions were found), which are only available for results of PDFFigures2.
An inline reference is a text span that contains a reference to a figure or a table. Inline references can help in understanding the relationship between the text and the objects it refers to. After processing a paper, GROBID outputs a TEI file (a type of XML file), containing marked-up full-text and references. We locate inline references using regular expressions and extract the sentences containing reference marks.
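The actual regular expressions and TEI handling are not spelled out here, so the sketch below uses an illustrative pattern for finding sentences that mention figures or tables.

```python
import re

# Illustrative pattern for reference marks such as "Figure 3", "Fig. 2", "Table 1".
REF_PATTERN = re.compile(r"\b(?:Fig(?:ure)?s?\.?|Tables?)\s*\d+", re.IGNORECASE)

def inline_references(sentences):
    """Return (sentence, matched reference marks) for sentences extracted
    from GROBID's TEI output that mention a figure or table."""
    hits = []
    for sent in sentences:
        marks = REF_PATTERN.findall(sent)
        if marks:
            hits.append((sent, marks))
    return hits

print(inline_references(["Results are shown in Figure 3.", "We discuss limitations next."]))
# -> [('Results are shown in Figure 3.', ['Figure 3'])]
```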
## 4 Results
### Figure Extraction
The numbers of figures extracted by PDFFigures2 and DeepFigures are illustrated in Figure 3, which indicates a significant overlap between the figures extracted by the two software packages. However, each package extracted a small fraction (\(\approx 5\%\)) of figures that were not extracted by the other. By inspecting a random sample of figures extracted by either software package, we found that DeepFigures tended to miss cases in which two figures were vertically adjacent to each other. We took the union of all figures extracted by both software packages to build the ACL-Fig dataset, which contains a total of 263,952 figures. All extracted images are converted to 100 DPI using standard OpenCV libraries. The total size of the data is \(\sim 25\)GB before compression. Inline references were extracted using GROBID. About 78% of the figures have inline references.
### Automatic Figure Annotation
The extraction outputs 151,900 tables and 112,052 figures. Only the figures were clustered using the \(k\)-means algorithm. We varied \(k\) from 2 to 20 with an increment of 1 to determine the number of clusters. The results were analyzed using the Elbow method and Silhouette Analysis. No evident elbow was observed in the Elbow method curve. The Silhouette diagram, a plot of the number of clusters versus silhouette score, exhibited a clear turning point at \(k=15\), where the score reached the global maximum. Therefore, we grouped the figures into 15 clusters.

Figure 3: Numbers of extracted images.
To validate the clustering results, 100 figures randomly sampled from each cluster were visually inspected. During the inspection, we identified three new figure types: _word cloud_, _pareto_, and _venn diagram_. The ACL-Fig-pilot dataset was then built using all manually inspected figures. Two annotators manually labeled and inspected these clusters. The consensus rate was measured using Cohen's Kappa coefficient, which was \(\kappa=0.78\) (substantial agreement) for the ACL-Fig-pilot dataset. For completeness, we added 100 randomly selected tables. Therefore, the ACL-Fig-pilot dataset contains a total of 1671 figures and tables labeled with 19 classes. The distribution of all classes is shown in Figure 4.
## 5 Supervised Scientific Figure Classification
Based on the ACL-Fig-pilot dataset, we train supervised classifiers. The dataset was split into a training and a test set (8:2 ratio). Three baseline models were investigated. Model 1 is a 3-layer CNN, trained with a categorical cross-entropy loss function and the Adam optimizer. The model contains three typical convolutional layers, each followed by a max-pooling and a drop-out layer, and three fully-connected layers. The dimensions are reduced from \(32\times 32\) to \(16\times 16\) to \(8\times 8\). The last fully connected layer classifies the encoded vector into 19 classes. This classifier achieves an accuracy of 59%.
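A sketch of Model 1 in Keras. The filter counts, dropout rate, dense-layer widths, and input size are assumptions for illustration; only the overall layout (three conv/pool/dropout blocks, three fully-connected layers, a 19-way softmax, categorical cross-entropy, Adam) follows the description above.

```python
from tensorflow.keras import layers, models

def build_model1(input_shape=(64, 64, 3), n_classes=19):
    m = models.Sequential()
    m.add(layers.Input(shape=input_shape))
    for filters in (32, 64, 128):                         # three conv blocks
        m.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        m.add(layers.MaxPooling2D(2))                     # feature maps: 32x32 -> 16x16 -> 8x8
        m.add(layers.Dropout(0.25))
    m.add(layers.Flatten())
    m.add(layers.Dense(256, activation="relu"))
    m.add(layers.Dense(128, activation="relu"))
    m.add(layers.Dense(n_classes, activation="softmax"))  # 19-way classifier
    m.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return m
```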
Model 2 was trained based on the VGG16 architecture, except that the last three fully-connected layers in the original network were replaced by a long short-term memory layer, followed by dense layers for classification. This model achieved an accuracy of \(\sim 79\%\), about 20 percentage points higher than Model 1.
Model 3 was the Vision Transformer (ViT) [18], in which a figure was split into fixed-size patches. Each patch was then linearly embedded, supplemented by position embeddings. The resulting sequence of vectors was fed to a standard Transformer encoder. The ViT model achieved the best performance, with 83% accuracy.
## 6 Conclusion
Based on the ACL Anthology papers, we designed a pipeline and used it to build a corpus of automatically labeled scientific figures with associated metadata and context information. This corpus, named ACL-Fig, consists of 263,952 extracted objects, of which about 42% are figures and about 58% are tables. We also built ACL-Fig-pilot, a subset of ACL-Fig, consisting of 1671 scientific figures with 19 manually verified labels. Our dataset includes figures extracted from real-world data and contains more classes than existing datasets, e.g., DeepFigures and FigureQA.

Figure 4: Figure class distribution in the ACL-Fig-pilot dataset.
One limitation of our pipeline is that it used VGG16 pre-trained on ImageNet. In the future, we will improve figure representation by retraining more sophisticated models, e.g., CoCa, [19], on scientific figures. Another limitation was that determining the number of clusters required visual inspection. We will consider density-based methods to fully automate the clustering module.
| Most existing large-scale academic search engines are designed to retrieve only text-based information. However, there are no large-scale retrieval services for scientific figures and tables. One challenge for such services is understanding the semantics of scientific figures, such as their types and purposes. A key obstacle is the need for datasets of annotated figures and tables, which can then be used for classification, question answering, and automatic captioning. In this paper, we develop a pipeline that extracts figures and tables from the scientific literature and a deep-learning-based framework that classifies scientific figures using visual features. Using this pipeline, we built ACL-Fig, the first automatically annotated corpus, consisting of 112,052 scientific figures extracted from more than 56K research papers in the ACL Anthology. The ACL-Fig-Pilot dataset contains 1,671 manually labeled scientific figures belonging to 19 categories. |
2303.04288 | Polynomial Time and Private Learning of Unbounded Gaussian Mixture
Models | We study the problem of privately estimating the parameters of
$d$-dimensional Gaussian Mixture Models (GMMs) with $k$ components. For this,
we develop a technique to reduce the problem to its non-private counterpart.
This allows us to privatize existing non-private algorithms in a blackbox
manner, while incurring only a small overhead in the sample complexity and
running time. As the main application of our framework, we develop an
$(\varepsilon, \delta)$-differentially private algorithm to learn GMMs using
the non-private algorithm of Moitra and Valiant [MV10] as a blackbox.
Consequently, this gives the first sample complexity upper bound and first
polynomial time algorithm for privately learning GMMs without any boundedness
assumptions on the parameters. As part of our analysis, we prove a tight (up to
a constant factor) lower bound on the total variation distance of
high-dimensional Gaussians which can be of independent interest. | Jamil Arbas, Hassan Ashtiani, Christopher Liaw | 2023-03-07T23:24:27 | http://arxiv.org/abs/2303.04288v2 | # Polynomial Time and Private Learning of Unbounded Gaussian Mixture Models
###### Abstract
We study the problem of privately estimating the parameters of \(d\)-dimensional Gaussian Mixture Models (GMMs) with \(k\) components. For this, we develop a technique to reduce the problem to its non-private counterpart. This allows us to privatize existing non-private algorithms in a blackbox manner, while incurring only a small overhead in the sample complexity and running time. As the main application of our framework, we develop an \((\varepsilon,\delta)\)-differentially private algorithm to learn GMMs using the non-private algorithm of Moitra and Valiant [14] as a blackbox. Consequently, this gives the first sample complexity upper bound and first polynomial time algorithm for privately learning GMMs without any boundedness assumptions on the parameters.
## 1 Introduction
The problem of learning the parameters of a Gaussian Mixture Model (GMM) is a fundamental problem in statistics, dating back to the early work of Karl Pearson in 1894 [10]. A GMM with \(k\) components in \(d\) dimensions can be represented as \((w_{i},\mu_{i},\Sigma_{i})_{i=1}^{k}\), where \(w_{i}\) is a mixing weight (\(w_{i}\geq 0\), and \(\sum_{i\in[k]}w_{i}=1\)), \(\mu_{i}\in\mathbb{R}^{d}\) is a mean, and \(\Sigma_{i}\in\mathbb{R}^{d\times d}\) is a covariance matrix (of the \(i\)-th Gaussian component). To draw a random instance from this GMM, one first samples an index \(i\in[k]\) (with probability \(w_{i}\)) and then returns a random sample from the Gaussian distribution \(\mathcal{N}(\mu_{i},\Sigma_{i})\). In this work we consider the problem of parameter estimation in the probably approximately correct (PAC) model, where the goal is to "approximately recover"1 the parameters of an unknown GMM given only independent samples from it.
Footnote 1: See Definition 1.4 for the precise notion of distance.
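A minimal NumPy sketch of the sampling process just described; the example parameters are arbitrary.

```python
import numpy as np

def sample_gmm(weights, means, covs, n, seed=None):
    """Draw n samples from the GMM (w_i, mu_i, Sigma_i)_{i=1}^k: pick a
    component index with probability w_i, then draw from N(mu_i, Sigma_i)."""
    rng = np.random.default_rng(seed)
    comps = rng.choice(len(weights), size=n, p=weights)
    return np.stack([rng.multivariate_normal(means[i], covs[i]) for i in comps])

# Example: a 2-component mixture in d = 2 dimensions.
w = [0.3, 0.7]
mu = [np.zeros(2), np.array([5.0, 5.0])]
Sigma = [np.eye(2), 2.0 * np.eye(2)]
X = sample_gmm(w, mu, Sigma, n=1000, seed=0)   # X.shape == (1000, 2)
```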
The sample complexity and computational complexity of learning the parameters of GMMs have been studied extensively. A notable breakthrough in this line of work was the development of polynomial-time methods (with respect to \(d\)) for learning GMMs [14, 2]. The running time and sample complexity of these methods are exponential in \(k\), which is necessary for parameter estimation [14].
The above approaches, however, may not maintain privacy of the individuals whose data has been used for the estimation. To address this issue, we adopt the rigorous and widely accepted notion of differential privacy (DP) [13]. At a high-level, DP ensures that the contribution of each individual has only a small (indistinguishable) effect on the output of the estimator. The classical notion of \(\varepsilon\)-DP (pure DP) is, however, quite restrictive. For instance, even estimating the mean of an unbounded univariate Gaussian random variable in this model is impossible. Therefore, in line with recent work on private estimation in unbounded domains, we consider the \((\varepsilon,\delta)\)-DP (i.e. approximate differential privacy [1]) model.
For the simpler case of multivariate Gaussians (without any boundedness assumptions on the parameters), it has been shown that learning with a finite number of samples is possible in the \((\varepsilon,\delta)\)-DP model [1]. More recently, computationally efficient estimators have been devised for the same task [1, 2, 3]. This begs answering the corresponding question for GMMs.
Is there an \((\varepsilon,\delta)\)-DP estimator with a bounded sample complexity for learning unbounded GMMs? Is there a polynomial time estimator (in terms of \(d\)) for the same task?
Note that if additional boundedness2 and strong separation3 assumptions are made about the GMM, then the work of [3] offers a positive answer to the above question in the \(\varepsilon\)-DP model. Our aim, however, is to learn unbounded GMMs with minimal separation assumptions.
Footnote 2: They assume there are known quantities \(R,\sigma_{max},\sigma_{min}\) such that \(\forall i\in[k],\|\mu_{i}\|_{2}\leq R\) and \(\sigma_{min}^{2}\leq||\Sigma_{i}||\leq\sigma_{max}^{2}\).
Footnote 3: They assume \(\forall i\neq j,||\mu_{i}-\mu_{j}||_{2}\geq\widehat{\Omega}\left(\sqrt{k}+ \sqrt{\frac{1}{w_{i}}+\frac{1}{w_{j}}}\right)\cdot\max\left\{||\Sigma_{i}^{1/ 2}||,||\Sigma_{j}^{1/2}||\right\}\).
To approach this problem, it is natural to ask if there is a general reduction from the private learning of GMMs to its non-private counterpart. If so, this would enable us to easily reuse existing results for non-private learning of GMMs.
Is there a reduction from private to non-private learning of GMMs that incurs only a polynomial time and polynomial sample overhead?
The main result of this paper is the existence of such a reduction; see Theorem 6.2 for a rigorous version.
**Theorem 1.1** (**Private to Non-private Reduction for GMMs, Informal**).: _There is a reduction from learning the parameters of a GMM in the \((\varepsilon,\delta)\)-DP model to its non-private counterpart. Moreover, this reduction adds only polynomial time and sample overhead in terms of \(d\) and \(k\)._
This reduction, along with the non-private learner of [14] gives the first finite sample complexity upper bound for learning the parameters of unbounded GMMs in the \((\varepsilon,\delta)\)-DP model. Moreover, the resulting estimator essentially inherits all the properties of the non-private estimator of [14]; it runs in time that is polynomial in \(d\) (for fixed \(k\)) and shares the advantage of requiring provably minimal separability assumptions on the components of the GMM.
Concurrent work. In an independent work, [11] offer an \((\varepsilon,\delta)\)-DP method for learning GMMs, removing the boundedness and strong separation requirements of [3]. However, they assume the Gaussian components are spherical. We do not make that assumption, and we learn the covariance matrices as well.
### Related Work
Private Learning of a Single Gaussian. Karwa and Vadhan [10] established polynomial-time and sample-efficient methods for learning the mean and variance of a univariate Gaussian in both the pure and approximate-DP settings. Namely, in the \((\varepsilon,\delta)\)-DP setting, they can recover the mean and variance of the Gaussian without any boundedness assumption on the parameters. This result can be generalized to the multivariate setting [1, 2], where one finds Gaussians that approximate the underlying Gaussian in terms of total variation distance. However, the sample complexity of these methods depends on the condition number of the covariance matrix and requires a priori bounds on the range of the parameters. The first finite sample complexity bound for privately learning unbounded Gaussians appeared in [1], nearly matching the sample complexity lower bound of [3]. The work of [1] relies on
a private version of the minimum distance estimator [23] and is based on ideas from the private hypothesis selection method [16]. However, this method is not computationally efficient. Recently, several papers offered \((\varepsilon,\delta)\)-DP and computationally efficient algorithms for learning unbounded Gaussians [1, 15, 14], where the work of [1] achieved a near-optimal sample complexity for this task. Part of the approach of [1] is a sub-sample-and-aggregate scheme which we modify and use in this paper. FriendlyCore [13] is an alternative sample-and-aggregate framework that can be used for privately learning unbounded Gaussians. It is noteworthy that the approaches of [1, 12] work in the robust setting as well albeit with sub-optimal sample complexities. The recent work of [1] offers a robust and private learner with near-optimal sample requirements in terms of dimension. Finally, [11] ticks all the boxes by offering a sample near-optimal, robust, and efficient learner for unbounded Gaussians.
Another related result is a sample-efficient and computationally efficient method for learning bounded and high-dimensional Gaussians in the \(\varepsilon\)-DP model [11]. There is also work on the problem of private mean estimation with respect to Mahalanobis distance [1, 12]. Finding private and robust estimators [13] and also the interplay between robustness and privacy [1, 1, 1, 15, 16] are subjects of a few recent papers.
Parameter Learning for GMMs with PAC Guarantees. Given i.i.d. samples from a GMM, can we approximately recover its parameters? There has been an extensive amount of research in developing sample-efficient and computationally efficient methods for learning the parameters of a GMM; see, e.g., [14, 2] and the references therein.
### Preliminaries
We use \(\|v\|_{2}\) to denote the Euclidean norm of a vector \(v\in\mathbb{R}^{d}\) and \(\|A\|_{F}\) (resp. \(\|A\|\)) to denote the Frobenius (resp. spectral) norm of a matrix \(A\in\mathbb{R}^{d\times d}\).
In this paper, we write \(\mathcal{S}^{d}\) to denote the positive-definite cone in \(\mathbb{R}^{d\times d}\). Let \(\mathcal{G}(d)=\{\mathcal{N}(\mu,\Sigma)\,:\,\mu\in\mathbb{R}^{d},\Sigma\in \mathcal{S}^{d}\}\) be the family of \(d\)-dimensional Gaussians. We can now define the class \(\mathcal{G}(d,k)\) of mixtures of Gaussians as follows.
**Definition 1.2** (Gaussian Mixtures).: The class of mixtures of \(k\) Gaussians in \(\mathbb{R}^{d}\) is defined by \(\mathcal{G}(d,k)\coloneqq\left\{\sum\limits_{i=1}^{k}w_{i}G_{i}\,:\,G_{i}\in \mathcal{G}(d),w_{i}\geq 0,\sum_{i=1}^{k}w_{i}=1\right\}\).
We represent the Gaussian Mixture Model (GMM) by a set of \(k\) tuples \(\left(w_{i},\mu_{i},\Sigma_{i}\right)_{i=1}^{k}\), where each tuple represents the mean, covariance matrix, and mixing weight of one of its components. Note that the order of the components is important in our notation, since the order of the output may have an impact on the privacy.
In the following definition and the remainder of the paper, we may abuse terminology and refer to a distribution via its probability density function (p.d.f.).
**Definition 1.3** (Total Variation Distance).: Given two absolutely continuous probability measures \(f(x),g(x)\) on \(\mathbb{R}^{d}\), the total variation (TV) distance between \(f\) and \(g\) is defined as \(d_{\mathrm{TV}}\left(f(x),g(x)\right)=\frac{1}{2}\int_{\mathbb{R}^{d}}|f(x)-g (x)|\,\mathrm{d}x\).
A standard way to define the distance between two GMMs is as follows [10, Definition 2].
**Definition 1.4** (The Distance between Two GMMs).: The \(\mathrm{dist}_{\mathrm{GMM}}\) distance between two GMMs is defined by
\[\mathrm{dist}_{\mathrm{GMM}}\left(\left(w_{i},\mu_{i},\Sigma_{i}\right)_{i=1}^ {k},\left(w^{\prime}_{i},\mu^{\prime}_{i},\Sigma^{\prime}_{i}\right)_{i=1}^{k }\right)\,=\min_{\pi}\max_{i\in[k]}\max\left\{\left|w_{i}-w^{{}^{\prime}}_{ \pi(i)}\right|,d_{\mathrm{TV}}\left(\mathcal{N}(\mu_{i},\Sigma_{i}),\mathcal{ N}(\mu^{{}^{\prime}}_{\pi(i)},\Sigma^{{}^{\prime}}_{\pi(i)})\right)\,\right\}\]
where \(\pi\) is chosen from the set of all permutations over \([k]\).
If \(X\) (resp. \(Y\)) is a random variable distributed according to \(f\) (resp. \(g\)), we write \(d_{\mathrm{TV}}\left(X,Y\right)=d_{\mathrm{TV}}\left(f,g\right)\). We drop the reference to the p.d.f. of the random variable when it is clear or implicit from context.
### Differential Privacy Basics
At a high level, an algorithm is differentially private if, given two datasets that differ only in a single element, the output distributions of the algorithm are nearly the same4.
Footnote 4: For sake of simplicity, we consider data sets to be ordered and therefore the neighboring data sets are defined based on their Hamming distances. However, one can easily translate guarantees proven for the ordered setting to the unordered one; see Proposition D.6 in [1].
**Definition 1.5** (Neighbouring Datasets).: Let \(\mathcal{X},\mathcal{Y}\) denote sets and \(n\in\mathbb{N}\). Two datasets \(D=(X_{1},\ldots,X_{n}),D^{\prime}=(X^{\prime}_{1},\ldots,X^{\prime}_{n})\in\mathcal{X}^{n}\) are said to be _neighbouring_ if \(d_{H}(D,D^{\prime})\leq 1\) where \(d_{H}\) denotes Hamming distance, i.e., \(d_{H}(D,D^{\prime})=|\{i\in[n]\,:\,X_{i}\neq X^{\prime}_{i}\}|\).
**Definition 1.6** (\((\varepsilon,\delta)\)-Indistinguishable).: Let \(D,D^{\prime}\) be two distributions defined on a set \(\mathcal{Y}\). Then \(D,D^{\prime}\) are said to be \((\varepsilon,\delta)\)-indistinguishable if for all measurable \(S\subseteq\mathcal{Y}\), \(\mathbb{P}_{Y\sim D}\left[Y\in S\right]\leq e^{\varepsilon}\mathbb{P}_{Y\sim D ^{\prime}}\left[Y\in S\right]+\delta\) and \(\mathbb{P}_{Y\sim D^{\prime}}\left[Y\in S\right]\leq e^{\varepsilon}\mathbb{P} _{Y\sim D}\left[Y\in S\right]+\delta\).
**Definition 1.7** (\((\varepsilon,\delta)\)-Differential Privacy [16]).: A randomized mechanism \(\mathcal{M}\colon\mathcal{X}^{n}\to\mathcal{Y}\) is said to be \((\varepsilon,\delta)\)-differentially private if for all neighbouring datasets \(D,D^{\prime}\in\mathcal{X}^{n}\), \(\mathcal{M}(D)\) and \(\mathcal{M}(D^{\prime})\) are \((\varepsilon,\delta)\)-indistinguishable.
### Techniques
The techniques in this paper are inspired by the techniques in [1], which are based on the Propose-Test-Release framework [14] and the Subsample-And-Aggregate framework [20]. Given a dataset \(D\), we first split \(D\) into \(t\) sub-datasets and run a non-private algorithm \(\mathcal{A}\) on each of the sub-datasets. Next, we privately check if most of the outputs of \(\mathcal{A}\) are "well-clustered" (i.e., are close to each other). If not, then the algorithm fails, as this suggests that the outputs of the non-private algorithm are not very stable (either due to lack of data or simply because the non-private algorithm is sensitive to its input). On the other hand, if most of the outputs are well-clustered then we can aggregate these clustered outputs and release a noisy version of the result. There are, however, multiple additional technical challenges that need to be addressed.
One core difficulty is the issue of the ordering of the Gaussian components. Namely, the non-private GMM learners may output GMM components in different orders. Therefore, aggregating these non-private solutions (e.g., by taking their weighted average in the style of [1]) seems impossible. We therefore propose to skip the aggregation step altogether by simply picking an arbitrary solution from the cluster. As a result, our private populous estimator (PPE) simplifies and generalizes the private populous mean estimator (PPME) framework of [1], making it applicable to general semimetric spaces (and therefore GMMs). A precise discussion of this framework is presented in Subsection 2.1.
Another challenge is designing an appropriate mechanism for adding noise to GMMs. As discussed above, our framework requires that we are able to release a noisy version of a candidate output. More precisely, given two outputs \(Y_{1},Y_{2}\) computed from neighbouring datasets, we want to design a mechanism \(\mathcal{B}\) such that \(\mathcal{B}(Y_{1})\), \(\mathcal{B}(Y_{2})\) are indistinguishable whenever \(Y_{1},Y_{2}\) are sufficiently close. As in [1], we refer to such a mechanism as a "masking mechanism". In the context of mixture distributions with \(k\) components, a candidate output corresponds to a \(k\)-tuple where each element of the tuple contains the parameters and the mixing weight of a single component. We prove that, if one can design a masking mechanism for a _single_ component then it is possible to use this masking mechanism as a blackbox to design a masking mechanism for the \(k\)-tuple with only a \(\text{poly}(k)\) overhead in the running time. One important ingredient is that we randomly shuffle the components, making the output invariant to the order of the components.
Another challenge related to the order of components is that computing the distance between two GMMs based on Definition 1.4 requires minimizing over all permutations. A naive method for computing this distance could require exponential time but we show this task can be done in polynomial time using a simple reduction to bipartite matching.
To showcase the utility of the above framework, we show that it is straightforward to apply the framework to privately learning mixtures of Gaussians. We design a masking mechanism for a single Gaussian component, which consists of masking the mixing weight, the mean, and the covariance matrix. Masking the mixing weight is fairly standard, while masking the mean and the covariance matrix can be done using known results (e.g., by using [1, Lemma 5.2] for the covariance matrix and a similar technique for the mean).
Finally, we note that, in some of the literature for Gaussian mixtures, the results usually assert that for each Gaussian component \(\mathcal{N}(\mu,\Sigma)\), the algorithm returns \(\hat{\mu},\hat{\Sigma}\) such that \(\mathcal{N}(\mu,\Sigma)\) and \(\mathcal{N}(\hat{\mu},\hat{\Sigma})\) are close in _total variation_ distance (e.g. [13]). Our framework requires that \(\hat{\mu}\) (resp. \(\hat{\Sigma}\)) is close to \(\mu\) (resp. \(\Sigma\)) for some appropriate norm. Intuitively, this ought to be the case but no tight characterization was previously known unless the Gaussians had the same mean [13][Theorem 1.1]. In this paper, we prove the following tight characterization between the TV distance of a Gaussian and its parameters. We believe that such a result may be of independent interest.
**Theorem 1.8**.: _Let \(\mu_{1},\mu_{2}\in\mathbb{R}^{d}\) and \(\Sigma_{1},\Sigma_{2}\) be \(d\times d\) positive-definite matrices. Suppose that we have \(d_{\mathrm{TV}}(\mathcal{N}(\mu_{1},\Sigma_{1}),\mathcal{N}(\mu_{2},\Sigma_{ 2}))<\frac{1}{600}\). Let_
\[\Delta=\max\left\{\|\Sigma_{1}^{-1/2}\Sigma_{2}\Sigma_{1}^{-1/2}-I_{d}\|_{F}, \|\Sigma_{1}^{-1}(\mu_{1}-\mu_{2})\|_{2}\right\}.\]
_Then_
\[\frac{1}{200}\Delta\leq d_{\mathrm{TV}}\left(\mathcal{N}(\mu_{1},\Sigma_{1}), \mathcal{N}(\mu_{2},\Sigma_{2})\right)\leq\frac{1}{\sqrt{2}}\Delta.\]
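To make the two quantities concrete, the following sketch (our own illustrative check, restricted to one dimension, and not part of the paper's analysis) numerically computes the total variation distance of two univariate Gaussians together with the corresponding value of \(\Delta\):

```
import numpy as np
from scipy.stats import norm

def tv_vs_delta_1d(mu1, s1, mu2, s2, grid_size=200001):
    """Compare d_TV(N(mu1, s1^2), N(mu2, s2^2)) with Delta from Theorem 1.8, for d = 1."""
    # Numerical total variation distance: 0.5 * integral of |f - g|.
    lo = min(mu1 - 10 * s1, mu2 - 10 * s2)
    hi = max(mu1 + 10 * s1, mu2 + 10 * s2)
    x = np.linspace(lo, hi, grid_size)
    tv = 0.5 * np.trapz(np.abs(norm.pdf(x, mu1, s1) - norm.pdf(x, mu2, s2)), x)

    # Delta specialized to one dimension (Sigma_i = s_i^2), as stated above.
    delta = max(abs(s2**2 / s1**2 - 1.0), abs(mu1 - mu2) / s1**2)
    return tv, delta

print(tv_vs_delta_1d(0.0, 1.0, 0.05, 1.02))
```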
## 2 Private Populous Estimator
In this section, we describe our main framework which we call the "private populous estimator" (PPE). Before that, we need a few definitions.
Semimetric spaces.In our application, we need to deal with distance functions which only satisfy an _approximate_ triangle inequality that hold only when the points are sufficiently close together. To that end, we first define the notion of a semimetric space.
**Definition 2.1** (Semimetric Space).: We say \((\mathcal{F},\mathrm{dist})\) is a semimetric space if for every \(F,F_{1},F_{2},F_{3}\in\mathcal{F}\), the following conditions hold.
1. **Non-negativity.**\(\mathrm{dist}(F,F)=0\); \(\mathrm{dist}(F_{1},F_{2})\geq 0\).
2. **Symmetry.**\(\mathrm{dist}(F_{1},F_{2})=\mathrm{dist}(F_{2},F_{1})\).
3. \(z\)**-approximate \(r\)-restricted triangle inequality.** Let \(r>0\) and \(z\geq 1\). If \(\mathrm{dist}(F_{1},F_{2}),\mathrm{dist}(F_{2},F_{3})\leq r\) then \(\mathrm{dist}(F_{1},F_{3})\leq z\cdot(\mathrm{dist}(F_{1},F_{2})+\mathrm{dist} (F_{2},F_{3}))\).
Masking mechanism.Intuitively, a masking mechanism \(\mathcal{B}\) is a random function that returns a noisy version of its input, with the goal of making close inputs indistinguishable. Formally, we define a masking mechanism as follows.
**Definition 2.2** (Masking Mechanism [2], Definition 3.3).: Let \((\mathcal{F},\mathrm{dist})\) be a semimetric space. A randomized function \(\mathcal{B}\colon\mathcal{F}\to\mathcal{F}\) is a \((\gamma,\varepsilon,\delta)\)-masking mechanism for \((\mathcal{F},\mathrm{dist})\) if for all \(F,F^{\prime}\in\mathcal{F}\) satisfying \(\mathrm{dist}(F,F^{\prime})\leq\gamma\), we have that \(\mathcal{B}(F),\mathcal{B}(F^{\prime})\) are \((\varepsilon,\delta)\)-indistinguishable. Further, \(\mathcal{B}\) is said to be \((\alpha,\beta)\)-concentrated if for all \(F\in\mathcal{F}\), \(\mathbb{P}[\mathrm{dist}(\mathcal{B}(F),F)>\alpha]\leq\beta\).
### The Private Populous Estimator (PPE)
In this section, we define the PPE framework which allows us to use non-private algorithms to design private algorithms. We represent the non-private algorithm by \(\mathcal{A}\colon\mathcal{X}^{*}\to\mathcal{Y}\) which takes elements from a dataset as inputs and outputs an element in \(\mathcal{Y}\). PPE requires two assumptions. Firstly, we assume that \((\mathcal{Y},\mathrm{dist})\) is a semimetric space. Secondly, we assume that we have access to an efficient masking mechanism for \((\mathcal{Y},\mathrm{dist})\).
The PPE framework we introduce in this section can be seen as a somewhat generalized version of the framework used in [2] and requires fewer assumptions. Given a dataset \(D\) as input, we partition \(D\) into \(t\) disjoint subsets. Next, we run the non-private algorithm \(\mathcal{A}\) on each of these subsets to produce \(t\) outputs \(Y_{1},\ldots,Y_{t}\). We then privately check if most of the \(t\) outputs are close to each other. If not, PPE fails. Otherwise, it chooses a \(Y_{j}\) that is close to more than \(60\%\) of the other \(Y_{i}\)'s. It then adds noise to \(Y_{j}\) using a masking mechanism \(\mathcal{B}\), and returns the masked version of \(Y_{j}\). The formal details of the algorithm can be found in Algorithm 1.
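Before stating the guarantee, the following Python sketch illustrates the flow described above. The Laplace-noised count and the thresholds used in the "well-clustered" test below are illustrative placeholders rather than the exact private test and constants of Algorithm 1.

```
import numpy as np

def ppe(dataset, A, B, dist, t, r, eps, rng=None):
    """Sketch of the Private Populous Estimator.

    A: non-private learner, B: masking mechanism, dist: semimetric,
    t: number of chunks, r: closeness radius. The private stability test
    below is a stand-in for the exact test of Algorithm 1.
    """
    rng = rng or np.random.default_rng()

    # Split the data into t disjoint chunks and run the non-private learner on each.
    chunks = np.array_split(np.asarray(dataset), t)
    Y = [A(chunk) for chunk in chunks]

    # For each output, count how many of the other outputs lie within distance r.
    close = [sum(dist(Y[i], Y[j]) <= r for j in range(t) if j != i) for i in range(t)]

    # Privately check that most outputs are well-clustered (illustrative threshold).
    popular = sum(c > 0.6 * (t - 1) for c in close)
    if popular + rng.laplace(scale=1.0 / eps) < 0.8 * t:
        return None  # PPE fails: the non-private outputs are not stable enough.

    # Pick an output close to many of the others (any well-clustered one works) and mask it.
    j = int(np.argmax(close))
    return B(Y[j])
```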
The following theorem establishes the privacy and accuracy of Algorithm 1. The proof can be found in Appendix D.1.
**Theorem 2.3**.: _Suppose that \((\mathcal{Y},\mathrm{dist})\) satisfies a \(z\)-approximate \(r\)-restricted triangle inequality. Further, suppose that \(\mathcal{B}\) is a \((r,\varepsilon,\delta)\)-masking mechanism._
* **Privacy.** _For_ \(t>5\)_, Algorithm_ 1 _is_ \((2\varepsilon,4e^{\varepsilon}\delta)\)_-DP._
* **Utility.**_Suppose \(\alpha\leq r/2z\) and \(t\geq\frac{20}{\varepsilon}\ln\left(1+\frac{e^{\varepsilon}-1}{2\delta}\right)\). Let \(\mathcal{B}\) be \((\alpha/2z,\beta)\)-concentrated. If there exists \(Y^{*}\) with the property that for all \(i\in[t],\mathrm{dist}(Y^{*},Y_{i})<\alpha/2z\), then \(\mathbb{P}\left[\mathrm{dist}(\widetilde{Y},Y^{*})>\alpha\right]\leq\beta\)._
The utility guarantee asserts that if the outcome of all non-private procedures are close to each other, then the output of the PPE will be close to those non-private outcomes.
**Remark 2.4**.: _Let \(T_{\mathcal{A}}\) be the running time of the algorithm \(\mathcal{A}\) in Line 2, \(T_{\mathrm{dist}}\) be the time to compute \(\mathrm{dist}(Y_{i},Y_{j})\) for any \(Y_{i},Y_{j}\in\mathcal{Y}\) in Line 3, and \(T_{\mathcal{B}}\) be the time to compute \(\widetilde{Y}\) in Line 9. Then Algorithm 1 runs in time \(O(t\cdot T_{\mathcal{A}}+t^{2}\cdot T_{\mathrm{dist}}+T_{\mathcal{B}})\). We will see that \(T_{\mathcal{A}}\), \(T_{\mathcal{B}}\), and \(T_{\mathrm{dist}}\) can be polynomially bounded for GMMs._
To apply Algorithm 1 for private learning of GMMs, we need to introduce a masking mechanism for them. In order to do that, we start by defining a masking mechanism for a single Gaussian component (presented in Section 4). We then show how one can convert a masking mechanism for a component to one for mixtures (Section 3). Finally, we apply this to come up with a masking mechanism for GMMs as shown in Section 5.
## 3 Masking Mixtures
The goal of this section is to show how to "lift" a masking mechanism for a single component to a masking mechanism for mixtures. We can do this by adding noise to each of the components and randomly permute the output components.
Formally, let \(\mathcal{F}\) denote a space and let \(\mathcal{F}^{k}=\mathcal{F}\times\ldots\times\mathcal{F}\) (\(k\) times). The following definition is useful in defining the distance between two mixtures, as it is invariant to the order of components.
**Definition 3.1**.: Let \(\mathrm{dist}\) denote a distance function on \(\mathcal{F}\). We define \(\mathrm{dist}^{k}\colon\mathcal{F}^{k}\times\mathcal{F}^{k}\to\mathbb{R}_{ \geq 0}\) as
\[\mathrm{dist}^{k}((F_{1},\ldots,F_{k}),(F_{1}^{\prime},\ldots,F_{k}^{\prime}) )\coloneqq\min_{\pi}\max_{i\in[k]}\mathrm{dist}(F_{i},F_{\pi(i)}^{\prime}),\]
where the minimization is taken over all permutations \(\pi\).
Note that computing \(\mathrm{dist}^{k}\) requires computing a minimum over all permutations \(\pi\). Naively, one might assume that this requires exponential time to try all permutations. However, it turns out that one can reduce the problem of computing \(\mathrm{dist}^{k}\) to deciding whether a perfect matching exists in a weighted bipartite graph. The details of this argument can be found in Appendix E.3.
**Lemma 3.2**.: _If \(T_{\rm dist}\) is the running time to compute \({\rm dist}\) then \({\rm dist}^{k}\) can be computed in time \(O(k^{2}T_{\rm dist}+k^{3}\log k)\)._
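A minimal sketch of this reduction (helper names are ours): compute the \(k\times k\) matrix of pairwise component distances, then binary-search for the smallest threshold at which the bipartite graph of sufficiently close pairs admits a perfect matching.

```
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def dist_k(comps_a, comps_b, dist):
    """Bottleneck matching distance: min over permutations pi of max_i dist(a_i, b_pi(i))."""
    k = len(comps_a)
    D = np.array([[dist(a, b) for b in comps_b] for a in comps_a])

    def perfect_matching_exists(thresh):
        # Keep only the pairs with distance <= thresh and test for a perfect matching.
        graph = csr_matrix((D <= thresh).astype(int))
        match = maximum_bipartite_matching(graph, perm_type='column')
        return bool(np.all(match >= 0))

    # Binary search over the k^2 candidate thresholds (the pairwise distances).
    candidates = np.unique(D)
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if perfect_matching_exists(candidates[mid]):
            hi = mid
        else:
            lo = mid + 1
    return float(candidates[lo])
```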
The following definition is useful for extending a masking mechanism for a component to a masking mechanism for a mixture. The important thing is that the components are shuffled randomly in this mechanism, making the outcome independent of the original order of the components.
**Definition 3.3**.: Suppose that \(\mathcal{B}\) is a \((\gamma,\varepsilon,\delta)\)-masking mechanism for \(\mathcal{F}\). We define the mechanism \(\mathcal{B}^{k}_{\sigma}\) as \(\mathcal{B}^{k}_{\sigma}(F_{1},\ldots,F_{k})=(\mathcal{B}(F_{\sigma(1)}), \ldots,\mathcal{B}(F_{\sigma(k)}))\), where \(\sigma\) is a uniform random permutation.
We also note that \(\mathcal{B}^{k}_{\sigma}\) can be computed with only polynomial overhead. The proof can be found in Appendix E.2.
**Lemma 3.4**.: _If \(T_{\mathcal{B}}\) is the running time of \(\mathcal{B}\) then \(\mathcal{B}^{k}_{\sigma}\) can be computed in time \(O(k\cdot T_{\mathcal{B}}+k\log k)\)._
The next lemma shows that \(\mathcal{B}^{k}_{\sigma}\) is indeed a masking mechanism w.r.t. \((\mathcal{F}^{k},{\rm dist}^{k})\) and that \(\mathcal{B}^{k}_{\sigma}\) is accurate provided that \(\mathcal{B}\) is accurate.
**Lemma 3.5**.: _If \(\mathcal{B}\) is an \((\alpha,\beta)\)-concentrated \((\gamma,\varepsilon,\delta)\)-masking mechanism for \((\mathcal{F},{\rm dist})\) then, for any \(\delta^{\prime}>0\), \(\mathcal{B}^{k}_{\sigma}\) is an \((\alpha,k\beta)\)-concentrated \((\gamma,\varepsilon^{\prime},k\delta+\delta^{\prime})\)-masking mechanism for \((\mathcal{F}^{k},{\rm dist}^{k})\) where_
\[\varepsilon^{\prime}=\sqrt{2k\ln(1/\delta^{\prime})}\varepsilon+k\varepsilon( e^{\varepsilon}-1).\]
Proof.: First, we prove privacy. Let \(F=(F_{1},\ldots,F_{k})\in\mathcal{F}^{k}\) and \(F^{\prime}=(F^{\prime}_{1},\ldots,F^{\prime}_{k})\in\mathcal{F}^{k}\) be such that \({\rm dist}^{k}(F,F^{\prime})\leq\gamma\). In other words, there exists a permutation \(\pi\) such that \({\rm dist}(F_{i},F^{\prime}_{\pi(i)})\leq\gamma\) for all \(i\in[k]\). Since \(\mathcal{B}\) is a \((\gamma,\varepsilon,\delta)\)-masking mechanism, we know that \(\mathcal{B}(F_{i}),\mathcal{B}(F^{\prime}_{\pi(i)})\) are \((\varepsilon,\delta)\)-indistinguishable. Thus, by advanced composition (see Theorem C.7), \((\mathcal{B}(F_{1}),\ldots,\mathcal{B}(F_{k}))\) and \((\mathcal{B}(F^{\prime}_{\pi(1)}),\ldots,\mathcal{B}(F^{\prime}_{\pi(k)}))\) are \((\varepsilon^{\prime},k\delta+\delta^{\prime})\)-indistinguishable with \(\varepsilon^{\prime}\) as stated in the lemma. Since \(\mathcal{B}^{k}_{\sigma}((F^{\prime}_{1},\ldots,F^{\prime}_{k}))\) has the same distribution as \(\mathcal{B}^{k}_{\sigma}((F^{\prime}_{\pi(1)},\ldots,F^{\prime}_{\pi(k)}))\), we conclude, using the fact that permutation preserves privacy (see Lemma C.8), that \(\mathcal{B}^{k}_{\sigma}(F)\) and \(\mathcal{B}^{k}_{\sigma}(F^{\prime})\) are \((\varepsilon^{\prime},k\delta+\delta^{\prime})\)-indistinguishable.
Finally, it remains to prove accuracy (i.e. that \(\mathcal{B}^{k}_{\sigma}\) is \((\alpha,k\beta)\)-concentrated). Indeed, given \(F=(F_{1},\ldots,F_{k})\in\mathcal{F}^{k}\), we know that \({\rm dist}(\mathcal{B}(F_{i}),F_{i})\leq\alpha\) with probability at least \(1-\beta\). Thus, by a union bound \({\rm dist}(\mathcal{B}(F_{i}),F_{i})\leq\alpha\) for all \(i\in[k]\) with probability at least \(1-k\beta\). We conclude that \({\rm dist}(\mathcal{B}(F),F)\leq\alpha\) with probability at least \(1-k\beta\).
Recall that Theorem 2.3 requires that the distance function satisfies an \(r\)-restricted \(z\)-approximate triangle inequality. The following lemma shows that \({\rm dist}^{k}\) indeed does satisfy this property provided that \({\rm dist}\) does. The proof can be found in Appendix E.1.
**Lemma 3.6**.: _If \({\rm dist}\) satisfies an \(r\)-restricted \(z\)-approximate triangle inequality then so does \({\rm dist}^{k}\)._
## 4 Masking a Single Gaussian Component
In this section, we develop a masking mechanism for a single Gaussian component. In the following section, we utilize this masking mechanism combined with the general purpose mechanism in Section 3 to develop a masking mechanism for GMMs.
Let \(\mathcal{F}_{\textsc{Comp}}=\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R}^{d \times d}\) (corresponding to the weight \(w\), mean \(\mu\), and covariance matrix \(\Sigma\), respectively). Define \({\rm dist}_{\textsc{Comp}}\colon\mathcal{F}_{\textsc{Comp}}\times\mathcal{F}_ {\textsc{Comp}}\to\mathbb{R}_{\geq 0}\) as
\[{\rm dist}_{\textsc{Comp}}((w_{1},\mu_{1},\Sigma_{1}),(w_{2},\mu_{2},\Sigma_{2}) )=\max\{|w_{1}-w_{2}|,{\rm dist}_{\textsc{Mean}}((\mu_{1},\Sigma_{1}),(\mu_{2},\Sigma_{2})),{\rm dist}_{\textsc{Cov}}(\Sigma_{1},\Sigma_{2})\},\]
where
\[\operatorname{dist}_{\textsc{Cov}}(\Sigma_{1},\Sigma_{2})=\max\{\|\Sigma_{1}^{1/2}\Sigma_{2}^{-1}\Sigma_{1}^{1/2}-I_{d}\|_{F},\|\Sigma_{2}^{1/2}\Sigma_{1}^{-1}\Sigma_{2}^{1/2}-I_{d}\|_{F}\}\]
and
\[\operatorname{dist}_{\textsc{Mean}}((\mu_{1},\Sigma_{1}),(\mu_{2},\Sigma_{2}) )=\max\{\|\mu_{1}-\mu_{2}\|_{\Sigma_{1}},\|\mu_{1}-\mu_{2}\|_{\Sigma_{2}}\}.\]
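For concreteness, the following sketch evaluates \(\operatorname{dist}_{\textsc{Comp}}\); we read \(\|v\|_{\Sigma}\) as the Mahalanobis norm \(\sqrt{v^{\top}\Sigma^{-1}v}\), which is an assumption on notation rather than something stated explicitly above, and the helper names are ours.

```
import numpy as np
from scipy.linalg import sqrtm

def dist_comp(comp1, comp2):
    """dist_Comp between two components (w, mu, Sigma), Sigma positive-definite."""
    (w1, mu1, S1), (w2, mu2, S2) = comp1, comp2
    d = len(mu1)

    def dist_cov(A, B):
        # max of the two Frobenius-norm terms in dist_Cov.
        A_half = np.real(sqrtm(A))
        B_half = np.real(sqrtm(B))
        t1 = np.linalg.norm(A_half @ np.linalg.inv(B) @ A_half - np.eye(d), 'fro')
        t2 = np.linalg.norm(B_half @ np.linalg.inv(A) @ B_half - np.eye(d), 'fro')
        return max(t1, t2)

    def maha(v, S):
        # ||v||_S read as the Mahalanobis norm sqrt(v^T S^{-1} v) (our assumption).
        return float(np.sqrt(v @ np.linalg.solve(S, v)))

    dist_mean = max(maha(mu1 - mu2, S1), maha(mu1 - mu2, S2))
    return max(abs(w1 - w2), dist_mean, dist_cov(S1, S2))
```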
First, we show that \(\operatorname{dist}_{\textsc{Comp}}\) satisfies an approximate triangle inequality; this is useful in order to use Theorem 2.3.
**Lemma 4.1**.: \(\operatorname{dist}_{\textsc{Comp}}\) _satisfies a \(1\)-restricted \((3/2)\)-approximate triangle inequality._
Proof.: For any positive-definite matrix \(\Sigma\), \(\|\cdot\|_{\Sigma}\) is a metric and thus, \(\operatorname{dist}_{\textsc{Mean}}\) is a metric (and therefore satisfies the \(1\)-restricted \((3/2)\)-approximate triangle inequality). Next, \(\operatorname{dist}_{\textsc{Cov}}\) satisfies the \(1\)-restricted \((3/2)\)-approximate triangle inequality (see Lemma A.8). A straightforward calculation concludes that, as a result, \(\operatorname{dist}_{\textsc{Comp}}\) also satisfies a \(1\)-restricted \((3/2)\)-approximate triangle inequality.
The following lemma gives a masking mechanism for a single Gaussian component. The proof can be found in Appendix F. The mechanism essentially noises the mixing weight, the mean, and the covariance matrix separately. The mixing weight can be noised using the Gaussian mechanism. Care must be taken to noise the mean and the covariance matrix; in both cases, we use the empirical covariance matrix itself to re-scale both the mean and the covariance matrix. Pseudocode for the various pieces can be found in Algorithm 2 (the first four functions). Note that the parameters \(\eta_{\textsc{W}},\eta_{\textsc{Mean}},\eta_{\textsc{Cov}}\) must be set correctly to ensure privacy and accuracy, but these details are relegated to Appendix F.
**Lemma 4.2**.: _For \(\gamma\leq\frac{\varepsilon\alpha}{C_{2}\sqrt{d(d+\ln(4/\beta))\cdot\ln(2/ \delta)}}\), there exists a \((\gamma,3\varepsilon,3\delta)\)-masking mechanism, \(\mathcal{B}_{\textsc{Comp}}\), for \((\mathcal{F}_{\textsc{Comp}},\operatorname{dist}_{\textsc{Comp}})\) that is \((\alpha,3\beta)\)-concentrated, where \(C_{2}\) is a universal constant._
## 5 A Masking Mechanism for GMMs
In this section, we show how to mask a mixture of \(k\) Gaussians. Let \(\mathcal{F}_{\textsc{Gmm}}=\mathcal{F}_{\textsc{Comp}}\times\ldots\times \mathcal{F}_{\textsc{Comp}}\) (\(k\) times). Note we drop \(k\) from \(\mathcal{F}_{\textsc{Gmm}}\) (and related notation below) since \(k\) is fixed and implied from context. Let \(\operatorname{dist}_{\textsc{Comp}}\) be as defined in Eq. (4) and define the distance
\[\operatorname{dist}_{\textsc{Param}}(\{(w_{i},\mu_{i},\Sigma_{i})\}_{i\in[k]}, \{(w^{\prime}_{i},\mu^{\prime}_{i},\Sigma^{\prime}_{i})\}_{i\in[k]})=\min_{ \pi}\max_{i\in[k]}\operatorname{dist}_{\textsc{Comp}}((w_{\pi(i)},\mu_{\pi(i)}, \Sigma_{\pi(i)}),(w^{\prime}_{i},\mu^{\prime}_{i},\Sigma^{\prime}_{i})),\]
where \(\pi\) is chosen from the set of all permutations over \([k]\). Now define the masking mechanism
\[\mathcal{B}_{\textsc{Gmm}}(\{(w_{i},\mu_{i},\Sigma_{i})\}_{i\in[k]})=\{ \mathcal{B}_{\textsc{Comp}}(w_{\sigma(i)},\mu_{\sigma(i)},\Sigma_{\sigma(i)}) \}_{i\in[k]},\]
where \(\mathcal{B}_{\textsc{Comp}}\) is the masking mechanism from Lemma 4.2 and \(\sigma\) is a permutation chosen uniformly at random from the set of all permutations over \([k]\). In words, \(\mathcal{B}_{\textsc{Gmm}}\) applies the masking mechanism \(\mathcal{B}_{\textsc{Comp}}\) from Section 4 to each component separately and then permutes the components. To summarize the entire masking mechanism for GMMs, we provide pseudocode in Algorithm 2.
The following lemma asserts that \(\mathcal{B}_{\textsc{Gmm}}\) is indeed a masking mechanism. At a high-level, it follows by combining Lemma 4.2 with Lemma 3.5. The details can be found in Appendix G.1.
**Lemma 5.1**.: _Let \(\varepsilon<\ln(2)/3\). There is a sufficiently large constant \(C_{2}\) such that for \(\gamma\leq\frac{\varepsilon\alpha}{C_{2}\sqrt{k\ln(2/\delta)}\sqrt{d(d+\ln(12k/ \beta))\cdot\ln(12k/\delta)}}\), \(\mathcal{B}_{\textsc{Gmm}}\) is a \((\gamma,\varepsilon,\delta)\)-masking mechanism with respect to \((\mathcal{F}_{\textsc{Gmm}},\operatorname{dist}_{\textsc{Param}})\). Moreover, \(\mathcal{B}_{\textsc{Gmm}}\) is \((\alpha,\beta)\)-concentrated._
Note that \(\operatorname{dist}_{\textsc{Param}}\) also satisfies a \(1\)-restricted \((3/2)\)-approximate triangle inequality since \(\operatorname{dist}_{\textsc{Comp}}\) does (see Appendix G.2 for a proof).
**Lemma 5.2**.: \(\operatorname{dist}_{\textsc{Param}}\) _satisfies a \(1\)-restricted \((3/2)\)-approximate triangle inequality._
**Input:** GMM given by \(\{(w_{i},\mu_{i},\Sigma_{i})\}_{i\in[k]}\) and parameters \(\eta_{\textsc{W}},\eta_{\textsc{Mean}},\eta_{\textsc{Cov}}>0\)
```
1:function\(\mathcal{R}_{\textsc{W}}(w)\)\(\triangleright\) Noise mixing weights
2:Return\(\max(0,w+\eta_{\textsc{W}}g)\) where \(g\sim\mathcal{N}(0,1)\).
3:endfunction
4:function\(\mathcal{R}_{\textsc{Mean}}(\mu,\Sigma)\)\(\triangleright\) Noise mean
5:Return\(\mu+\eta_{\textsc{Mean}}g\) where \(g\sim\mathcal{N}(0,\Sigma)\)
6:endfunction
7:function\(\mathcal{R}_{\textsc{Cov}}(\Sigma)\)\(\triangleright\) Noise covariance
8: Let \(G\in\mathbb{R}^{d\times d}\) matrix with independent \(\mathcal{N}(0,1)\) entries.
9:Return\(\Sigma^{1/2}(I_{d}+\eta_{\textsc{Cov}}G)(I_{d}+\eta_{\textsc{Cov}}G)^{\top} \Sigma^{1/2}\)
10:endfunction
11:function\(\mathcal{B}_{\textsc{Comp}}(w,\mu,\Sigma)\)\(\triangleright\) Mask component
12:Return\((\mathcal{R}_{\textsc{W}}(w),\mathcal{R}_{\textsc{Mean}}(\mu,\Sigma), \mathcal{R}_{\textsc{Cov}}(\Sigma))\)
13:endfunction
14:function\(\mathcal{B}_{\textsc{Gmm}}(\{(w_{i},\mu_{i},\Sigma_{i})\}_{i\in[k]})\)\(\triangleright\) Mask GMM
15: Let \(\sigma\) be uniformly random permutation.
16:\(\{(\hat{w}_{i},\hat{\mu}_{i},\hat{\Sigma}_{i})\}\leftarrow\{\mathcal{B}_{\textsc{Comp}}(w_{\sigma(i)},\mu_{\sigma(i)},\Sigma_{\sigma(i)})\}\).
17: Normalize: \(\hat{w}_{i}\leftarrow\hat{w}_{i}/\sum_{i\in[k]}\hat{w}_{i}\).
18:Return\(\{(\hat{w}_{i},\hat{\mu}_{i},\hat{\Sigma}_{i})\}_{i\in[k]}\).
19:endfunction
```
**Algorithm 2** GMM Masking Mechanism
## 6Privately Learning GMMs
At this point, we have everything we need to develop a private algorithm for learning the parameters of a GMM. First, we define the problem more formally.
**Definition 6.1** (PAC Learning of Parameters of GMMs).: Let \(\mathcal{F}=\left\{\left(w^{j}_{i},\mu^{j}_{i},\Sigma^{j}_{i}\right)_{i=1}^{k} \right\}^{j}\) be a class of \(d\)-dimensional GMMs with \(k\) components5. Let \(\mathcal{A}\) be a function that receives a sequence \(S\) of instances in \(\mathbb{R}^{d}\) and outputs a mixture \(\hat{F}=(\hat{w}_{i},\hat{\mu}_{i},\hat{\Sigma}_{i})_{i=1}^{k}\). Let \(m\colon(0,1)^{2}\times\mathbb{N}^{2}\to\mathbb{N}\). We say \(\mathcal{A}\) learns the parameters of \(\mathcal{F}\) with \(m\) samples if for every \(\alpha,\beta\in(0,1)\) and every \(F\in\mathcal{F}\), if \(S\) is an i.i.d. sample of size \(m(\alpha,\beta,k,d)\) from \(F\), then \(\operatorname{dist}_{\textsc{Gmm}}(F,\hat{F})<\alpha\) with probability at least \(1-\beta\).
Footnote 5: For examples, it is standard to pick \(\mathcal{F}\) to be those GMMs that are separable/identifiable.
Plugging the masking mechanism developed in Section 5 (in particular, Lemma 5.1 and Lemma 5.2) into PPE (Theorem 2.3) gives a private to non-private reduction for GMMs.
**Theorem 6.2** (Private to Non-Private Reduction).: _Let \(\mathcal{F}\) be a subclass of GMMs with \(k\) components in \(\mathbb{R}^{d}\). Let \(\mathcal{A}\) be a non-private algorithm that PAC learns the parameters of \(\mathcal{F}\) with respect to \(\operatorname{dist}_{\textsc{Gmm}}\) using \(m_{\textsc{non-private}}(\alpha,\beta,k,d)\) samples. Then for every \(\varepsilon<\ln(2)/3\), \(\delta\in(0,1)\), \(\gamma\leq\frac{\varepsilon\alpha}{C_{2}\sqrt{k\ln(2/\delta)}\sqrt{d(d+\ln(12k/\beta))\cdot\ln(12k/\delta)}}\) for a sufficiently large constant \(C_{2}\) and \(t=\max\{5,\lceil\frac{20}{\varepsilon}\ln(1+\frac{e^{\varepsilon}-1}{2\delta})\rceil\}\), there is a learner \(\mathcal{A}_{\textsc{private}}\) with the following properties:_
1. \(\mathcal{A}_{\textsc{private}}\) _is_ \((2\varepsilon,4e^{\varepsilon}\delta)\)_-DP._
2. \(\mathcal{A}_{\textsc{private}}\) _PAC learns the parameters of_ \(\mathcal{F}\) _using_ \(O(m_{\textsc{non-private}}(\gamma,\beta/2t,k,d)\log(1/\delta)/\varepsilon)\) _samples._
3. \(\mathcal{A}_{\textsc{private}}\) _runs in time_ \(O((\log(1/\delta)/\varepsilon)\cdot T_{\mathcal{A}}+(\log(1/\delta)/ \varepsilon)^{2}\cdot(k^{2}d^{3}+k^{3}\log k))\)_, where_ \(T_{\mathcal{A}}\) _is the running time for the non-private algorithm._
To prove Theorem 6.2, we require the following lemma whose proof can be found in Appendix H.
**Lemma 6.3**.: _Let \(F=(w_{i},\mu_{i},\Sigma_{i})_{i=1}^{k}\) and \(F^{\prime}=(w^{\prime}_{i},\mu^{\prime}_{i},\Sigma^{\prime}_{i})_{i=1}^{k}\) be two \(d\)-dimensional GMMs where \(\Sigma_{i}\) and \(\Sigma^{\prime}_{i}\) are positive-definite matrices. Suppose that \(\operatorname{dist}_{\textsc{GMM}}\left(F,F^{\prime}\right)<\frac{1}{600}\). Then \(\frac{1}{200}\operatorname{dist}_{\textsc{Param}}(F,F^{\prime})\leq\operatorname{dist}_{\textsc{GMM}}(F,F^{\prime})\leq\frac{1}{\sqrt{2}}\operatorname{dist}_{\textsc{Param}}(F,F^{\prime})\)._
Proof of Theorem 6.2.: Let \(z=3/2\), \(r=1\), and \(t\geq\frac{20}{\varepsilon}\ln\left(1+\frac{e^{\varepsilon}-1}{2\delta} \right)=O(\log(1/\delta)/\varepsilon)\). We run Algorithm 1 with the following.
* For the non-private algorithm \(\mathcal{A}\), we use the algorithm from Theorem 6.5 with accuracy parameter \(\alpha/2z\) and failure probability \(\beta/2t\).
* For the masking mechanism, we use the \((r,\varepsilon,\delta)\)-masking mechanism \(\mathcal{B}_{\textsc{GMM}}\) which is defined in Lemma 5.1. Further, this mechanism is \((\alpha/2z,\beta/2)\)-concentrated.
* Finally, note that the distance function \(\operatorname{dist}_{\textsc{Param}}\) satisfies the \(z\)-approximate \(r\)-restricted triangle inequality (Lemma 5.2).
Let \(F^{*}\) be the true GMM. Let \(F_{i}\) be the estimated GMMs computed by \(\mathcal{A}\) in Line 2 of Algorithm 1. Then the first item above guarantees that \(\operatorname{dist}_{\textsc{Param}}(F^{*},F_{i})\leq\alpha/2z\) for all \(i\in[t]\) with probability at least \(1-\beta/2\).
We thus conclude that we have a private algorithm for learning GMMs that is \((2\varepsilon,4e^{\varepsilon}\delta)\)-DP and that returns \(\widetilde{F}\) satisfying \(\operatorname{dist}_{\textsc{Param}}(\widetilde{F},F^{*})\leq\alpha\) with probability \(1-\beta\). By Lemma 6.3, we further conclude that \(\operatorname{dist}_{\textsc{GMM}}(\widetilde{F},F^{*})\leq O(\alpha)\) with probability \(1-\beta\).
It remains to check the sample complexity and computational complexity of our algorithm. Since we run \(t\) independent instances of the non-private algorithm \(\mathcal{A}\), we require \(t\cdot m_{\textsc{non-private}}(\alpha/2z,\beta/2t,k,d)=O(m_{\textsc{non-private}}(\alpha/2z,\beta/2t,k,d)\cdot\log(1/\delta)/\varepsilon)\) samples. Finally, we bound the running time. Lemma 3.4 shows that the running time to apply the masking mechanism is \(O(k\cdot d^{3}+k\log k)\) and Lemma 3.2 shows that the running time to compute \(\operatorname{dist}_{\textsc{Param}}\) is \(O(k^{2}d^{3}+k^{3}\log k)\). The claimed running time now follows from Remark 2.4.
### Application
As a concrete application, we apply Theorem 6.2 with the algorithm of [14] to obtain the first private algorithm for learning the parameters of a GMM with sample and computational complexity that is polynomial in \(d\) (for a fixed \(k\)) with minimal separation assumptions. Note that our algorithm does not require any boundedness assumptions on the parameters.
**Definition 6.4** (\(\gamma\)-Statistically Learnable [14]).: We say a GMM \(F=(w_{i},\mu_{i},\Sigma_{i})_{i=1}^{k}\) is \(\gamma\)-statistically learnable if (i) \(\min_{i}w_{i}\geq\gamma\) and (ii) \(\min_{i\neq j}d_{\mathrm{TV}}\left(\mathcal{N}(\mu_{i},\Sigma_{i}),\mathcal{N} (\mu_{j},\Sigma_{j})\right)\geq\gamma\).
If a GMM is \(\gamma\)-statistically learnable, we will be able to recover its components accurately.
**Theorem 6.5** (Non-private Learning of GMMs [14]).: _There exists an algorithm \(\mathcal{A}\) and a function \(m_{\mathcal{A}}(d,k,\alpha,\beta)\) with the following guarantee. Fix \(\alpha,\beta\in(0,1)\), \(k,d\in\mathbb{N}\)._
* _For fixed_ \(k\)_, the sample complexity_ \(m_{\mathcal{A}}(d,k,\alpha,\beta)\) _is polynomial in_ \(d/\alpha\beta\)_._
* _For fixed_ \(k\)_,_ \(\mathcal{A}\) _runs in time_ \(\operatorname{poly}(d/\alpha\beta)\)_._
* _Let_ \(\mathcal{F}^{*}\) _be an_ \(\alpha\)_-statistically learnable subclass of GMMs with_ \(k\) _components in_ \(\mathbb{R}^{d}\) _and let_ \(F^{*}\in\mathcal{F}^{*}\)_. Given an i.i.d. sample_ \(D\) _of size_ \(m_{\mathcal{A}}(d,k,\alpha,\beta)\) _drawn from_ \(F^{*}\)_, with probability at least_ \(1-\beta\)_,_ \(\mathcal{A}\) _returns_ \(\hat{F}\) _such that_ \(\operatorname{dist}_{\textsc{GMM}}(\hat{F},F^{*})\leq\alpha\)_._
The following corollary follows immediately by plugging Theorem 6.5 into Theorem 6.2.
**Corollary 6.6**.: _There exists an algorithm \(\mathcal{A}\) and a function \(m_{\mathcal{A}}(d,k,\alpha,\beta,\varepsilon,\delta)\) with the following guarantee. Fix \(\alpha,\beta,\varepsilon,\delta\in(0,1)\), \(k,d\in\mathbb{N}\)._
* \(\mathcal{A}\) _is_ \((\varepsilon,\delta)\)_-DP._
* _For fixed_ \(k\)_, the sample complexity_ \(m_{\mathcal{A}}(d,k,\alpha,\beta,\varepsilon,\delta)\) _is polynomial in_ \(d\log(1/\delta)/\alpha\beta\varepsilon\)_._
* _For fixed_ \(k\)_,_ \(\mathcal{A}\) _runs in time_ \(\operatorname{poly}(d\log(1/\delta)/\alpha\beta\varepsilon)\)_._
* _Let_ \(\mathcal{F}^{*}\) _be an_ \(\alpha\)_-statistically learnable subclass of GMMs with_ \(k\) _components in_ \(\mathbb{R}^{d}\) _and let_ \(F^{*}\in\mathcal{F}^{*}\)_. Given an i.i.d. sample_ \(D\) _of size_ \(m_{\mathcal{A}}(d,k,\alpha,\beta,\varepsilon,\delta)\) _drawn from_ \(F^{*}\)_, with probability at least_ \(1-\beta\)_,_ \(\mathcal{A}\) _returns_ \(\hat{F}\) _such that_ \(\operatorname{dist}_{\textsc{GMM}}(\hat{F},F^{*})\leq\alpha\)_._
| We study the problem of privately estimating the parameters of d-dimensional Gaussian Mixture Models (GMMs). To do so, we develop a technique for reducing the problem to its non-private counterpart. This allows us to privatize existing non-private algorithms in a blackbox manner, while incurring only a small overhead in sample complexity and running time. As the main application of our framework, we use the non-private algorithm of Moitra and Valiant [MV10] as a blackbox to develop an (ε, δ)-differentially private algorithm. This yields the first sample-complexity upper bound and polynomial-time algorithm for privately learning GMMs without any boundedness assumptions on the parameters. As part of our analysis, we also prove a tight (up to a constant factor) lower bound on the total variation distance between high-dimensional Gaussians. |
2306.16052 | SVNR: Spatially-variant Noise Removal with Denoising Diffusion | Denoising diffusion models have recently shown impressive results in
generative tasks. By learning powerful priors from huge collections of training
images, such models are able to gradually modify complete noise to a clean
natural image via a sequence of small denoising steps, seemingly making them
well-suited for single image denoising. However, effectively applying denoising
diffusion models to removal of realistic noise is more challenging than it may
seem, since their formulation is based on additive white Gaussian noise, unlike
noise in real-world images. In this work, we present SVNR, a novel formulation
of denoising diffusion that assumes a more realistic, spatially-variant noise
model. SVNR enables using the noisy input image as the starting point for the
denoising diffusion process, in addition to conditioning the process on it. To
this end, we adapt the diffusion process to allow each pixel to have its own
time embedding, and propose training and inference schemes that support
spatially-varying time maps. Our formulation also accounts for the correlation
that exists between the condition image and the samples along the modified
diffusion process. In our experiments we demonstrate the advantages of our
approach over a strong diffusion model baseline, as well as over a
state-of-the-art single image denoising method. | Naama Pearl, Yaron Brodsky, Dana Berman, Assaf Zomet, Alex Rav Acha, Daniel Cohen-Or, Dani Lischinski | 2023-06-28T09:32:00 | http://arxiv.org/abs/2306.16052v1 | # SVNR: Spatially-variant Noise Removal with Denoising Diffusion
###### Abstract
Denoising diffusion models have recently shown impressive results in generative tasks. By learning powerful priors from huge collections of training images, such models are able to gradually modify complete noise to a clean natural image via a sequence of small denoising steps, seemingly making them well-suited for single image denoising. However, effectively applying denoising diffusion models to removal of realistic noise is more challenging than it may seem, since their formulation is based on additive white Gaussian noise, unlike noise in real-world images. In this work, we present SVNR, a novel formulation of denoising diffusion that assumes a more realistic, spatially-variant noise model. SVNR enables using the noisy input image as the starting point for the denoising diffusion process, in addition to conditioning the process on it. To this end, we adapt the diffusion process to allow each pixel to have its own time embedding, and propose training and inference schemes that support spatially-varying time maps. Our formulation also accounts for the correlation that exists between the condition image and the samples along the modified diffusion process. In our experiments we demonstrate the advantages of our approach over a strong diffusion model baseline, as well as over a state-of-the-art single image denoising method.
Footnote †: Performed this work while working at Google.
## 1 Introduction
Image denoising, the task of removing unwanted noise from an image, while preserving its original features, is one of the most longstanding problems in image processing. Over the years, numerous image denoising techniques have been developed, ranging from traditional filtering-based methods to more recent deep learning-based approaches, _e.g._, [24, 10, 38, 9, 13].
In modern real-world digital photographs, noise most commonly arises from the imaging sensor, and is particularly evident when images are captured in low-light conditions. Yet, many of the proposed approaches make unrealistic assumptions regarding the noise and/or assess the denoising performance using metrics such as PSNR or SSIM. Such metrics struggle with the distortion-perception trade-off [4] as they are sensitive to pixel alignment and do not emphasize the restoration of fine details or high-frequency textures, which may be difficult to distinguish from noise.
In this paper, we propose a new denoising approach that leverages the natural image prior learned by today's powerful diffusion-based generative models [15, 12]. Such models have been successfully applied to a variety of image restoration tasks [32, 30, 17, 18]. Furthermore, they possess
Figure 1: **Top:**_spatially-variant_ standard deviation of noise (quantized), the resulting noisy image, and the ground truth clean image. Our SVNR formulation handles such noise by applying a pixel-wise time embedding. **Bottom:** state-of-the-art denoising methods manage to remove high levels of noise but over-smooth fine details. Diffusion based models are able to recover textures in the image even when they are hard to distinguish in the noisy image. SVNR yields clean images of higher fidelity (part of the lizard’s head is missing in the baseline result), while reducing the runtime \(\sim\times 10\).
innate denoising capabilities, since the entire generation process is based on gradual denoising of images. Thus, one might expect that it should be possible to reconstruct a clean image simply by starting the diffusion process from the noisy input image. However, the diffusion process is based on additive white Gaussian noise (AWGN), while realistic noise models involve a signal-dependent component, the so-called shot-noise, which leads to higher noise levels in brighter parts of the image [20]. This violates the denoising diffusion formulation that associates a single scalar noise level (time) with each step, making it non-trivial to apply the diffusion process to realistic noise removal.
In this work, we present SVNR, a novel denoising diffusion formulation that handles _spatially-varying noise_, thereby enabling the reverse process to start from realistic noisy images, while significantly reducing the number of necessary diffusion steps.
Specifically, SVNR adapts the denoising diffusion framework to utilize the noisy input image as both the condition and the starting point. We assume a realistic signal-dependent noise model (Section 3.1), with a spatially-variant noise distribution. To cope with such a noise distribution, we adapt the diffusion process to allow each pixel to have its own time embedding, effectively assuming that the denoising time step is spatially-varying, rather than constant, across the image. We further present training and inference schemes that support such spatially-varying time maps. Our training scheme also accounts for correlation between the condition image and the samples of the diffusion process, which stems from the fact that the reverse process starts with the same image it is conditioned on.
The spatially-variant time embedding, together with the associated training scheme, enables using the noisy input image as both the condition and the starting point for the denoising process, yielding higher quality clean images (Fig. 1), while allowing significantly fewer denoising steps (Fig. 2). We demonstrate the power of the SVNR framework on simulated noisy images exhibiting a wide variety of noise levels and show its ability to generate fine details, such as fur and intricate textures. We show that our framework outperforms the standard conditioned diffusion baseline quantitatively, as well as visually, while avoiding the over-smoothing of a state-of-the-art single-image denoising method [9].
## 2 Background and Related Work
### Image noise models
Cameras sensors convert incident photons to voltage readings, which are then converted to bits by an analog to digital converter (ADC). Throughout this process, noise is unavoidably added to the measurement, depending both on photon statistics and the sensor's circuits. Sensor noise is often modeled as a combination of two primary components [23]: shot noise, which originates from photon arrival statistics and is modeled as a Poisson process depending on signal intensity, and read noise, which is caused by imperfections in the readout circuitry and is modeled as a Gaussian noise with standard deviation \(\sigma_{r}\).
### Single image denoising
Early works for single image denoising used prior knowledge like non-local self-similarity in BM3D [10] or total variation [24].
Recently, convolutional neural networks (CNNs) have shown their success in single image denoising, as summarized in a comprehensive survey [13]. The following methods require a clean target image to train the CNNs. Initially, they were trained on synthetically added i.i.d. Gaussian noise; however, that practice fails to generalize to real noisy images [27]. Later, datasets of real noisy images with their clean counterparts were collected (SIDD [1], RENOIR [2]), and are commonly used for denoising evaluation. As shown in [34], learning the noise distribution of real images via a GAN, which is used to synthesize noise for a denoising network, significantly improves performance. DnCNN [38] predicts the residual image (the noise) of a noisy image. Many works improved the performance by choosing better architectural components: SADNet [6] proposes a deformable convolution to adjust for different textures and noise patterns, HINet [9] introduces an instance normalization block for image restoration tasks, and NAFNet [8] suggests replacing nonlinear activation functions by element-wise multiplication between two sets of channels. Some methods iteratively solve the problem in a multi-scale architecture or in multiple iterations: MPRNet [37] proposes a supervised attention block between the different stages to leverage the restored image features at different scales. Somewhat similarly to our work, FFDNet [39] employs a spatially-varying noise map, and is able to remove non-uniform noise. However, the architecture of FFDNet relies on downsampling and channel re-shuffle before applying a CNN to the image, which is different from the proposed approach.
Unlike the above works, which require clean target images, another line of works focuses on unsupervised or self-supervised solutions. According to N2N [19], the expected value of minimizing the objective with respect to clean samples is similar to minimizing it with respect to different noisy samples, and therefore clean images are not necessary. Further works designed different ways for data augmentation that achieve the same purpose. N2S [3], Noisier2noise [22], R2R [25], neighbor2neighbor [16] use different subsamples of the image as instances of the noisy image. IDR [41] added noise to the noisy image to create a noisier version which can be supervised by the noisy image.
#### 2.2.1 Raw single image denoising / low light methods
Some methods take into account the image formation model and aim to denoise the raw image, where the pixel values directly relate to the number of incident photons and the noise can be better modeled. To tackle the task of low-light imaging directly, SID [7] introduces a dataset of raw short-exposure low-light images paired with corresponding long-exposure reference images. They train an end-to-end CNN to perform the majority of the steps of the image processing pipeline: color transformations, demosaicing, noise reduction, and image enhancement. Brooks [5] present a technique to "unprocess" the image processing pipeline in order to synthesize realistic raw sensor images, which can be further used for training. Wei [35] accurately formulate the noise formation model based on the characteristics of CMOS sensors. Punnappurath [28] suggest a method that generates nighttime images from day images. Similarly, in the field of low light video, Monakhova [21] learn to generate nighttime frames of video.
### Diffusion models
The usage of diffusion models for generative tasks has grown rapidly over the past years, and such models have shown great success in text-to-image generation (Imagen [31], DALL-E 2 [29]). Denoising is a key component of the diffusion process, offering a strong image prior for both restoration and generative tasks. SR3 [32] adapts denoising diffusion probabilistic models to solve the super-resolution task, conditioned on the low-resolution image. Palette [30] extended this idea to a general framework for image-to-image translation tasks, including colorization, inpainting, uncropping, and JPEG restoration. In our evaluation, we compare to this method as a baseline, where the noisy image is given as a prior, but without modifying the diffusion formulation. Kawar [18, 17] solve linear inverse image restoration problems by sampling from the posterior distribution, based on a pre-trained denoising diffusion model. This approach is limited to linear problems, whereas a realistic noise model is signal-dependent and not additive Gaussian. In a concurrent work, Xie [36] redefine the diffusion process to implement generative image denoising; however, it is defined for different types of noise (Gaussian, Poisson) separately, while a realistic noise model is a combination of both.
## 3 Method
Our main goal in this work is to leverage the powerful denoising-based diffusion framework for noise removal. To this end, we adapt the framework to enable the noisy input image to be considered as a time step in the diffusion process. Accounting for the more complex nature of real camera noise, we propose a diffusion formulation that unifies realistic image noise with that of the diffusion process. In Section 3.1, we describe the camera noise model that we use, and in Sections 3.2-3.3 we propose a diffusion process that can incorporate such noisy images as its samples.
For a more realistic modeling of noisy images, we consider a raw-sensor noise model, which is not uniform across the image. This means that we cannot pair a step in the diffusion process with a single point in time. Instead, we pair each diffusion step with a spatially varying _time map_, where each pixel may have a different time encoding (Section 3.3). The training and the inference schemes are modified to support such time maps, as described in Section 3.4.
In particular, the starting point of the diffusion process is set to the noisy input image, and not to an i.i.d Gaussian noise. This has the additional advantage of significantly reducing the number of diffusion steps (\(\sim 50\) times fewer steps in our experiments), see Fig. 2. However, using the same noisy input image as both the condition and the starting point of the diffusion process, introduces another challenge: there is a correlation between the condition and the samples along the reverse diffusion process at inference time, a correlation that is not reflected in the training scheme. We address this challenge in Section 3.5, give a theoretical analysis of this phenomenon and propose a modified training scheme to overcome it.
Notation and setting: Below we use small italics (e.g., \(t\), \(\lambda\)) to denote scalars, while bold roman letters (e.g., \(\mathbf{x}\), \(\mathbf{y}\)) denote vectors. Images and other per-pixel maps are represented as vectors in \(\mathbb{R}^{H\times W\times 3}\). In particular, \(\boldsymbol{\epsilon}\) is a noise vector with the same dimensions, whose elements are sampled from \(\mathcal{N}(0,1)\). The operations \(\mathbf{a}\cdot\mathbf{b}\) and \(\mathbf{a}/\mathbf{b}\) between two vectors \(\mathbf{a}\) and \(\mathbf{b}\) denote element-wise multiplication and division, respectively.
### Noise model
We adopt a noise model that is commonly used for sensor raw data [20, 26]. The noisy version of a
Figure 2: **Top:** standard forward diffusion process (2). The reverse denoising process starts from complete noise (left) and iterates for \(1000\) time-steps. **Bottom:** our diffusion formulation enables starting the reverse diffusion process from the noisy input image, requiring \(\sim\!20\) iterations.
clean linear image \(\mathbf{x}_{0}\in\mathbb{R}^{H\times W\times 3}\) is given by:
\[\begin{split}&\mathbf{y}=\mathbf{x}_{0}+\boldsymbol{\sigma_{p}} \cdot\boldsymbol{\epsilon_{y}},\quad\boldsymbol{\epsilon_{y}}\sim\mathcal{N} \left(\mathbf{0},\mathbf{I}\right),\\ &\boldsymbol{\sigma_{p}}\triangleq\sqrt{\sigma_{r}^{2}+\sigma_{s }^{2}\mathbf{x}_{0}},\end{split} \tag{1}\]
where \(\boldsymbol{\epsilon_{y}}\in\mathbb{R}^{H\times W\times 3}\) and \(\boldsymbol{\sigma_{p}}\) is the per-pixel standard deviation of the noise, defined as a combination of \(\sigma_{r}\), the standard deviation for the _signal-independent_ read-noise, and \(\sigma_{s}\) for the _signal-dependent_ shot-noise. See Section 4.1 for further details regarding our experiments.
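As an illustration, the noise model of Eq. (1) can be simulated as follows; this is a sketch, and the image value range \([0,1]\) and array shapes are our assumptions.

```
import numpy as np

def add_sensor_noise(x0, sigma_r, sigma_s, rng=None):
    """Synthesize y = x0 + sigma_p * eps with per-pixel
    sigma_p = sqrt(sigma_r^2 + sigma_s^2 * x0), following Eq. (1).

    x0: clean linear image of shape (H, W, 3) with values in [0, 1].
    """
    rng = rng or np.random.default_rng()
    sigma_p = np.sqrt(sigma_r**2 + sigma_s**2 * x0)   # signal-dependent std
    eps = rng.standard_normal(x0.shape)
    return x0 + sigma_p * eps, sigma_p
```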
### Diffusion process definition
Given a clean image \(\mathbf{x}_{0}\) and a noise schedule \(\left\{\beta_{t}\right\}_{t=1}^{T}\), the standard diffusion process of length \(T\) is given by:
\[\begin{split}& q\left(\mathbf{x}_{t}|\mathbf{x}_{t-1}\right)= \mathcal{N}\left(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\mathbf{x}_{t-1},\beta_{t} \mathbf{I}\right),\\ &\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}=\prod_{i=1}^{t}(1- \beta_{i}),\\ & q\left(\mathbf{x}_{t}|\mathbf{x}_{0}\right)=\mathcal{N}\left( \mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},(1-\bar{\alpha}_{t}) \mathbf{I}\right).\end{split} \tag{2}\]
Note that this formulation defines a stationary process, i.e., the variance of \(\mathbf{x}_{t}\) along the process is constant (assuming \(\mathbb{E}(\mathbf{x}_{0})=0\) and \(\mathrm{Var}\left(\mathbf{x}_{0}\right)=1\)). As the noise level increases, the stationary nature of \(\mathbf{x}_{t}\) is achieved by attenuating the clean signal by a factor of \(\sqrt{\bar{\alpha}_{t}}\). To be able to refer to \(\mathbf{y}\) as a sample from the diffusion process, we need to overcome two obstacles. The first issue is that in our noise model, the signal is not attenuated, and the second is that our noise model uses a spatially-varying noise distribution. We first resolve the former issue and modify the diffusion process to be non-stationary, by considering a process which does not attenuate the signal:
\[\begin{split}& q\left(\mathbf{x}_{t}|\mathbf{x}_{t-1}\right)= \mathcal{N}\left(\mathbf{x}_{t};\mathbf{x}_{t-1},\eta_{t}\mathbf{I}\right),\\ & q\left(\mathbf{x}_{t}|\mathbf{x}_{0}\right)=\mathcal{N}\left( \mathbf{x}_{t};\mathbf{x}_{0},\gamma_{t}\mathbf{I}\right),\\ &\gamma_{t}=\sum_{i=1}^{t}\eta_{i},\end{split} \tag{3}\]
for some noise schedule \(\left\{\eta_{t}\right\}_{t=1}^{T}\). This process, where \(\mathrm{Var}\left(\mathbf{x}_{t}|\mathbf{x}_{0}\right)\rightarrow\infty\) as \(t\rightarrow\infty\), is termed "Variance Exploding" by Song _et al_. [33].
We wish to keep the noise schedule similar to the original DDPM schedule [15]. Hence we choose the noise schedule \(\eta_{t}\) so that \(\gamma_{t}\) will be a scaled version of \(1-\bar{\alpha}_{t}\), that is, \(\gamma_{t}=\lambda\left(1-\bar{\alpha}_{t}\right)\) for some \(\lambda\). This implies,
\[\eta_{t}=\lambda\beta_{t}\Pi_{i=1}^{t-1}(1-\beta_{i}). \tag{4}\]
This non-stationary forward process yields a reverse process of the same form as in standard diffusion,
\[\begin{split}& q\left(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0} \right)=\mathcal{N}\left(\mathbf{x}_{t-1};\boldsymbol{\tilde{\mu}_{t}}\left( \mathbf{x}_{t},\mathbf{x}_{0}\right),\tilde{\eta}_{t}\mathbf{I}\right),\\ &\boldsymbol{\tilde{\mu}_{t}}\left(\mathbf{x}_{t},\mathbf{x}_{0} \right)=\frac{\gamma_{t-1}}{\gamma_{t}}\mathbf{x}_{t}+\frac{\eta_{t}}{\gamma_{t }}\mathbf{x}_{0},\\ &\tilde{\eta}_{t}=\frac{\gamma_{t-1}\eta_{t}}{\gamma_{t}}.\end{split} \tag{5}\]
The fact that our noise model does not attenuate the clean signal \(\mathbf{x}_{0}\) is reflected in the expression for \(\boldsymbol{\tilde{\mu}_{t}}\), that lacks the multiplication by the attenuation factor \(\alpha,\bar{\alpha}\). More details can be found in the supplementary materials.
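A compact sketch of the schedule of Eqs. (3)-(4) and of one reverse step of Eq. (5) is given below; \(\hat{\mathbf{x}}_0\) stands in for the (unknown) clean image, and the array indexing convention is ours.

```
import numpy as np

def make_schedule(betas, lam):
    """gamma_t = lam * (1 - alpha_bar_t) for t = 0..T (gamma_0 = 0); eta_t = gamma_t - gamma_{t-1}."""
    alpha_bar = np.concatenate([[1.0], np.cumprod(1.0 - betas)])
    gamma = lam * (1.0 - alpha_bar)   # cumulative noise variance, gamma[0] = 0
    eta = np.diff(gamma)              # per-step variance, eta[t-1] = eta_t (Eq. 4)
    return gamma, eta

def reverse_step(x_t, x0_hat, t, gamma, eta, rng=None):
    """One posterior step of Eq. (5): q(x_{t-1} | x_t, x_0), with x_0 replaced by x0_hat."""
    rng = rng or np.random.default_rng()
    g_t, g_prev, e_t = gamma[t], gamma[t - 1], eta[t - 1]
    mean = (g_prev / g_t) * x_t + (e_t / g_t) * x0_hat
    var = g_prev * e_t / g_t
    noise = rng.standard_normal(np.shape(x_t)) if t > 1 else 0.0
    return mean + np.sqrt(var) * noise
```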
At inference time, the diffusion process should start with \(\mathbf{x}_{T}=\mathbf{x}_{0}+\sqrt{\lambda}\boldsymbol{\epsilon_{T}},\ \boldsymbol{ \epsilon_{T}}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)\). Note that in our noise model one cannot start the reverse process from pure noise (as done in standard diffusion processes), since the signal is not attenuated to \(0\). However, since our goal is to start the reverse process from the input noisy image, this is not a concern.
### Spatially-variant time embedding
Our noise schedule, Eq. (3), defines a noise level \(\gamma_{t}\) for every integer \(t\) between \(0\) and \(T=1000\). As in standard diffusion models, we can extend the definition of \(\gamma_{t}\) to non-integer \(t\) using interpolation. Thus, given a noise level \(\sigma^{2}\), we can find a time \(t\) at which this noise level is attained. Consider now our camera noise model, Eq. (1). Each pixel \(p\) has a different noise level \(\boldsymbol{\sigma}_{\boldsymbol{p}}^{2}(p)\), and thus a corresponding time value that yields this noise level. The maximum noise level over the three channels defines a time map \(\mathbf{T}^{*}\in\mathbb{R}^{H\times W}\) for which \(\boldsymbol{\gamma}_{\mathbf{T}^{*}(p)}=\max_{c\in\text{R,G,B}}\boldsymbol{ \sigma}_{\boldsymbol{p}}^{2}(p_{c})\). In other words, we think of each pixel as being at its own stage of the diffusion process. Note that the time map \(\mathbf{T}^{*}\) encodes the spatially-varying noise of the entire input image \(\mathbf{y}\). Hence we denote
\[\mathbf{x}_{\mathbf{T}^{*}}\triangleq\mathbf{y},\quad\boldsymbol{\epsilon_{ \mathbf{T}^{*}}}\triangleq\boldsymbol{\epsilon_{y}},\quad\boldsymbol{\gamma}_{ \mathbf{T}^{*}}\triangleq\max_{\text{R,G,B}}\boldsymbol{\sigma}_{\boldsymbol{p}}^{2}. \tag{6}\]
In practice, when presented with a noisy image \(\mathbf{y}\), we do not know the actual noise level \(\boldsymbol{\sigma_{p}}\), even if \(\sigma_{r}\) and \(\sigma_{s}\) are known, since the original clean signal \(\mathbf{x}_{0}\) is not available. Thus, we follow common practice [20] and estimate it using a clipped version of the noisy image, to obtain \(\hat{\mathbf{T}}^{*}\) such that
\[\begin{split}&\boldsymbol{\gamma}_{\hat{\mathbf{T}}^{*}}=\max_{\text{R,G,B}}\boldsymbol{\hat{\sigma}_{\boldsymbol{p}}^{2}},\\ &\boldsymbol{\hat{\sigma}_{\boldsymbol{p}}}=\sqrt{\sigma_{r}^{2}+\sigma_{s}^{2}\,\cdot\,\mathrm{clip}\left(\mathbf{y},0,1\right)}.\end{split} \tag{7}\]
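In practice, the mapping from per-pixel noise variance to a per-pixel time can be obtained by inverting the monotone schedule \(\gamma_t\) with linear interpolation. The sketch below illustrates Eqs. (6)-(7); the function name is our own and `gammas` is the schedule array from the earlier sketch.

```python
# Sketch of Eqs. (6)-(7): per-pixel time map from the estimated noise variance of y.
import numpy as np

def estimate_time_map(y, sigma_r, sigma_s, gammas):
    """y: (H, W, 3) noisy image in the linear domain; returns hat{T}* with shape (H, W)."""
    sigma_p2_hat = sigma_r ** 2 + sigma_s ** 2 * np.clip(y, 0.0, 1.0)   # per-pixel variance estimate (Eq. 7)
    gamma_target = sigma_p2_hat.max(axis=-1)                            # max over R, G, B channels (Eq. 6)
    t_grid = np.arange(0, len(gammas) + 1, dtype=np.float64)            # t = 0 ... T
    gamma_grid = np.concatenate([[0.0], gammas])                        # gamma_0 = 0
    # gamma_t is monotonically increasing, so invert it by interpolation;
    # values above gamma_T are clamped to t = T.
    return np.interp(gamma_target, gamma_grid, t_grid)
```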
A standard diffusion model receives as input both \(\mathbf{x}_{t}\) and a time value \(t\), indicating the signal noise level over the entire image. An embedding vector of the time is then used to apply an affine transformation independently to each pixel feature in \(\mathbf{x}_{t}\). By replacing \(t\) with a spatially-varying time map \(\mathbf{T}^{*}\), and computing a different time embedding per
pixel, we can make the model dependent on the spatially-varying noise level \(\mathbf{\sigma_{p}}\). However, since each pixel can now be at a different stage of the diffusion process, it requires a different number of steps to reach time \(0\). Hence, we need to develop new training and inference schemes to account for this, which are presented below.
### Training and inference schemes
Our diffusion model receives as input a noisy image \(\mathbf{y}\) and a time map \(\mathbf{T}^{*}\). We present training and inference schemes that account for this change. Our algorithm is summarized in Algs. 1 and 2.
Note that the reverse diffusion process, Eq. (5), operates on each pixel independently. Thus, we can use the same reverse process even with a spatially-varying time step \(\mathbf{T}^{*}\). However, each pixel may require a different number of steps before reaching time \(0\). We handle this by stopping the reverse process once a pixel reaches a negative time. In other words, the time map after \(t_{0}\) denoising steps will be \((\mathbf{T}^{*}-t_{0})^{+}\triangleq\max\{\mathbf{T}^{*}-t_{0},0\}\).
During training, given a clean image \(\mathbf{x}_{0}\), we sample \(\sigma_{r}\), \(\sigma_{s}\), and a random noise \(\boldsymbol{\epsilon}_{\mathbf{y}}=\boldsymbol{\epsilon}_{T^{*}}\). The noisy image \(\mathbf{y}\) is then generated according to the noise model Eq. (1), and the estimated induced time map \(\mathbf{\hat{T}}^{*}\) is calculated by Eq. (7). Next, we sample a scalar \(t_{0}\) between 0 and the maximal value of \(\mathbf{\hat{T}}^{*}\), and advance the times of all the pixels by \(t_{0}\) steps, to obtain \(\mathbf{\hat{t}}=(\mathbf{\hat{T}}^{*}-t_{0})^{+}\). We then sample a random Gaussian noise \(\boldsymbol{\epsilon}_{\mathbf{\hat{t}}}\) and construct a sample \(\mathbf{x}_{\mathbf{\hat{t}}}=\mathbf{x}_{0}+\sqrt{\boldsymbol{\gamma}_{\mathbf{\hat{t}}}}\,\boldsymbol{\epsilon}_{\mathbf{\hat{t}}}\) of the diffusion process according to Eq. (3). Note that \(\boldsymbol{\gamma}_{\mathbf{\hat{t}}}\) is a matrix, so the noise level is spatially-varying. The network then tries to predict \(\boldsymbol{\epsilon}_{\mathbf{\hat{t}}}\) from the diffusion sample \(\mathbf{x}_{\mathbf{\hat{t}}}\), the time map \(\mathbf{\hat{t}}\), and the condition image \(\mathbf{y}\).
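A possible implementation of this training-example construction is sketched below (before the correction introduced in Section 3.5). It reuses `estimate_time_map` and the schedule `gammas` from the earlier sketches; `rng` is a NumPy random generator, and the per-pixel noise model \(\boldsymbol{\sigma}_{\boldsymbol{p}}^{2}=\sigma_{r}^{2}+\sigma_{s}^{2}\mathbf{x}_{0}\) is assumed, consistently with Eq. (7).

```python
# Sketch of one (naive) training example, following the description above.
import numpy as np

def gamma_of(t_map, gammas):
    """Interpolate gamma_t for (possibly fractional, possibly spatial) time values."""
    t_grid = np.arange(0, len(gammas) + 1, dtype=np.float64)
    return np.interp(t_map, t_grid, np.concatenate([[0.0], gammas]))

def make_training_example(x0, sigma_r, sigma_s, gammas, rng):
    eps_y = rng.standard_normal(x0.shape)
    y = x0 + np.sqrt(sigma_r ** 2 + sigma_s ** 2 * x0) * eps_y        # noisy condition image, Eq. (1)
    T_star = estimate_time_map(y, sigma_r, sigma_s, gammas)           # hat{T}*, Eq. (7)

    t0 = rng.uniform(0.0, T_star.max())
    t_map = np.maximum(T_star - t0, 0.0)                              # (hat{T}* - t0)^+
    gamma_t = gamma_of(t_map, gammas)[..., None]                      # spatially-varying variance

    eps_t = rng.standard_normal(x0.shape)
    x_t = x0 + np.sqrt(gamma_t) * eps_t                               # diffusion sample, Eq. (3)
    return y, t_map, x_t, eps_t         # network input: (y, x_t, t_map); regression target: eps_t
```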
At inference time, we get a noisy image \(\mathbf{y}\) and its \(\sigma_{r},\sigma_{s}\). First, we estimate the time map \(\mathbf{\hat{T}}^{*}\) by Eq. (7). We feed the network with \(\mathbf{y}\) as the condition image, \(\mathbf{\hat{T}}^{*}\) as the time map, and \(\mathbf{y}=\mathbf{x_{T^{*}}}\) as the diffusion sample. The network outputs an estimate of the noise \(\mathbf{\epsilon_{\mathbf{\hat{T}}^{*}}}\), from which we can compute an estimate of the original image \(\mathbf{\hat{x}_{0}}\). We then use the reverse process Eq. (5) (replacing \(\mathbf{x}_{0}\) by \(\mathbf{\hat{x}_{0}}\)) to produce the next sample. Additionally, we promote the time map \(\mathbf{\hat{T}}^{*}\) by one step, _i.e_., we replace \(\mathbf{\hat{T}}^{*}\) with \(\mathbf{\hat{t}}=(\mathbf{\hat{T}}^{*}-1)^{+}\). We then run the network with our new sample and the promoted \(\mathbf{\hat{t}}\) (using the same condition \(\mathbf{y}\)), and continue in this manner until we reach \(\mathbf{\hat{t}}=0\) for all pixels.
Explicitly, the reverse process is performed by sampling a Gaussian noise \(\mathbf{\epsilon_{\mathbf{\hat{t}}-1}}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)\) and computing
\[\mathbf{x_{\mathbf{\hat{t}}-1}}=\frac{\mathbf{\gamma_{\mathbf{\hat{t}}-1}}}{\bm {\gamma_{\mathbf{\hat{t}}}}}\mathbf{x_{\mathbf{\hat{t}}}}+\frac{\mathbf{\eta_{ \mathbf{\hat{t}}}}}{\mathbf{\gamma_{\mathbf{\hat{t}}}}}\mathbf{\hat{x}_{0}}+\sqrt {\frac{\mathbf{\gamma_{\mathbf{\hat{t}}-1}}\mathbf{\eta_{\mathbf{\hat{t}}}}}{\mathbf{ \gamma_{\mathbf{\hat{t}}}}}}\mathbf{\epsilon_{\mathbf{\hat{t}}-1}}, \tag{8}\]
where in \(\mathbf{\hat{t}}-1\) we clip the negative values, and \(\mathbf{\gamma_{\mathbf{\hat{t}}}},\mathbf{\gamma_{\mathbf{\hat{t}}-1}},\mathbf{\eta_{ \mathbf{\hat{t}}}}\) are all vectors of the same dimension as \(\mathbf{x}_{0}\), whose values depend on the initial noise in the image. To avoid further denoising of pixels whose time has reached 0, we override their values after the prediction by the network.
```
Inputs: y, sigma_r, sigma_s
  Calculate T* from y by Eq. (7)
  Set t = T*, x_t = y
  while any(t > 0) do
      x0_hat = SVNR(y, x_t, t)
      Sample x_{(t-1)+} by Eq. (8)
      Override pixels that reach (t-1)+ = 0 with the values in x0_hat;
        these values remain fixed for the rest of the process
      Set t = (t-1)+, x_t = x_{(t-1)+}
  end while
```
**Algorithm 2** Inference: diffusion initialized with \(\mathbf{y}\)
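The following sketch spells out this inference loop as NumPy code. It assumes, for brevity, that `model(y, x_t, t_map)` directly returns an estimate of \(\mathbf{x}_{0}\) (in the paper the network predicts the noise, from which \(\mathbf{\hat{x}_{0}}\) is derived), and it reuses `gamma_of` from the earlier sketch.

```python
# Sketch of the per-pixel reverse step of Eq. (8) with time clipping and pixel freezing.
import numpy as np

def svnr_inference(model, y, T_star, gammas, rng):
    t_map = T_star.copy()                       # every pixel starts at its own time
    x_t = y.copy()                              # diffusion initialized with the noisy image
    while (t_map > 0).any():
        active = (t_map > 0)[..., None]         # pixels that still need denoising
        x0_hat = model(y, x_t, t_map)
        t_next = np.maximum(t_map - 1.0, 0.0)
        g_t = np.maximum(gamma_of(t_map, gammas), 1e-12)[..., None]   # guard against division by 0
        g_prev = gamma_of(t_next, gammas)[..., None]
        eta = np.maximum(g_t - g_prev, 0.0)
        eps = rng.standard_normal(x_t.shape)
        x_next = (g_prev / g_t) * x_t + (eta / g_t) * x0_hat \
                 + np.sqrt(g_prev * eta / g_t) * eps                  # Eq. (8)
        x_next = np.where((t_next <= 0)[..., None], x0_hat, x_next)   # override pixels reaching t = 0
        x_t = np.where(active, x_next, x_t)                           # frozen pixels keep their values
        t_map = t_next
    return x_t
```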
### Noise correlation in the reverse process
Next, we discuss a phenomenon that arises when we initialize the process with the noisy input image _and_ condition the process on it. The key observation is that throughout the reverse diffusion process, there is a correlation between the noise component of the diffusion sample \(\mathbf{x_{\mathbf{\hat{t}}}}\) and the noise component of the condition image \(\mathbf{y}=\mathbf{x_{T^{*}}}\).
When initializing the diffusion process with \(\mathbf{x_{T^{*}}}\), the first reverse step yields a sample \(\mathbf{x_{T^{*}-1}}\) derived from Eq. (5). This sample is less noisy than \(\mathbf{x_{T^{*}}}\) and can be explicitly written (given \(\mathbf{x}_{0}\)) as
\[\mathbf{x_{T^{*}\!-\!1}}\!=\!\frac{\mathbf{\gamma_{\mathbf{T^{*}\!-\!1}}}}{\mathbf{ \gamma_{\mathbf{T^{*}}}}}\mathbf{x_{T^{*}}}+\frac{\mathbf{\eta_{T^{*}}}}{\mathbf{\gamma_ {\mathbf{T^{*}}}}}\mathbf{x}_{0}+\sqrt{\frac{\mathbf{\gamma_{\mathbf{T^{*}\!-\!1}} \mathbf{\eta_{T^{*}}}}}{\mathbf{\gamma_{\mathbf{T^{*}}}}}}\mathbf{\epsilon_{\mathbf{T^{*}\!- \!1}}}. \tag{9}\]
Using Eq. (1) it can be rewritten as a summation of \(\mathbf{x}_{0}\) and an additional noise term, which is a linear combination between the noise \(\mathbf{\epsilon_{\mathbf{T^{*}}}}\) and the new sampled noise term \(\mathbf{\epsilon_{\mathbf{T^{*}\!-\!1}}}\),
\[\mathbf{x_{T^{*}\!-\!1}}=\mathbf{x}_{0}+\frac{\boldsymbol{\gamma_{T^{*}\!-\!1}}}{\sqrt{\boldsymbol{\gamma_{T^{*}}}}}\boldsymbol{\epsilon_{T^{*}}}+\sqrt{\boldsymbol{\gamma_{T^{*}\!-\!1}}\left(1-\frac{\boldsymbol{\gamma_{T^{*}\!-\!1}}}{\boldsymbol{\gamma_{T^{*}}}}\right)}\boldsymbol{\epsilon_{T^{*}\!-\!1}}. \tag{10}\]
After \(t_{0}\) inference steps, the time map is \(\mathbf{t}=(\mathbf{T}^{*}-t_{0})^{+}\) and \(\mathbf{x_{t}}\) can be written as
\[\begin{split}\mathbf{x_{t}}&=\mathbf{x}_{0}+\frac{ \boldsymbol{\gamma_{t}}}{\sqrt{\boldsymbol{\gamma_{T^{*}}}}}\boldsymbol{ \epsilon_{T^{*}}}+\sqrt{\boldsymbol{\gamma_{t}}\left(1-\frac{\boldsymbol{ \gamma_{t}}}{\boldsymbol{\gamma_{T^{*}}}}\right)}\boldsymbol{\epsilon_{t}},\\ &=\mathbf{x}_{0}+\sqrt{\boldsymbol{\gamma_{t}}}\tilde{\boldsymbol {\epsilon}_{t}}.\end{split} \tag{11}\]
The full derivation can be found in the supplementary materials. The modified noise \(\tilde{\boldsymbol{\epsilon}_{t}}\) is a linear combination of the initial noise \(\boldsymbol{\epsilon_{T^{*}}}\) and another i.i.d. noise term \(\boldsymbol{\epsilon_{t}}\),
\[\tilde{\boldsymbol{\epsilon}_{t}}=\sqrt{\frac{\boldsymbol{\gamma_{t}}}{\boldsymbol{\gamma_{T^{*}}}}}\boldsymbol{\epsilon_{T^{*}}}+\sqrt{1-\frac{\boldsymbol{\gamma_{t}}}{\boldsymbol{\gamma_{T^{*}}}}}\boldsymbol{\epsilon_{t}}. \tag{12}\]
This relationship describes the correlation between \(\tilde{\boldsymbol{\epsilon}_{t}}\), the noise component of the diffusion sample \(\mathbf{x_{t}}\), and \(\boldsymbol{\epsilon_{T^{*}}}\), the noise component of the condition image \(\mathbf{y}=\mathbf{x_{T^{*}}}\).
Because of the above correlation, at train time the network sees a different distribution than at inference time. During training, the noise of the diffusion sample \(\mathbf{x_{t}}\) consists entirely of noise sampled independently of \(\boldsymbol{\epsilon_{T^{*}}}\). Hence, at train time, the \(\mathbf{x_{t}}\) and \(\mathbf{y}\) presented to the network are two independent degradations of the true signal \(\mathbf{x}_{0}\). This effect is made clearer when one considers the first step (_i.e_., \(t_{0}=0\)). While at train time the network sees two independent samples of \(\mathbf{x}_{0}\) noised with \(\boldsymbol{\sigma_{p}}\), at inference time the two images are the same.
Indeed, looking at the progression of the validation SSIM during training in Fig. 3, we see a sudden drop in quality, which can be explained by the network learning to exploit its two uncorrelated inputs, a strategy that does not generalize to the inference process.
A naive solution to this problem would be to drop the conditioning entirely; however, our ablation study shows that this yields deteriorated results. The experiments suggest that this degradation stems mainly from the clipping of negative values, which violates the noise model.
Thus, we choose to pursue a different approach and modify the training scheme to explicitly account for this correlation. Specifically, we propose to sample \(\mathbf{x_{t}}\) during training according to Eq. (11), in order to simulate a distribution of inputs that is similar to that of inference time. As noted above, a special case of this noise correlation is when \(t_{0}=0\) and \(\mathbf{y}=\mathbf{x_{T^{*}}}\). We increase the probability of those cases to \(1\%\) of the training iterations.
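The correlation-aware sampling of Eqs. (11)-(12) amounts to mixing the noise realization of the condition image into the diffusion sample. A hedged sketch is given below; it reuses `estimate_time_map` and `gamma_of` from the earlier sketches and handles the 1% special case explicitly.

```python
# Sketch of the correlation-aware training-sample construction (Eqs. 11-12).
import numpy as np

def make_correlated_example(x0, sigma_r, sigma_s, gammas, rng, p_start=0.01):
    eps_y = rng.standard_normal(x0.shape)
    y = x0 + np.sqrt(sigma_r ** 2 + sigma_s ** 2 * x0) * eps_y
    T_star = estimate_time_map(y, sigma_r, sigma_s, gammas)

    if rng.uniform() < p_start:                  # ~1% of iterations start exactly at x_{T*} = y
        return y, T_star, y, eps_y               # target noise is eps_y (= eps_{T*} by Eq. 6)

    t0 = rng.uniform(0.0, T_star.max())
    t_map = np.maximum(T_star - t0, 0.0)
    g_t = gamma_of(t_map, gammas)[..., None]
    g_T = np.maximum(gamma_of(T_star, gammas), 1e-12)[..., None]
    ratio = np.clip(g_t / g_T, 0.0, 1.0)

    eps_new = rng.standard_normal(x0.shape)
    eps_tilde = np.sqrt(ratio) * eps_y + np.sqrt(1.0 - ratio) * eps_new   # Eq. (12)
    x_t = x0 + np.sqrt(g_t) * eps_tilde                                   # Eq. (11)
    return y, t_map, x_t, eps_tilde              # the target noise now shares a component with y
```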
## 4 Results
We test our method on natural images from the ImageNet dataset [11], corrupted by simulated noise that was generated by our noise model (Eq. (1)). For training we use the full training set of ImageNet, and for evaluation we use a subset of 2000 images from the ImageNet validation set.
We compare our results to a strong diffusion baseline, based on the framework of [32, 30], that was trained to solve the task of image denoising (conditioned on the noisy image), in addition to a state-of-the-art single image denoising method [9]. We report quantitative PSNR, SSIM, LPIPS [40] and FID [14] metrics for all of the models and datasets. While the former three metrics are used to compare pairs of images, the FID metric is used to compare entire distributions. We include this metric to assess the overall similarity between the distribution of the ground truth clean images and the distribution of the denoised results.
### Data and implementation details
Noise simulation:The noise model in Eq. (1) is defined with respect to linear images. Hence, we first "linearize" the images by applying inverse gamma-correction and inverse white level. For white level values, during training we sample a value in the range \([0.1,1]\), and use \(0.5\) during validation.
We train the network on a range of values for \(\sigma_{r},\sigma_{s}\) and evaluate the method on fixed gain levels of an example camera, defined in [20]. Following [26], we consider a wider training region and higher gain levels in our evaluation. See Fig. 4 for the specific values used during training and evaluation.
To make the noisy images more realistic, we further clip the images at \(0\) after the addition of noise, as negative values are not attainable in real sensors. Our network seems to overcome this discrepancy between the theoretical model and the data distribution we use in practice. We do not clip the image at higher values, as it can be adjusted with exposure time. We use crops of \(256\times 256\) for training and a set of \(2000\) images for validation, cropped to the maximum square and resized to \(1024\times 1024\). The noise is added after the resizing, so we do not change the noise distribution.
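The noise simulation described above can be sketched as follows. The power-law linearization with exponent 2.2 and the multiplication by the sampled white level are assumptions made for illustration only; the paper states that inverse gamma-correction and an inverse white level are applied before adding the noise of Eq. (1) and clipping at zero.

```python
# Sketch of the simulated camera-noise pipeline (linearize, add noise, clip at 0).
import numpy as np

def simulate_noisy(img_srgb, sigma_r, sigma_s, white_level, rng, gamma=2.2):
    """img_srgb in [0, 1]. The power-law linearization and the way the white level
    is applied are assumptions, not taken from the paper."""
    x0 = (img_srgb ** gamma) * white_level                       # "linearized" clean signal
    sigma_p = np.sqrt(sigma_r ** 2 + sigma_s ** 2 * x0)          # signal-dependent std, Eq. (1)
    y = x0 + sigma_p * rng.standard_normal(x0.shape)
    return np.maximum(y, 0.0), x0    # clip at 0: negative values are not attainable in real sensors
```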
Implementation details:Before being fed into the network, the input noisy images are scaled to occupy the full range of \([-1,1]\) to match the diffusion model's assumptions.
Figure 3: SSIM of validation during training. The standard training scheme (light blue) cannot restore the signal. Initializing the diffusion with the noisy image also in training (orange) partially solves the problem, but over time the network utilizes the two realizations of the noise (from the conditioned image and the diffusion sample) that are not available during inference. Our training scheme (purple) that relies on Eq.(11) yields stable training.
The noise standard deviation is scaled accordingly. The input to the network has \(6\) channels: \(3\) RGB channels of the noisy image \(\mathbf{y}\) (condition) and \(3\) RGB channels of the sample in the diffusion process \(\mathbf{x_{t}}\). In addition, the network is also given as input the spatially-varying time map, which is computed from the known noise parameters \(\sigma_{r},\sigma_{s}\). At inference time, the sample of the diffusion process is initialized with the noisy image \(\mathbf{y}\) and the estimated \(\hat{\mathbf{T}}^{*}\).
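A tiny sketch of this input assembly is given below; the exact scaling to \([-1,1]\) (a fixed affine map of images assumed to lie in \([0,1]\)) is our assumption.

```python
# Sketch of the 6-channel network input plus the per-pixel time map.
import numpy as np

def assemble_network_input(y, x_t, t_map):
    """y, x_t: (H, W, 3) images; t_map: (H, W) per-pixel times."""
    to_pm1 = lambda im: 2.0 * im - 1.0                              # assumed [0,1] -> [-1,1] scaling
    x_in = np.concatenate([to_pm1(y), to_pm1(x_t)], axis=-1)        # (H, W, 6) image input
    return x_in, t_map        # t_map is turned into a per-pixel time embedding inside the model
```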
We fine-tune a fully-convolutional version of the Imagen model [31], disregarding the text components and conditioning it on the degraded input image, as done in [30, 32]. We use \(\left\{\beta_{t}\right\}_{t=1}^{T}\) that are linearly spaced in the range \([0.02,10^{-8}]\) and \(T=1000\) for the standard diffusion in Eq. (2), and \(\lambda=20\) for the modified noise schedule in Eq. (4). We train the network on 8 TPU-v4 chips, for \(900K\) iterations and follow the training optimization of [31], with Adam optimizer and learning rate scheduler with linear warm-up followed by cosine decay. The training phase takes three days.
### Results on ImageNet
We evaluate our method on a subset of \(2000\) images from the ImageNet dataset [11] and report metrics for noise levels corresponding to gains ranging from \(1\) to \(20\). Note that while the inputs to the network are "linearized" images, the metrics are calculated on the reprocessed images, _i.e_., after readjusting the white level and reapplying the gamma correction. As mentioned before, we compare our results to a strong diffusion baseline, as well as to HINet, a state-of-the-art single image denoising method [9]. For a fair comparison, we retrain HINet on the same dataset and noise levels that we used. Quantitative results for the PSNR, SSIM, LPIPS and FID metrics are reported in Fig. 4, as well as the average runtime per example (in seconds).
Compared to the state-of-the-art model, our method (SVNR) shows slightly worse performance in all "pixel-to-pixel" metrics, while achieving a significantly better FID score. On the other hand, the baseline diffusion model outperforms our model in the FID metric but exhibits significantly worse results in all other metrics. This nicely demonstrates how our approach balances the perception-distortion trade-off [4]. We can see that the baseline diffusion model favors realistic images at the expense of lower fidelity to the clean signal, while the state-of-the-art model shows the best fidelity to the signal at the cost of drifting away from the input distribution. In contrast, SVNR manages to keep a relatively high signal fidelity without the significant distribution drift.
This can be further seen in Fig. 5 and Fig. 6, where we showcase denoising results of these three models for several inputs with noise gain of \(16\) (comparisons at other noise levels are included in the supplementary). Even at this relatively high noise level, all three models manage to remove most of the noise. However, the results of HINet suffer from considerable over-smoothing and lack high-frequency details. On the other hand, both SVNR and the baseline diffusion models manage to generate fine details. While the baseline diffusion model generally generates more details than SVNR, it eliminates less noise (top example) and furthermore, occasionally exhibits hallucinations (see the first two examples). We hypothesize that this difference between our method and the baseline stems from fine-tuning the baseline to adapt it to our diffusion noise model, Eq. (3). We conjecture that fine-tuning causes the model to lose some of its prior, instead allowing it to make more effective use of the underlying signal, by using the noisy image as the starting point.
Overall, we see that our method yields comparable performance to the state-of-the-art, while producing more realistic images. At the same time, our method retains more fidelity to the underlying signal and removes more noise than the baseline diffusion approach.
Since the diffusion baseline always starts from complete noise, its runtime is fixed (\(\sim\!22\) seconds), regardless of the noise level in the input image. Starting the diffusion process from the noisy image in SVNR results in a runtime that depends on the noise level of the image, ranging from \(\sim\!3\) seconds down to less than a second for the least noisy images.
Figure 4: Quantitative results for simulated noise across different noise levels. We compare the diffusion baseline, a single image denoising method [9] and our method. The metrics we report are PSNR, SSIM, LPIPS [40] and FID [14]. In addition, average runtimes are presented for the diffusion methods. The noise is simulated using noise model in Eq. (1). During training, the noise parameters are sampled from the blue rectangle. At inference time, we use a set of fixed noise parameters that correspond to various gain levels of an example camera, as described in [20].
Figure 5: Comparison between different denoising methods on images with noise gain of 16.
### Ablation
We validate the importance of different aspects of our approach by the ablation study in Table 1. We compare the results to the baseline diffusion model that is initialized with _complete noise_ and conditioned on the noisy image (denoted A in the table) and to versions where diffusion is initialized with the _noisy input image_ (denoted by B, C). When initializing the diffusion process with the noisy image, we consider unconditioned (B) and conditioned (C) variants.
The _unconditioned_ variants differ in the type of their input images: B1, where the input values are clipped to avoid negative values; and B2, a variant where input images are allowed to have negative values. For the _conditioned_ setup we consider three training schemes: C1, the standard training process, and two versions that try to handle the correlation described in Section 3.5 - C2, a version that enforces the starting point of the diffusion \(\mathbf{x}_{\mathbf{T}^{*}}\) to be equal to the noisy input \(\mathbf{y}\) in \(1\%\) of training iterations; and C3, our full SVNR framework that incorporates Eq. (11). All the ablation experiments are done with gain level 16, and the results are averaged over \(80\) images.
The comparison to the baseline A is discussed in the previous section. The _unconditioned_ version B1 fails to restore the clean signal, mainly because it is not robust to the zero clipped values. When the original noisy image is not available during the process, the prediction of \(\mathbf{x}_{t}\) at each diffusion step is shifted and "loses" the correct intensity levels. This is supported by the comparison with B2.
The standard _conditioned_ version C1 emphasizes the importance of our training scheme that takes into account the
Figure 6: Comparison between different denoising methods on images with noise gain of 16.
correlation between the two sources of noise. In C2, we practically apply Eq. (11) only for the first step of diffusion and only for \(1\%\) of the training iterations (as explained in Section 3.5, this is equivalent to training on samples with \(\mathbf{x_{T^{*}}}=\mathbf{y}\)), which slightly improves the results. However, to achieve good restoration, one must consider the correlation throughout the entire process, which is supported by the improved results achieved by our training scheme C3.
## 5 Conclusions
We have presented a new diffusion-based framework for the task of single image denoising, which leverages the rich natural-image prior learned by generative denoising diffusion models. Our framework adapts denoising diffusion to utilize the noisy input image as both the condition and the starting point of the diffusion process. To enable the integration of a realistic noisy image as a sample in the diffusion process, we have proposed a novel denoising diffusion formulation that admits a spatially-variant time embedding, with supporting training and inference schemes.
We believe that this novel formulation can potentially be applied to any non-uniform noise distribution. Additionally, we have addressed a phenomenon that occurs when initializing and conditioning the diffusion process with the same noisy input image, and have mitigated it with a suitable training scheme. Our qualitative and quantitative results show improved handling of the distortion-perception trade-off, balancing faithful image reconstruction with the generation of realistic fine details and textures. Furthermore, our formulation also significantly reduces the number of required diffusion steps. In the future, we aim to further distill the rich knowledge hidden in the backbone model, and expand the scope and applicability of our approach to complex real-world scenarios.
Denoising diffusion models have recently shown impressive results in generative tasks. By learning rich knowledge from huge collections of training images, these models can gradually transform pure noise into a clean natural image through a sequence of small denoising steps, which at first glance makes them well suited to single-image denoising. However, effectively applying denoising diffusion models to the removal of realistic noise is more challenging, since their formulation is based on additive white Gaussian noise. In this paper, we propose SVNR, a novel formulation of denoising diffusion that assumes a more realistic, spatially-variant noise model. SVNR further adapts the denoising process by using the noisy input as the starting point of denoising. To this end, it allows each pixel to have its own time embedding, and that embedding ...
2308.11291 | Improving Knot Prediction in Wood Logs with Longitudinal Feature
Propagation | The quality of a wood log in the wood industry depends heavily on the
presence of both outer and inner defects, including inner knots that are a
result of the growth of tree branches. Today, locating the inner knots require
the use of expensive equipment such as X-ray scanners. In this paper, we
address the task of predicting the location of inner defects from the outer
shape of the logs. The dataset is built by extracting both the contours and the
knots with X-ray measurements. We propose to solve this binary segmentation
task by leveraging convolutional recurrent neural networks. Once the neural
network is trained, inference can be performed from the outer shape measured
with cheap devices such as laser profilers. We demonstrate the effectiveness of
our approach on fir and spruce tree species and perform ablation on the
recurrence to demonstrate its importance. | Salim Khazem, Jeremy Fix, Cédric Pradalier | 2023-08-22T09:12:11 | http://arxiv.org/abs/2308.11291v1 | # Improving Knot Prediction in Wood Logs with Longitudinal Feature Propagation
###### Abstract
The quality of a wood log in the wood industry depends heavily on the presence of both outer and inner defects, including inner knots that are a result of the growth of tree branches. Today, locating the inner knots require the use of expensive equipment such as X-ray scanners. In this paper, we address the task of predicting the location of inner defects from the outer shape of the logs. The dataset is built by extracting both the contours and the knots with X-ray measurements. We propose to solve this binary segmentation task by leveraging convolutional recurrent neural networks. Once the neural network is trained, inference can be performed from the outer shape measured with cheap devices such as laser profilers. We demonstrate the effectiveness of our approach on fir and spruce tree species and perform ablation on the recurrence to demonstrate its importance.
Keywords:Knot segmentation Outer-Inter relationship prediction ConvLSTM
## 1 Introduction
The distribution of knots within logs is one of the most important factors in the wood processing chain since it determines how the log will be sliced and used. A knot is defined as a piece of a branch that is lodged in a stem and often starts at the stem pith. Knots come in various dimensions, shapes and trajectories inside the trunk; these characteristics often depend on the tree species and environmental factors [15]. In wood processing, knots are considered defects that affect the quality of logs; hence, detecting their features such as position, size and angle of inclination is relevant and crucial for foresters and sawyers. Knowing these characteristics before processing the tree could generate a relative gain of 15-18% in the value of products [2]. Nowadays, predicting the internal density of a tree trunk from bark observation is a complex and tedious task that requires a lot of human expertise or cannot be performed without expensive X-ray machines. In recent years, with the advent and success of deep learning, convolutional
neural networks have achieved great performance on a variety of tasks such as object detection and image classification due to their strong feature extraction capabilities [7, 13]. Compared to traditional methods, data-driven deep learning approaches learn discriminative characteristics from annotated data automatically instead of relying on human engineering. While the era of deep learning has led to significant improvements in several areas of computer vision and natural language processing, there are still only a few papers studying the interest of these approaches for the forestry and wood processing industry. This is due to the lack of open data, but also to the lack of transfer of architectures that have demonstrated their efficiency in computer vision to the specific tasks of the forestry and wood processing industry. In this paper, we propose to explore an original task that does not seem to bear resemblance to a task in another domain: predicting the inner structure from the outer appearance of a wood log. The internal knots of a wood log are a consequence of the growth of the branches of the tree, and there is therefore, at least for some species, a causality between the presence of an inner knot and the growth or scar of an external branch. As we will demonstrate in the paper, the deformation of the outer surface of the tree, which is the consequence of the presence of branches, allows inferring the location and shape of inner knots. Our experiments are carried out on conifers, for which there is a clear relationship between the growth of branches and the knots. However, for other species such as deciduous trees, this relationship is unclear, and the task remains challenging.
To solve the task of predicting the inner knots from the outer contour, we consider convolutional neural networks of the encoder-decoder family, where the encoder extracts features from the contour which are then used to decode the presence of a knot as a binary mask. Regularly spaced contour slices of the tree are provided as input to the network. As the presence of a knot is causally linked with a deformation of the contour due to a branch, inferring a knot requires integrating features from contour slices further away up or down the tree. To propagate these features between different slices, we consider convolutional LSTMs, which are convolutional bidirectional recurrent neural networks [19]. A convolutional recurrent network keeps the spatial structure of the representation and extracts features along the recurrent paths by applying convolutions rather than dense matrix products. This has the benefit of reducing the cost of the network.
Figure 1: The recurrent neural network involves a recurrent encoder and feedforward decoder. The context along the slice dimension is propagated with convolutional LSTMs.
In our task, this makes sense because a knot progressively diffuses within the wood as one moves along the longitudinal axis of the tree. That progressive diffusion implies that relevant features can be extracted locally, without having to resort to longer-range interactions. Finally, given that knots have various shapes and diffuse across a varying number of slices, using LSTMs lets the neural network learn how many slices need to be integrated to properly recover the shape of the knot. In summary, the main contribution of our work lies in two parts:
* we propose to address an original machine learning task that is also valuable for the forestry industry, namely, the prediction of inner defects given observations of the outer deformation of a wood log,
* we demonstrate the efficiency of integrating recurrent connections in the segmentation network to solve this task.
The code used for running all the experiments of this paper are available on the following github repository: [https://github.com/jeremyfix/icvs2023](https://github.com/jeremyfix/icvs2023).
## 2 Related Work
**Semantic segmentation** is a fundamental task in computer vision where the goal is to predict the label of each pixel in an image. Deep learning architectures for this task are typically based on the autoencoder architecture. An autoencoder consists of an encoder and a decoder. The encoder maps the input data to a lower-dimensional latent space representation, while the decoder maps the latent space back to the original input data dimension [20]. In semantic segmentation, the decoder decodes the target labels instead of reconstructing the input. Fully Convolutional Networks (FCN) [14] are an important approach in semantic segmentation and have influenced the design of modern segmentation networks. Other refinements of the encoder-decoder structure, such as U-Net and SegNet, have also been proposed in the literature [18, 1].
**Recurrent Neural Networks** have been introduced to deal with sequence data. They can learn the required size of the temporal window to gather the context required for taking a decision at any given time. The difficulty of integrating and propagating information through time, which is at the root of the fundamental deep learning problem [10] of vanishing/exploding gradients, has led authors to design dedicated memory units. Representatives of this family are the Long Short Term Memory networks (LSTMs) [6, 8] and Gated Recurrent Units networks (GRUs) [3].
**Convolutional LSTM** preserves the convolutional nature of the data [19]. Indeed, the recurrent weights in standard LSTMs involve dense connections and do not exploit the spatial structure of the data they process. Convolutional LSTMs, by using convolutional recurrent connections, preserve the spatial nature of the data and reduce the number of parameters required in the recurrent connections. In the original paper, the convolutional LSTMs have been successfully applied to spatio-temporal sequences for weather forecasting.
In our work, we use an encoder-decoder architecture to predict the knot distribution (binary mask) from the slices of contours of the tree. To propagate encoder features through the slices, the encoder involves recurrent connections. In order to keep the convolutional nature of the representations, the encoder involves convolutional LSTM networks. Alternatively, we could have considered a 3D convolutional encoder, but this would have fixed the size of the slice context necessary to form a prediction. Using LSTMs lets the network learn which contour features influence which other contour features.
## 3 Methodology
### Data Preprocessing
In order to learn the mapping from the contour of trees to the knot distribution, we need aligned pairs of contours and knot masks. To build the inputs and targets, we used the pipelines of [11] for segmenting knots and identifying the contours from X-ray data. Note that even though the pipelines of [11] are used to acquire data from X-ray images, the main objective of our approach is to avoid X-ray scanners and to recover the external geometry from other modalities such as vision cameras or laser profilers. The dataset is built from 27 fir trees and 15 spruce trees, with slices every 1.25 mm for tree sections of 1 meter long on average, which makes a total of 30100 slices. Each image is a \(512\times 512\) slice that is downscaled to \(256\times 256\) for the extraction of the contour and knot segmentation, and further downscaled to \(192\times 192\) for the sequence models presented in this paper. Every tree is sliced in blocks of 40 consecutive slices. In the remainder of the paper, the axis along which the slices are stacked is referred to as the longitudinal axis, or the z-axis for short. In the experiments, we used 18 fir trees and 8 spruce trees for the training set, 4 fir trees and 2 spruce trees for validation, and 5 trees of each species for the test set. Note that each tree is represented by roughly 800 slices.
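As an illustration of this data arrangement, the slices of one tree can be grouped into training volumes as in the sketch below; the array names and the exact grouping into non-overlapping blocks are assumptions, since the paper does not specify them.

```python
# Sketch of grouping per-tree slices into volumes of 40 consecutive 192x192 slices.
import numpy as np

def make_volumes(contour_slices, knot_slices, block=40, size=192):
    """contour_slices, knot_slices: (N, size, size) binary arrays for one tree."""
    n_blocks = len(contour_slices) // block
    x = contour_slices[: n_blocks * block].reshape(n_blocks, block, size, size)
    y = knot_slices[: n_blocks * block].reshape(n_blocks, block, size, size)
    return x.astype(np.float32), y.astype(np.float32)   # inputs (contours) and targets (knot masks)
```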
### Neural network architectures without recurrent connections
We trained two feedforward neural networks based on U-Net [18] and SegNet [1] in order to obtain a baseline to compare with the architecture involving recurrent connections along the z-axis. Although the U-Net and SegNet do not involve recurrent connections, they have been trained on the same data as the recurrent networks, e.g., stacks of slices. This guarantees that training is performed on the same data and that the metrics are computed in the same way. The U-Net encoder involves fewer channels than the original network to fit with the input data. The upsampling along the decoder path is performed using a nearest-pixel policy. Along the decoding path, the encoder features are concatenated with the decoder features. The SegNet encoder involves fewer channels and convolutional layers than the original network. The number of blocks and channels is reduced with respect to the original SegNet because our inputs are smaller.
### Neural network architectures with recurrent connections
In order to propagate the contextual features of the contours in the encoder, we also consider neural network architectures with recurrent connections along the slice dimension (the longitudinal axis of the tree). Recurrent connections are implemented with convolutional LSTMs, which allow the network to learn which slices influence the features of other slices. We recall that the knots within a log can be causally linked to the presence of a branch. Instead of a fully connected LSTM, the convolutional LSTM involves fewer parameters by exploiting the spatial structure of the input data. In this paper, we consider recurrent connections only in the encoder and not in the decoder: introducing recurrent connections in the encoder allows the network to propagate contour features through the slices, and our experiments show that this is already sufficient to obtain good performance, while adding recurrence in the decoder as well could be helpful but would come at a higher computational cost. These recurrent connections are bidirectional to allow information to propagate in both directions along the longitudinal axis. The neural network architecture is depicted in Figure 1.
The recurrent encoder is built from 3 consecutive bidirectional ConvLSTM blocks. Every block has as many memory cells as the size of the spatial dimensions times the channel dimension. The input, output, and forget gates compute their values from a convolution with kernel size 3 applied to the "previous" sequence index (here, "previous" refers to the longitudinal z-axis and can follow either the upward or the downward direction, since the LSTMs are bidirectional). We use the same representation depth as for the SegNet, with 32, 48 and 64 channels, and a max-pooling layer is placed after every ConvLSTM layer to spatially downscale the representation by a factor of 2. The decoder is not recurrent and is the same as for our SegNet, namely 3 consecutive blocks with an upsampling (nearest) followed by a \(2\times[Conv2D(3\times 3)-BatchNorm-ReLU]\) block. The final layer is a \(Conv(1\times 1)\) that outputs the unnormalized scores for the classification of every pixel.
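For readers unfamiliar with convolutional LSTMs, the sketch below shows a minimal PyTorch cell and how a bidirectional pass over the slice (z) axis could be organized; the channel counts and other details are illustrative and do not reproduce the exact architecture described above.

```python
# Minimal ConvLSTM cell and a bidirectional pass over the slice axis (illustrative only).
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # one convolution produces the input, forget, output gates and the cell candidate
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = torch.chunk(gates, 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, (h, c)

def run_bidirectional(cell_fw, cell_bw, volume):
    """volume: (S, B, C, H, W) stack of slices; returns per-slice features (S, B, 2*hid, H, W)."""
    S, B, _, H, W = volume.shape
    zeros = lambda cell: (volume.new_zeros(B, cell.hid_ch, H, W),) * 2
    outs_fw, state = [], zeros(cell_fw)
    for s in range(S):                       # upward pass along the z-axis
        h, state = cell_fw(volume[s], state)
        outs_fw.append(h)
    outs_bw, state = [], zeros(cell_bw)
    for s in reversed(range(S)):             # downward pass along the z-axis
        h, state = cell_bw(volume[s], state)
        outs_bw.append(h)
    outs_bw.reverse()
    return torch.stack([torch.cat([f, b], dim=1) for f, b in zip(outs_fw, outs_bw)])
```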
### Evaluation metrics
Our experiments use different quantitative metrics to evaluate the quality and the performance of our method. For the segmentation task, the ground truth output is usually very sparse and there are many more negatives than positives. Hence, we need evaluation metrics that are not biased by this class imbalance. We used the Dice similarity coefficient (Dice) [5], also known as the F1-score, as the overlap metric, the Hausdorff Distance (HD) [9] as the distance-based metric, and Cohen's Kappa \(\kappa\) [4, 17] as the counting-based metric to evaluate the segmentation results.
The Hausdorff distance complements the Dice similarity because it indicates if false positives are close to a patch of positives or further away, while the Cohen's Kappa indicates the agreement between ground truth and the prediction.
For each pixel, Cohen's Kappa compares the labels assigned by the model with the ground truth and measures the degree of agreement between them. Cohen's Kappa ranges from -1 to 1, where a value of 1 indicates perfect agreement between the prediction and the ground truth, 0 indicates a prediction that is no better than random guessing, and a negative value indicates less agreement than expected by chance. The advantage of using Cohen's Kappa is that it takes into account the possibility of chance agreement and provides a more accurate measure of agreement between prediction and ground truth, which is important in cases where the number of pixels assigned to each class is imbalanced.
In the different equations, we denote by FN, FP, TP, TN respectively the number of false negatives, false positives, true positives and true negatives, where \(\hat{y}\) is the final prediction computed by thresholding the output probability of the network (the threshold is set to 0.5 for all the experiments), and \(y\) is the true value to be predicted (a mask equal to 1 for a pixel belonging to a knot and 0 otherwise). The metrics are always evaluated on the whole volume of 40 slices. As mentioned in Section 3.2, even the feedforward neural networks (SegNet and U-Net) are trained on the volumes. Although these networks do not propagate information through the longitudinal axis, training and evaluating them on the volumes allows for comparable measures (averaged over the same data). The value of the Hausdorff Distance is reported in millimeters. The metrics reported in the results section are averaged over the total number of volumes in the considered fold.
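A possible implementation of the three metrics on binary volumes is sketched below. The Hausdorff distance is computed with Euclidean distance transforms, assuming for simplicity isotropic 1 mm voxels (the in-plane resolution is 1 mm; the 1.25 mm slice spacing is ignored here), which may differ from the exact evaluation protocol of the paper.

```python
# Sketch of Dice, Cohen's Kappa and a symmetric Hausdorff distance on binary volumes.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(pred, gt):
    """pred, gt: boolean arrays of the same shape."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def cohen_kappa(pred, gt):
    n = pred.size
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    p_obs = (tp + tn) / n                                             # observed agreement
    p_exp = (pred.sum() * gt.sum() + (n - pred.sum()) * (n - gt.sum())) / n ** 2   # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp + 1e-8)

def hausdorff_mm(pred, gt, voxel_mm=1.0):
    if not pred.any() or not gt.any():
        return np.inf
    dist_to_gt = distance_transform_edt(~gt)      # distance of every voxel to the closest gt voxel
    dist_to_pred = distance_transform_edt(~pred)
    return voxel_mm * max(dist_to_gt[pred].max(), dist_to_pred[gt].max())
```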
| Method | Dice/F1 \(\uparrow\) | HD \(\downarrow\) |
| --- | --- | --- |
| SegNet | 0.68 | 26.18 |
| U-Net | 0.72 | 47.80 |
| ConvLSTM | **0.84** | **17.34** |

Table 1: Left) Comparison of the segmentation methods on Dice score and HD using the validation fold. Right) Results of the SegNet and ConvLSTM models for a fir tree species. The first row corresponds to the input images, the second row is the associated ground truth and the bottom ones are the predictions. These samples all belong to the validation fold. Every column corresponds to one of 5 slices from different volumes.
### Other experimental hyperparameters
For all the experiments presented in the paper, the optimization followed the same schedule. The networks were trained for 150 epochs with a batch size of 10 for the U-Nets and ConvLSTMs, reduced to 4 for SegNet. The parameters were optimized with Adam [12] and a base learning rate of 0.0001. The loss is the binary cross-entropy. The ConvLSTMs trained for one week; the U-Net and SegNet trained for almost 10 days, using two RTX 3090 GPUs. The experiments were coded either with Tensorflow 2.4 or Pytorch 1.9. We used Tensorboard to track the experiments and log the curves (loss and the different metrics). For regularizing the ConvLSTM encoder-decoder, a dropout layer is inserted between the encoder and the decoder with a probability of 10% of masking a neuron. Following the original papers of U-Net and SegNet, we did not insert dropout layers in these networks. In all the trainings, data augmentation is applied to the input data with a random rotation among 8 possible angles and a horizontal flip with a probability of 0.5.
Footnote 5: [https://www.tensorflow.org](https://www.tensorflow.org)
Footnote 6: [https://www.tensorflow.org/tensorboard](https://www.tensorflow.org/tensorboard)
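The augmentation could look as follows; the assumption that the 8 possible angles are multiples of 45 degrees is ours, as the paper does not list them.

```python
# Sketch of the per-volume augmentation: one of 8 rotation angles plus an optional horizontal flip,
# applied identically to the contour and knot volumes.
import numpy as np
from scipy.ndimage import rotate

def augment(contours, knots, rng):
    """contours, knots: (S, H, W) volumes; the same transform is applied to every slice."""
    angle = rng.choice(np.arange(8) * 45.0)                      # assumed angle set
    contours = rotate(contours, angle, axes=(1, 2), reshape=False, order=0)
    knots = rotate(knots, angle, axes=(1, 2), reshape=False, order=0)
    if rng.uniform() < 0.5:                                      # horizontal flip with probability 0.5
        contours, knots = contours[:, :, ::-1], knots[:, :, ::-1]
    return contours, knots
```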
## 4 Results
In this section, we present both quantitatively and qualitatively the performances of the various models on the prediction of knots. The results on the validation fold and test folds are provided respectively in table 1 and table 2.
For all the metrics, the ConvLSTM model performs better than the neural networks without recurrent connections. Looking only at the Dice and HD metrics, it seems that even without the recurrent connections, both the SegNet and U-Net perform reasonably well on the task. However, we observed qualitatively that this is not really the case, as several knots are not predicted by these models. In that respect, the Kappa metric seems to better reflect the difference in performance between the feedforward and recurrent networks.
Including the context with the recurrent connections in the encoder provides a boost in performance. The quality of the segmentation of the recurrent network is better if we look at the Hausdorff distance, which means that the predicted masks of the ConvLSTM are closer in distance to the ground truth than those of the non-recurrent segmentation networks. The Hausdorff distance is given in millimeters, and we recall that the slices are \(192\times 192\) pixels, which corresponds to \(192\mathrm{mm}\times 192\mathrm{mm}\). Additionally, we computed on the test set the Cohen's Kappa to evaluate the agreement between the predicted masks and the ground truth. The results show that the ConvLSTM achieves a score of 0.41 for fir trees and 0.21 for spruce, indicating respectively moderate and fair agreement, while the non-recurrent networks score lower, with Kappa values between 0.05 and 0.12 for both species, indicating very weak agreement. These findings demonstrate the boost provided by the recurrent networks.
In Table 2, right, we provide the metrics of the ConvLSTM model on the different trees of the test fold, either fir or spruce. The metrics computed on individual trees are consistent with the averaged metrics computed over all the volumes and reported in Table 2, left. However, some spruce trees are particularly challenging. That is the case, for example, for trees 4327 and 4948, which have really unconventional contours, strongly distorted for some reason unknown to the authors. These out-of-distribution contours probably explain why the model fails to correctly predict all the knots. In addition to these averaged metrics, we provide in Figure 3 the distribution of the Cohen's Kappa metric computed on the test fold for both fir and spruce trees, for both the ConvLSTM and SegNet networks. We observe that the ConvLSTM model outperforms the SegNet on all trees of both species, with a clear separation between the distributions. Specifically, the ConvLSTM model achieves nearly a twofold improvement over the SegNet for almost all trees.
As the SegNet performs better on the test set than the U-Net, further comparison will only be made between the SegNet and the ConvLSTM network. To better appreciate the difference in segmentation quality between the SegNet and ConvLSTM networks, the prediction masks of both networks on individual slices from different volumes are given in Table 1, right. In this figure, every column is a slice from a different volume of a fir tree, and consecutive rows represent the input contour, the ground truth, the prediction of the SegNet and the prediction of the ConvLSTM. From these 5 samples, it appears that the SegNet usually underestimates knots and that, sometimes, knots are not predicted at all. For the ConvLSTM, most knots are predicted, although their shape might be overestimated.
| Species | Tree ID | Dice \(\uparrow\) | HD \(\downarrow\) | Kappa \(\uparrow\) |
| --- | --- | --- | --- | --- |
| Fir | 4392 | 0.72 | 14.6 | 0.28 |
| Fir | 4394 | 0.75 | 16.3 | 0.29 |
| Fir | 4396 | 0.78 | 8.0 | 0.52 |
| Fir | 5027 | 0.84 | 6.5 | 0.50 |
| Fir | 5028 | 0.78 | 8.4 | 0.53 |
| Spruce | 4327 | 0.70 | 29.0 | 0.12 |
| Spruce | 4328 | 0.72 | 19.2 | 0.12 |
| Spruce | 4329 | 0.73 | 9.1 | 0.25 |
| Spruce | 4948 | 0.70 | 31.0 | 0.11 |
| Spruce | 4990 | 0.73 | 13.6 | 0.26 |

Table 2: Left) Comparison of the segmentation methods on Dice, HD and Kappa metrics on the test fold. Right) Quantitative results of the ConvLSTM model for the different trees of the test set. These are the same trees as the ones used in the table on the left. The metrics are averaged over all the volumes of the same tree. All the trees had almost the same number of slices (from 800 to 810 slices).
The predictions on some consecutive slices of the same volume of a tree are shown in Figure 2 for a fir tree and a spruce tree, respectively. On the fir tree (left), we see part of the branch emerging from the tree, which is the anchor feature from which a network could learn the presence of an inner knot. Indeed, the ConvLSTM seems able to propagate information through the slices with its recurrent connections, as it predicts the location of a knot on the first of the five slices. It seems unlikely that a network could predict the presence of a knot solely based on the first slice, given that the deformation is barely visible on the contour of this first slice.
Figure 3: Distribution of the Kappa metric on the fir and spruce trees of the test fold for both the SegNet and ConvLSTM neural networks. These are the same trees than used in table 2.
Figure 2: Results of the SegNet and ConvLSTM models for a fir tree (left) or spruce tree specie (right) on 5 consecutive slices from the same volume. The first row corresponds to the input contours, the second row is the associated ground truth, and the two bottom rows are the predictions. These slices belong to a tree from the test set.
Figure 4 shows a 3D representation of the contour of a fir tree from the test set, as well as the ground truth and the prediction produced by the ConvLSTM. The full tree represents a set of 803 slices, and these slices are processed in sequences of 40 slices with a stride of 1. From this full-tree 3D representation, we observe that every knot present in the ground truth is also predicted by the ConvLSTM. It also seems that some knots may not have been correctly labeled as knots in the ground truth. This 3D representation also highlights the consistency of the knot predictions. From this representation, we also see more clearly that there are various types of branch scars, some being clearly visible while others are more like little bumps on the surface of the tree. The smallest scars are certainly the ones for which it is most challenging for the network to infer the location of knots, but even in some of these difficult cases, we see that the ConvLSTM model succeeds in predicting knots.
## 5 Discussion
In this paper, we investigated a machine learning task that is highly valuable for the forestry industry: predicting the location of inner defects, knots, from the outside appearance of the tree. From the machine learning perspective, this task is original. We addressed this problem by training various neural network architectures of the encoder/decoder family, and the most promising tested architectures are the convolutional LSTMs, which benefit from recurrent connections along the longitudinal axis of the tree to propagate contour features reflecting the scars of a branch to the slices where the knots must be predicted. Although, from the averaged metrics (Dice and Hausdorff), the feedforward networks (SegNet, U-Net) seem to perform well, it turns out that their predictions are qualitatively poor. This is not the case for the convolutional LSTM model, which has better metrics and clearly better segmentation of the knots when inspected visually. This discrepancy needs further investigation, and it is unclear why good classification metrics would lead to bad segmentation. The performances of the networks appear to be better contrasted by Cohen's Kappa.
The data used by the proposed machine learning pipeline relies on the work of [11], which extracts the contours and inner knots of tree logs from X-ray scans. X-ray
Figure 4: 3D representation of the ground truth (left) and prediction of the ConvLSTM (right) viewed from the side of the tree or the top on both sides. Generated with Paraview.
scans are only essential to produce the targets but are not used by our proposed approach. The required contours for the model can be obtained using laser scanners. Work is in progress to build a platform with calibrated lasers to extract the contour of a tree log. From a machine learning perspective, the input contours are sparse, but dense representations are used for encoding. There is room for improvement in the encoding and decoding methods. Instead of using a binary mask, encoding the contour as a polygon and utilizing graph neural networks for differentiable feature learning could be more efficient. Furthermore, recent research on neural radiance fields [16] suggests the possibility of encoding a 3D volume as a parameterized function, eliminating the need to explicitly construct the 3D volume of knots. Although these ideas require experimentation, a lightweight recurrent encoding of contours that parameterizes a 3D knot density function holds promise.
## Acknowledgment
This research was made possible with the support from the French National Research Agency, in the framework of the project WoodSeer, ANR-19-CE10-011.
The quality of a wood log in the wood industry depends heavily on both outer and inner defects, in particular inner knots that result from the growth of tree branches. Today, locating the inner knots requires the use of expensive equipment such as X-ray scanners. In this paper, we address the task of predicting the location of inner defects from the outer shape of the log. The dataset is built by extracting both the contours and the knots with X-ray measurements. We solve this binary segmentation task by leveraging convolutional recurrent neural networks. Once the neural network is trained, inference can be performed from the outer shape measured with cheap devices such as laser profilers. We demonstrate the effectiveness of our approach on fir and spruce tree species and perform an ablation on the recurrence to demonstrate its importance.
2310.02919 | Attention-based Multi-task Learning for Base Editor Outcome Prediction | Human genetic diseases often arise from point mutations, emphasizing the
critical need for precise genome editing techniques. Among these, base editing
stands out as it allows targeted alterations at the single nucleotide level.
However, its clinical application is hindered by low editing efficiency and
unintended mutations, necessitating extensive trial-and-error experimentation
in the laboratory. To speed up this process, we present an attention-based
two-stage machine learning model that learns to predict the likelihood of all
possible editing outcomes for a given genomic target sequence. We further
propose a multi-task learning schema to jointly learn multiple base editors
(i.e. variants) at once. Our model's predictions consistently demonstrated a
strong correlation with the actual experimental results on multiple datasets
and base editor variants. These results provide further validation for the
models' capacity to enhance and accelerate the process of refining base editing
designs. | Amina Mollaysa, Ahmed Allam, Michael Krauthammer | 2023-10-04T16:01:06 | http://arxiv.org/abs/2310.02919v2 | # Attention-based Multi-task Learning for Base Editor Outcome Prediction
###### Abstract
Human genetic diseases often arise from point mutations, emphasizing the critical need for precise genome editing techniques. Among these, base editing stands out as it allows targeted alterations at the single nucleotide level. However, its clinical application is hindered by low editing efficiency and unintended mutations, necessitating extensive trial-and-error experimentation in the laboratory. To speed up this process, we present an attention-based two-stage machine learning model that learns to predict the likelihood of all possible editing outcomes for a given genomic target sequence. We further propose a multi-task learning schema to jointly learn multiple base editors (i.e. variants) at once. Our model's predictions consistently demonstrated a strong correlation with the actual experimental results on multiple datasets and base editor variants. These results provide further validation for the models' capacity to enhance and accelerate the process of refining base editing designs.
Machine Learning, Multi-task Learning
We study two formulations of this prediction task. The first is a one-stage model, where we directly learn the probability distribution over all possible outcome sequences for a given target sequence. The second is a two-stage model, where we first estimate the probability of the given target sequence being edited at all, acknowledging that in many cases the editor fails and no change is observed, which is often referred to as the wild-type outcome. We then proceed to estimate the probability distribution over the edited outcomes.
Different editors exhibit varying behaviors on the same target sequences due to factors like binding affinities and editing window sizes, introducing distributional shifts. In response to this challenge, we introduce a multi-task learning framework. Rather than training individual models for each editor, as current models do, we propose a unified model capable of simultaneously accommodating multiple editors.
In this work, we study the different modeling strategies for training machine learning models for the base editor outcome prediction task. We explore the spectrum of modeling choices evaluated on multiple datasets and base editors. A key highlight is the proposed unified multi-task model that is capable of learning from various base editors without necessitating training separate models for each setup. We train our models on six libraries corresponding to the outcomes of six base editors applied on thousands of target sites (Table 1). Our models' predictions show a good correlation with the ground truth across all datasets, demonstrating the potential of machine learning in guiding and exploring the genome editing space.
## 2 Related Work
In recent years, the intersection of deep learning and CRISPR-Cas9 systems has witnessed substantial interest from the bioinformatics community. Researchers have explored the applications of deep learning in predicting various aspects of CRISPR-Cas9 systems, including predicting gRNA activities (Amenen et al., 2021; Xie et al., 2023; Zhang et al., 2021) and editing outcomes for both base editing and prime editing scenarios (Mathis et al., 2023).
Among those, one notable approach is BE-Hive, proposed by (Arbab et al., 2020), which aims to predict base editing outcomes and efficiencies while considering sequence context, PAM compatibility, and cell-type-specific factors. The model employs a gradient boosting tree for predicting overall editing efficiency and a deep conditional autoregressive model for predicting the probability of edited outcome sequences (denoted by bystander efficiency). Similarly, (Song et al., 2020) presented DeepABE and DeepCBE, which are based on convolutional neural networks and model both the overall editing efficiency and the bystander efficiency of adenine and cytosine base editors.
Recently, Marquart et al. (2021) proposed BE-DICT, which predicts per-base editing efficiency (i.e. the editing efficiency of each target base in a sequence) and bystander base-editing efficiency using attention-based deep learning. In a recent comprehensive study, (Kim et al., 2023) developed DeepCas9variants and DeepBEs to predict the editing efficiencies and outcomes of various BEs, taking into account different Cas9 variants. They build on and adapt the models proposed in (Song et al., 2020) (i.e. convolutional networks) to generate predictions for a range of CRISPR-Cas9 systems.
While the surge of interest in applying machine learning to CRISPR-Cas9 systems is clear in recent literature, it is noteworthy that many of these works focus primarily on designing CRISPR-Cas9 systems under various conditions and pay less attention to the models themselves, without offering a holistic and systematic analysis of model design. Given the intricate nature of CRISPR-Cas9 systems and the multitude of model paradigms adopted, deriving concrete conclusions about optimal model design strategies remains elusive. In this context, our work aims to serve as a model-first study that presents base editing outcome prediction through a modeling lens. We focus on model development and provide a systematic analysis of each component of the models, providing a structured framework for problem formulation and model design specifically tailored to the prediction of base editing outcomes. Through this structured examination of these critical aspects, our aim is to lay the groundwork for more informed and refined approaches for using deep learning models to assist the design of base editors.
## 3 Method
**Base editor and related concepts.** Base editors (BEs) are created by fusing the Cas9 protein with DNA-modifying enzymes. They are directed by a 20-base pair guiding RNA molecule (sgRNA) that acts as a GPS to locate and bind to a matching DNA segment known as the _protospacer_. The effectiveness of BEs largely depends on the composition of this protospacer sequence. BEs, in tandem with the sgRNA, can only bind to the DNA if there's a _protospacer adjacent motif_ (PAM) - a sequence consisting of 2-6 nucleotides - present adjacent to the protospacer. This PAM sequence further influences the activity of BEs. There are two primary types of base editors: adenine base editors (ABEs) (presented in Figure 1), which convert adenine (A) to guanine (G), and cytosine base editors (CBEs) that chemically convert cytosine (C) to thymine (T). A detailed description of the base editor is provided in the appendix section 6.1.1.
### Data representation
Assume we have a target (reference) DNA sequence denoted as \(\mathbf{x}_{\text{ref}}=[x_{1},x_{2},\ldots,x_{T}]\) where \(x_{i}\in\{A,C,G,T\}\), and a set of DNA sequences \(\mathbf{X}_{\text{out}}=[\mathbf{x}_{\text{out},1},\mathbf{x}_{\text{out},2},\ldots,\mathbf{x}_{\text{out},M}]\in\mathbb{R}^{M\times T}\) representing corresponding outcomes when a specific base editor is applied to the reference sequence \(\mathbf{x}_{\text{ref}}\). The associated probabilities for these outcomes are given by \(\mathbf{y}=[y_{1},y_{2},\dots,y_{M}]\) where \(y_{i}=P(\mathbf{x}_{\text{out},i}|\mathbf{x}_{\text{ref}})\in[0,1]\), for \(i=1,2,\dots,M\), indicating the likelihood of obtaining outcome \(\mathbf{x}_{\text{out},i}\) through editing of \(\mathbf{x}_{\text{ref}}\). Here, \(T\) is the length of the reference sequence, and \(M\) represents the total number of possible outcomes for a given reference sequence. The count of outcomes can vary depending on the reference sequence. An example of a reference sequence and associated outcome sequences is represented in Figure 2.
In this paper, we use bold uppercase letters for matrices (\(\mathbf{X}\)), bold lowercase letters for vectors or sequences (\(\mathbf{x}\)), and regular non-bold letters for scalars or tokens. We use \(P\) for probability distributions and non-bold uppercase letters (\(X\)) for random variables. To represent the reference sequence, we consider the protospacer, the PAM, and overhangs; here, "overhangs" refers to the adjacent nucleotides on both sides of the protospacer. To declutter the notation, we will mainly use \(\mathbf{x}_{\text{ref}}\) to denote the reference sequence, which could refer to one of these representations: (a) protospacer, (b) protospacer + PAM, or (c) left overhangs + protospacer + PAM + right overhangs, where + is the concatenation operator. Respectively, the outcome sequences are DNA sequences with the same length as the reference sequence and with modifications of the target bases within the protospacer. The outcome sequence identical to the reference sequence (no edits) is referred to as the wild-type.
The training dataset comprises \(N\) pairs, each containing a reference sequence, its associated outcomes, and the corresponding probabilities, denoted as \(D=\{\mathbf{x}_{\text{ref}}^{i},\mathbf{X}_{\text{out}}^{i},\mathbf{y}^{i}\}_ {i=1}^{N}\). For simplicity, when referring to a specific reference sequence and its outputs, we omit the instance-level indexing and use only \(\mathbf{x}_{\text{ref}}\).
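As a concrete illustration of this representation, the following minimal Python sketch one-hot encodes a reference sequence in the protospacer + PAM form; the sequences and helper names are purely illustrative assumptions, not taken from our libraries.

```python
import numpy as np

BASES = "ACGT"  # nucleotide alphabet

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA string as a (len(seq), 4) one-hot matrix."""
    m = np.zeros((len(seq), len(BASES)), dtype=np.float32)
    for t, b in enumerate(seq):
        m[t, BASES.index(b)] = 1.0
    return m

# Representation (b): protospacer + PAM (the sequences below are illustrative only)
protospacer = "GACTAGGCATACCAGTTTAC"   # 20-nt protospacer
pam = "TGGA"                            # 4-nt PAM site
x_ref = one_hot(protospacer + pam)      # shape (24, 4)
print(x_ref.shape)
```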
### Problem formulation
Our objective is to predict the likelihood of potential outcomes resulting from a specific base editor applied to a reference sequence. One approach would be to formulate it as a generative model, where we directly model the conditional distribution \(P(X_{\text{out}}|\mathbf{x}_{\text{ref}})\) so that we can both sample different outcomes for a given reference sequence and calculate the probability of each outcome. However, unlike typical generative models that must learn to generate entire output sequences, our scenario benefits from already knowing a portion of the output sequences. Due to the base editor's specific targeting of A-to-G or C-to-T transformations, a substantial portion of the output sequence remains consistent with the reference sequence, with only a few positions undergoing alteration.
In the inference phase, for a given reference sequence, we can efficiently generate all possible outcomes by considering only the edit combination of target bases (A/G) within the protospacer. By traversing through a range of possible edits, we cover the entire landscape of potential outcome sequences. Therefore, we only need to learn the distribution \(P(X_{\text{out}}|\mathbf{x}_{\text{ref}})\) such that we can evaluate the probability of a specific outcome for a given reference sequence \(P(X_{\text{out}}=\mathbf{x}_{\text{out},i}|\mathbf{x}_{\text{ref}})\).
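This enumeration can be sketched as follows for an adenine base editor, where every subset of A positions in the protospacer may be converted to G; the example protospacer is an arbitrary illustrative string.

```python
from itertools import combinations

def enumerate_abe_outcomes(protospacer: str):
    """All sequences reachable by converting any subset of A's to G's.

    The empty subset reproduces the wild-type outcome.
    """
    a_positions = [i for i, b in enumerate(protospacer) if b == "A"]
    outcomes = []
    for r in range(len(a_positions) + 1):
        for subset in combinations(a_positions, r):
            edited = list(protospacer)
            for i in subset:
                edited[i] = "G"
            outcomes.append("".join(edited))
    return outcomes

print(enumerate_abe_outcomes("GACTAAGC"))   # 2**3 = 8 candidate outcomes for 3 A's
```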
**One-stage Model.** In this setup, we tackle the problem by learning a function \(f(\mathbf{x}_{\text{ref}},\mathbf{x}_{\text{out},i})\rightarrow\hat{y}_{i}\) where \(i=1,\dots,M\), and \(\sum_{i=1}^{M}\hat{y}_{i}=1\), that takes as input the reference sequence and one of its corresponding outcomes and learns to approximate the probability of obtaining that specific outcome. Notably, this function \(f\) characterizes a categorical distribution \(P(X_{\text{out}}=\mathbf{x}_{\text{out},i}|\mathbf{x}_{\text{ref}})\sim Cat(M,\hat{\mathbf{y}})\), where \(\hat{\mathbf{y}}\) is the vector containing probabilities for the \(M\) outcomes. To learn the function \(f\), we propose to use attention-based encoder blocks to learn the encoding of both the reference sequence and the output sequence. Subsequently, we apply a prediction model on the learned encoded representation to output the probability of obtaining the outcome. The network architecture used to learn \(f\) is reported in Figure 3 (B: proportion model). However, a relatively high probability is often associated with the wild-type outcome (\(\mathbf{x}_{\text{out},i}=\mathbf{x}_{\text{ref}}\)), while the probabilities linked to the edited outcome sequences are often very small. This situation presents a challenge when directly modeling \(P(X_{\text{out}}|\mathbf{x}_{\text{ref}})\): the model might easily learn the wild-type probability but struggle with outcomes that have extremely low probabilities.

Figure 1: Adenine base editor

Figure 2: An example of a reference sequence of 20 bases (i.e. nucleotides) and associated outcome sequences when applying the ABEmax base editor. The first row represents the reference (target) sequence, and the second row is the outcome sequence with no modification (i.e. wild-type) with a probability of occurrence of 0.52. The third row represents a possible outcome sequence where the letter A is changed to G at position 5 with a probability of 0.35. The rest of the rows represent all possible changes of the reference sequence targeting letters A to G with their associated probabilities.
### Two-stage model
To address this, we propose a two-stage model where we break down \(P(X_{\text{out}}|\mathbf{x}_{\text{ref}})\) as the product of two probabilities:
\[P(\mathbf{x}_{\text{out},i}|\mathbf{x}_{\text{ref}})=\begin{cases}P(\mathbf{x} _{\text{out},i}|\mathbf{x}_{\text{ref}},\text{edited})P(\text{edited}|\mathbf{ x}_{\text{ref}}),\\ \text{if }\mathbf{x}_{\text{out},i}\neq\mathbf{x}_{\text{ref}}\\ 1-P(\text{edited}|\mathbf{x}_{\text{ref}}),\text{if }\mathbf{x}_{\text{out},i}= \mathbf{x}_{\text{ref}}\end{cases} \tag{1}\]
For a given reference sequence, we first predict the overall efficiency, which is defined in Eq. 2. It provides the probability of the target sequence being edited, \(P(edited|\mathbf{x}_{\text{ref}})\), which in turn gives the probability of the wild-type. Next, we predict the probability of all possible edited outcomes, \(P(\mathbf{x}_{\text{out},i}|\mathbf{x}_{\text{ref}},edited)\). We refer to the first as the _overall efficiency_ and the second as the _proportion_.
\[P(edited|\mathbf{x}_{\text{ref}})=\frac{\text{Sum of the read count of all edited reads for the target}}{\text{Total read count of the target sequence}} \tag{2}\]
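As a small numerical illustration of Eq. (2), assuming made-up read counts for a single target site:

```python
# Illustrative read counts for one target site (all numbers are made up)
reads = {"wild_type": 5200, "A5>G": 3500, "A7>G": 900, "A5>G,A7>G": 400}

total = sum(reads.values())          # total read count of the target sequence
edited = total - reads["wild_type"]  # sum of the read counts of all edited reads
p_edited = edited / total            # Eq. (2): overall editing efficiency
print(p_edited, 1.0 - p_edited)      # 0.48 0.52
```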
We estimate the overall efficiency of the given reference sequence using \(f_{\theta_{1}}(\mathbf{x}_{\text{ref}})\) (Eq. 3), denoted by the overall efficiency model, and subsequently, we predict the conditional probabilities of all non wild-type outcomes using \(f_{\theta_{2}}(\mathbf{x}_{\text{ref}},\mathbf{x}_{\text{out},i})\) (Eq. 4) which we denote by the proportion model.
\[f_{\theta_{1}}(\mathbf{x}_{\text{ref}})=P(edited|\mathbf{x}_{\text{ref}}) \tag{3}\]
where \(P(\text{\emph{wild-type}}|\mathbf{x}_{\text{ref}})=1-P(edited|\mathbf{x}_{ \text{ref}})\)
\[f_{\theta_{2}}(\mathbf{x}_{\text{ref}},\mathbf{x}_{\text{out},i})=P(\mathbf{ x}_{out,i}|\mathbf{x}_{\text{ref}},\text{\emph{edited}}), \tag{4}\]
where \(\mathbf{x}_{\text{out},i}\neq\mathbf{x}_{\text{ref}}\)
Once \(f_{\theta_{1}}\) and \(f_{\theta_{2}}\) are learned, we can calculate \(P(X=\mathbf{x}_{\text{out},i}|\mathbf{x}_{\text{ref}})\), where \(i=1,\ldots,M\), for all outcome sequences, including wild-type and edited sequences, using Eq. 1.
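A minimal sketch of how the two stages are combined via Eq. (1), using hypothetical model outputs and illustrative 8-nt sequences:

```python
def combine_two_stage(p_edited: float, proportions: dict, wild_type: str) -> dict:
    """Assemble P(x_out | x_ref) from the two stages, following Eq. (1).

    `proportions` holds P(x_out | x_ref, edited) over the non-wild-type
    outcomes and is assumed to sum to 1.
    """
    full = {seq: p_edited * p for seq, p in proportions.items()}
    full[wild_type] = 1.0 - p_edited
    return full

# Hypothetical stage-one and stage-two outputs for one reference sequence
p_edited = 0.48
proportions = {"GGCTAAGC": 0.7, "GACTGAGC": 0.2, "GGCTGAGC": 0.1}
print(combine_two_stage(p_edited, proportions, wild_type="GACTAAGC"))
```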
#### 3.3.1 Overall efficiency model
We formulate the overall efficiency model as a probabilistic classification task where \(f_{\theta_{1}}\) parameterizes a binomial distribution \(P(C|\mathbf{x}_{\text{ref}})\) of a random variable \(C\in\{\textit{edited},\textit{not edited}\}\), with the aim of learning to output \(P(C=edited|\mathbf{x}_{\text{ref}})\) for a given reference sequence. To learn \(f_{\theta_{1}}\), we first computed the overall editing efficiency for each reference sequence by summing all probabilities attributed to the non-wild-type outcomes as given in Eq. 2, or equivalently, \(1-P(\textit{wild-type}|\mathbf{x}_{ref})\). Then we use multiple 1D-convolutional layers (LeCun et al., 1995) on the one-hot-encoded representation of \(\mathbf{x}_{ref}\) to learn a discriminative feature embedding that is passed to a multi-layer perceptron (MLP) to approximate the distribution \(P(C|\mathbf{x}_{\text{ref}})\). The model architecture is presented in Figure 3 (A). We trained \(f_{\theta_{1}}\) using a KL-divergence loss applied to the true distribution \(P(C|\mathbf{x}_{\text{ref}})\) and the learned distribution \(\hat{P}_{\theta_{1}}(C|\mathbf{x}_{\text{ref}})\) for each reference sequence.
\[\mathcal{L}_{\textit{efficiency}}(\theta_{1},D)=\sum_{i=1}^{N}D_{kl}(P(C| \mathbf{x}_{\text{ref}}^{i})\|\hat{P}_{\theta_{1}}(C|\mathbf{x}_{\text{ref}}^ {i})) \tag{5}\]
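A minimal PyTorch sketch of such an efficiency model is given below; the sequence length, channel widths, and layer sizes are assumptions for illustration and not the exact architecture or hyperparameters used in our experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EfficiencyModel(nn.Module):
    """Conv1d feature extractor + MLP head approximating P(edited | x_ref)."""

    def __init__(self, seq_len: int = 24, channels: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.mlp = nn.Sequential(
            nn.Linear(channels * seq_len, 128), nn.ReLU(), nn.Linear(128, 2),
        )

    def forward(self, x_onehot):                      # (batch, seq_len, 4)
        h = self.conv(x_onehot.transpose(1, 2))       # (batch, channels, seq_len)
        return F.log_softmax(self.mlp(h.flatten(1)), dim=-1)   # log P(C | x_ref)

model = EfficiencyModel()
x = torch.rand(8, 24, 4)                   # dummy one-hot batch
p = torch.rand(8, 1)                       # true P(edited | x_ref) per sequence
target = torch.cat([p, 1.0 - p], dim=1)    # distribution over {edited, not edited}
loss = F.kl_div(model(x), target, reduction="batchmean")   # KL loss as in Eq. (5)
loss.backward()
```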
#### 3.3.2 Proportion model
This model is designed to approximate the conditional distribution \(P(X_{\text{out}}|\mathbf{x}_{\text{ref}},\textit{edited})\). To achieve this, we first remove the wild-type from each reference sequence's corresponding output \(\mathbf{X}_{\text{out}}\). Then, we normalize the probabilities of the remaining outcomes to ensure a valid distribution, effectively converting \(P(X_{\text{out}}|\mathbf{x}_{\text{ref}})\) into the distribution \(P(X_{\text{out}}|\mathbf{x}_{\text{ref}},\textit{edited})\). The proportion model \(f_{\theta_{2}}\) is designed to learn the parameters governing the distribution \(P(X_{\text{out}}|\mathbf{x}_{\text{ref}},\textit{edited})\). Similar to the one-stage model, \(f_{\theta_{2}}\) is provided with both the reference sequence \(\mathbf{x}_{\text{ref}}\) and its associated outcome sequence \(\mathbf{x}_{\text{out},i}\). The model is then trained to estimate the likelihood \(P(\mathbf{x}_{\text{out},i}\mid\mathbf{x}_{\text{ref}},\textit{edited})\), representing the probability that the reference sequence is edited and results in the outcome sequence \(\mathbf{x}_{\text{out},i}\).

Figure 3: Two-stage Model overview
As illustrated in Figure 3 (B), \(f_{\theta_{2}}\) uses an attention-based model comprised of two encoder networks, \(\text{Enc}^{1}(\mathbf{x}_{\text{ref}})\) and \(\text{Enc}^{2}(\mathbf{x}_{\text{out}})\), and one output network \(g\). The design of the encoder networks adapts the transformer encoder block architecture (Vaswani et al., 2017), characterized by multiple layers of multi-head self-attention modules. The two encoder networks process the reference sequence and one of its corresponding output sequences \(\mathbf{x}_{\text{out},i}\), leading to the extraction of their respective latent representations, namely \(\mathbf{Z}_{\text{ref}}\in\mathbb{R}^{T\times d}\) and \(\mathbf{Z}_{\text{out}}\in\mathbb{R}^{T\times d}\). Both representations are then concatenated to form a unified learned representation \(\mathbf{Z}\in\mathbb{R}^{T\times 2d}\). Subsequently, the output network \(g\) embeds this unified representation \(\mathbf{Z}\) to compute the probability of obtaining the output sequence given the reference sequence, \(P(\mathbf{x}_{\text{out},i}\mid\mathbf{x}_{\text{ref}},\textit{edited})\).
Precisely, the output network \(g(\mathbf{Z})\) takes as input the final representation \(\mathbf{Z}\in\mathbb{R}^{T\times 2d}\) and performs an affine transformation followed by softmax operation to compute the probability of conversion of every target base (i.e. base A or C depending on the chosen base editor) as it is shown below:
\[\hat{y}_{it}=\sigma(\mathbf{W}\mathbf{z}_{it}+\mathbf{b}_{t}) \tag{6}\]
where \(\mathbf{W}\in\mathbb{R}^{2\times 2d}\), \(\mathbf{b}_{t}\in\mathbb{R}^{2}\) and \(\sigma\) is the softmax function. \(\hat{y}_{it}\) represents the probability of editing occurring at the \(t\)-th position in the \(i\)-th outcome sequence. The un-normalized probability for the whole \(i\)-th output sequence \(\mathbf{x}_{\text{out},i}\) given its reference sequence is computed by \(\hat{y}_{i}=\prod_{t=1}^{T}\hat{y}_{it}\), which is then normalized across all the outcomes to make it a valid probability distribution (Eq. 7). Therefore, the approximated probability for obtaining the \(i\)-th edited (non-wild-type) outcome sequence is given by:
\[\hat{P}(\mathbf{x}_{\text{out},i}\mid\mathbf{x}_{\text{ref}},\textit{edited} )=\frac{\hat{y}_{i}}{\sum_{i=1}^{M}\hat{y}_{i}} \tag{7}\]
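The computation in Eqs. (6)-(7) can be sketched as follows; the encoders are reduced to generic transformer encoder stacks acting on placeholder embeddings, and selecting the second softmax output as the per-position editing probability is a simplification for illustration (all sizes are assumed values).

```python
import torch
import torch.nn as nn

T, d, M = 24, 32, 8   # sequence length, embedding dim, number of candidate outcomes (assumed)

enc_ref = nn.TransformerEncoder(nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2)
enc_out = nn.TransformerEncoder(nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2)
W = nn.Linear(2 * d, 2)          # affine map of Eq. (6); softmax applied below

x_ref = torch.rand(M, T, d)      # embedded reference sequence, repeated once per outcome
x_out = torch.rand(M, T, d)      # embedded candidate outcome sequences

Z = torch.cat([enc_ref(x_ref), enc_out(x_out)], dim=-1)   # (M, T, 2d)
per_pos = torch.softmax(W(Z), dim=-1)[..., 1]             # per-position probabilities, (M, T)
y_unnorm = per_pos.prod(dim=1)                             # product over positions
p_outcomes = y_unnorm / y_unnorm.sum()                     # Eq. (7): normalize over the M outcomes
print(p_outcomes.sum())                                    # tensor(1.)
```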
Objective FunctionWe used the Kullback-Leibler (KL) divergence on the model's estimated distribution over all outcome sequences for a given reference sequence \(\mathbf{x}_{\text{ref}}^{i}\) and the actual distribution:
\[D_{\text{KL}}^{i}(P(X_{\text{out}}|\mathbf{x}_{\text{ref}}^{i}, \textit{edited})||\hat{P}_{\theta_{2}}(X_{\text{out}}|\mathbf{x}_{\text{ref}}^ {i},\textit{edited})) \tag{8}\] \[=\sum_{j=1}^{M_{i}}P(\mathbf{x}_{\text{out},j}|\mathbf{x}_{\text{ ref}}^{i},\textit{edited})\log\frac{P(\mathbf{x}_{\text{out},j}|\mathbf{x}_{ \text{ref}}^{i},\textit{edited})}{\hat{P}_{\theta_{2}}(\mathbf{x}_{\text{out},j }|\mathbf{x}_{\text{ref}}^{i},\textit{edited})}\]
Lastly, the objective function for the whole training set is defined by the average loss across all the reference sequences as follows:
\[\mathcal{L}_{\text{proportion}}(\theta_{\mathbf{2}};D)=\sum_{i=1}^{N}D_{KL}^{i}\left(P(X_{\text{out}}\mid\mathbf{x}_{\text{ref}}^{i},\textit{edited})\,\|\,\hat{P}_{\theta_{2}}(X_{\text{out}}\mid\mathbf{x}_{\text{ref}}^{i},\textit{edited})\right) \tag{9}\]
The final objective is composed of both the overall efficiency model loss and the proportion model loss, with a weight regularization term (i.e. \(l_{2}\)-norm regularization) applied to the model parameters, represented by \(\theta=\{\theta_{\mathbf{1}},\theta_{\mathbf{2}}\}\) (Eq. 10).

\[\mathcal{L}_{\text{efficiency}}(\theta_{\mathbf{1}};D)+\mathcal{L}_{\text{proportion}}(\theta_{\mathbf{2}};D)+\frac{\lambda}{2}\|\theta\|_{2}^{2} \tag{10}\]
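A compact sketch of Eq. (8) and the combined objective of Eq. (10), using placeholder distributions, a placeholder efficiency loss value, and an assumed regularization strength:

```python
import torch
import torch.nn.functional as F

# Placeholder true / predicted outcome distributions for one reference sequence
p_true = torch.tensor([0.70, 0.20, 0.10])   # P(x_out | x_ref, edited)
p_hat = torch.tensor([0.60, 0.25, 0.15])    # model estimate of the same distribution

kl_i = F.kl_div(p_hat.log(), p_true, reduction="sum")   # Eq. (8) for this reference sequence

# Eq. (10): efficiency loss + proportion loss + L2 penalty on the parameters theta
loss_efficiency = torch.tensor(0.12)          # placeholder value of Eq. (5)
lam = 1e-4                                    # assumed regularization strength
theta = [torch.randn(50, requires_grad=True)] # stand-in for the model parameters
total = loss_efficiency + kl_i + 0.5 * lam * sum((p ** 2).sum() for p in theta)
```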
### Multi-task learning with multiple base editors
**Multi-task learning.** There is a diverse set of base editors, each distinguished by its unique design attributes. These distinctions, including variations in binding affinities, editing window sizes, and deaminase activities, result in differing editing efficiencies even when targeting the same sequence. The variations in editing efficiency across different editors emphasize the complexity of the base editing landscape. Conventional approaches have often proposed training separate models for each editor. However, this approach not only demands additional effort but also fails to leverage the shared structural similarities among editors. To leverage common patterns and relationships present across various libraries derived from different base editors, and optimize predictive capability while also reducing computational time, we propose a more efficient solution based on multi-task learning. Instead of training separate models for each editor, we train a single model capable of predicting the efficiency of all editors when applied to the same reference sequence.
Given a total number of \(D\) editors, where each editor has its own dataset \(B_{i}\) (referred to as a screening library), we developed a multi-task learning model that uses shared encoding layers to extract a common representation across all the libraries, as well as individual branches that fine-tune the model specifically for each library, ensuring a better fit to their respective data. This approach implicitly models \(P(X_{\text{out}}\mid\mathbf{x}_{\text{ref}},B_{i})\), where \(B_{i}\) represents the base editor type applied on the reference sequence. To implement one universal model across all datasets, we extend our proposed two-stage model architecture (illustrated in Figure 3) for multi-task learning, as depicted in Figure 4.

Figure 4: Multi-task learning model overview
Specifically, we modify the overall efficiency model by initially employing two convolutional layers as a shared building block across all datasets/editors, enabling the learning of a common representation for the reference sequence. Then a set of output blocks is used to represent editor-specific transformations. Each editor type has its own output block, consisting of two convolutional layers followed by MLP layers, to predict the probability \(P(edited|\mathbf{x}_{\text{ref}})\) for each editor/dataset accordingly.
We adapt the proportion model by using a common encoder network across all editors/datasets to establish a unified representation \(\mathbf{Z}_{\text{ref}}\) for the reference sequence, while using separate encoders and output blocks for each distinct editor. To counterbalance any bias towards larger datasets, we implemented a data loader that uniformly samples the same number of examples from each library in every mini-batch throughout the training phase.
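A minimal sketch of the shared-trunk/per-editor-head idea for the efficiency part is given below; the layer sizes are illustrative assumptions and only the forward pass is shown.

```python
import torch
import torch.nn as nn

EDITORS = ["SpRY-ABE8e", "SpCas9-ABE8e", "SpG-ABE8e",
           "SpRY-ABEmax", "SpCas9-ABEmax", "SpG-ABEmax"]

class MultiTaskEfficiency(nn.Module):
    """Shared Conv1d trunk + one output branch per editor (efficiency part)."""

    def __init__(self, seq_len: int = 24, channels: int = 64):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv1d(4, channels, 3, padding=1), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.heads = nn.ModuleDict({
            e: nn.Sequential(
                nn.Conv1d(channels, channels, 3, padding=1), nn.ReLU(),
                nn.Flatten(), nn.Linear(channels * seq_len, 64), nn.ReLU(),
                nn.Linear(64, 2),
            ) for e in EDITORS
        })

    def forward(self, x_onehot, editor: str):
        h = self.shared(x_onehot.transpose(1, 2))      # shared representation
        return torch.log_softmax(self.heads[editor](h), dim=-1)

model = MultiTaskEfficiency()
print(model(torch.rand(4, 24, 4), "SpG-ABEmax").shape)   # torch.Size([4, 2])
```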
## 4 Experiments
### Dataset and Experiment setup
**Dataset.** To comprehensively assess base editors' efficiency across thousands of genomic sequences, we conducted high-throughput screening, resulting in the creation of six distinct datasets. Each dataset corresponds to the application of one of the following base editors: SpRY-ABE8e, SpCas9-ABE8e, SpG-ABE8e, SpRY-ABEmax, SpCas9-ABEmax, and SpG-ABEmax, as listed in Table 1. Detailed descriptions of the used editors are provided in Appendix Section 6.2. Each dataset encompasses approximately 11,000 reference sequences and their corresponding output sequences. In each dataset, we leveraged 193 distinct PAM sites, each comprising four nucleotide bases.
**Experiment setup.** We divided every dataset into training, testing, and validation sets, maintaining a ratio of 80%, 10%, and 10%. This procedure is repeated three times to ensure robust performance reporting. All reported results are based on the average performance over the three runs (indicated by \(mean\pm std\)). First, we use the one-stage model to identify the best features to represent the reference sequence for predicting the base editing outcomes (i.e. to determine the reference sequence representation option, as explained in Section 3.2). Using the selected features (i.e., protospacer + PAM), we proceed to evaluate the performance of the one-stage and two-stage models. Finally, using the two-stage model, we compare the multi-task learning setup (i.e. a unified model trained for all editors) to the single-task learning setup, where separate models are trained for the different editors.
Throughout model training, we track the epoch at which the best validation scores are attained. Evaluation of the trained models for each base editor is based on their average performance on the test sets across the three runs. Pearson and Spearman correlation were used as performance measures for all tested models. More details about the network structure, optimization, and hyperparameters are presented in the Appendix Section 6.3.
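The evaluation protocol can be sketched as follows, assuming per-outcome true and predicted probabilities from three runs (random numbers are used here purely as placeholders):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(y_true, y_pred):
    """Pearson and Spearman correlation between true and predicted probabilities."""
    r_p, _ = pearsonr(y_true, y_pred)
    r_s, _ = spearmanr(y_true, y_pred)
    return r_p, r_s

rng = np.random.default_rng(0)
runs = [(rng.random(500), rng.random(500)) for _ in range(3)]   # placeholder (true, predicted) pairs
scores = np.array([evaluate(t, p) for t, p in runs])
print("Pearson  %.3f +/- %.3f" % (scores[:, 0].mean(), scores[:, 0].std()))
print("Spearman %.3f +/- %.3f" % (scores[:, 1].mean(), scores[:, 1].std()))
```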
### Experiment results
**Reference sequence representation.** Existing models have explored different factors that could affect the base editor's efficiency, which we categorize into three scenarios: 1) the protospacer, 2) the protospacer along with its PAM, and 3) an extended range including left overhangs, protospacer, PAM, and right overhangs. We investigate all three scenarios with the one-stage model to identify the best features to represent the reference sequence. As shown in Table 2, we observe that incorporating PAM information significantly enhances performance, whereas the inclusion of overhangs demonstrates minimal impact. Besides, adding overhangs increases the computational complexity drastically. Consequently, we opt to employ protospacer and PAM information to represent reference sequences in all the subsequent model results presented below.
**Comparing One-stage with Two-stage Model.** As detailed in Section 3.2, our model can be conceptualized as either a one-stage model, directly capturing the distribution across all potential outcomes for a given reference, or as a two-stage model. The latter approach involves initially predicting the probability of an edit occurring in the reference sequence, followed by predicting the probabilities of individual edited outcomes.
\begin{table}
\begin{tabular}{l c c c c c} \hline Editor & \#ins & \#refseq & \#outcome & mean & std \\ \hline SpRY-ABE8e & 110141 & 11291 & 9.7 & 0.102 & 0.211 \\ SpCas9-ABE8e & 43054 & 11337 & 4.6 & 0.217 & 0.323 \\ SpG-ABE8e & 80873 & 11307 & 7.1 & 0.139 & 0.263 \\ SpRY-ABEmax & 70851 & 11347 & 6.2 & 0.159 & 0.301 \\ SpCas9-ABEmax & 39606 & 11302 & 3.5 & 0.285 & 0.417 \\ SpG-ABEmax & 70851 & 11347 & 6.2 & 0.159 & 0.301 \\ \hline \end{tabular}
\end{table}
Table 1: Data statistics: “#ins” refers to the number of reference and output sequence pairs, “#refseq” denotes the number of distinct reference sequences, “#outcome” denotes the average number of outcomes per reference sequence, the mean and std refers to the mean and standard deviation of the probability across all the outcomes.
In this section, we present results for both models to illustrate the advantages of the two-stage approach over the one-stage counterpart. For the one-stage model, we use exactly the same architecture as the proportion model of the two-stage model, but train it on the original data, i.e. without the preprocessing step in which we remove the wild-type and renormalize the probabilities for each reference sequence.
Table 3 shows that the two-stage model achieves slightly better results (Spearman correlation) than the one-stage model. This improvement can be attributed to the model's two-step prediction approach, which first predicts the wild-type alone and subsequently refines predictions for the various edited outcomes. To better understand the difference between the two models' ability to predict the wild-type and edited outcome sequences, we rigorously evaluated each model's performance separately on both types of outcome. The two-stage model outperforms the one-stage model on most of the datasets when considering both wild-type and edited outcomes, as presented in Tables 4 and 5.
**Multi-task learning.** Given this conclusion, we proceed with multi-task learning using the two-stage model (see Figure 4). We compared the performance of multi-task learning across all the datasets/editors with a single-task setup where we trained one model per dataset/editor. Table 6 reports similar performance for both models. Although there wasn't a substantial performance difference, adopting a unified multi-task model offers advantages such as reduced run-time (for training and inference) and smaller model size (fewer parameters) while maintaining consistent performance across all datasets. Moreover, with a unified model, we can simultaneously predict the editing outcomes of all six editors at once for a given target sequence.
**Comparing to baselines in the literature.** We also compared our model with BE-DICT (Marquart et al., 2021), which is one of the relevant existing models that tackle base editing outcome prediction. BE-DICT is a sequence-to-sequence model in which decoding happens in an auto-regressive manner, making it computationally heavy compared to our proposed method. Moreover, it is trained as a single-task model (i.e. one model for each editor) and uses only the protospacer to represent the target sequence. We extended and retrained BE-DICT on two of the datasets (randomly chosen) and compared the prediction results with ours. For a fair comparison, we first used our one-stage model trained in the single-task setting (one model per dataset) using only the protospacer. The results of this experiment reveal the advantages of the architectural changes, particularly in adopting an encoder-encoder architecture over the traditional sequence-to-sequence (encoder-decoder) model.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{2}{c}{Single task learning} & \multicolumn{2}{c}{Multi task learning} \\ Libraries & Spearman & Pearson & Spearman & Pearson \\ \hline SpKP-ABEmax & \(0.872\pm 0.001\) & \(0.986\pm 0.001\) & \(0.872\pm 0.002\) & \(0.986\pm 0.0002\) \\ SpCap-ABEmax & \(0.879\pm 0.004\) & \(0.989\pm 0.001\) & \(0.864\pm 0.0019\) & \(0.992\pm 0.0001\) \\ SpCap-ABEmax & \(0.882\pm 0.001\) & \(0.991\pm 0.0006\) & \(0.889\pm 0.0016\) & \(0.992\pm 0.0004\) \\ SpKP-ABEbe & \(0.861\pm 0.0029\) & \(0.974\pm 0.001\) & \(0.863\pm 0.0011\) & \(0.975\pm 0.001\) \\ SpCap-ABEbe & \(0.856\pm 0.008\) & \(0.938\pm 0.0005\) & \(0.852\pm 0.002\) & \(0.937\pm 0.003\) \\ SpG-ABEbe & \(0.856\pm 0.004\) & \(0.980\pm 0.0008\) & \(0.871\pm 0.003\) & \(0.979\pm 0.001\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance comparison between the multi-task and single task learning models
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multicolumn{2}{c}{} & \multicolumn{2}{c}{One-stage Model} & \multicolumn{2}{c}{Two-stage Model} \\ Libraries & Spearman & Pearson & Spearman & Pearson \\ \hline SpKP-ABEmax & \(0.745\pm 0.015\) & \(0.711\pm 0.011\) & \(0.798\pm 0.007\) & \(0.872\pm 0.012\) \\ SpCap-ABEmax & \(0.82\pm 0.003\) & \(0.851\pm 0.014\) & \(0.838\pm 0.009\) & \(0.890\pm 0.030\) \\ SpG-ABEmax & \(0.807\pm 0.003\) & \(0.752\pm 0.014\) & \(0.845\pm 0.011\) & \(0.822\pm 0.014\) \\ SpKP-ABEbe & \(0.393\pm 0.021\) & \(0.508\pm 0.025\) & \(0.547\pm 0.056\) & \(0.699\pm 0.051\) \\ SpCap-ABEbe & \(0.855\pm 0.007\) & \(0.840\pm 0.003\) & \(0.806\pm 0.002\) & \(0.858\pm 0.0021\) \\ SpCap-ABEbe & \(0.712\pm 0.002\) & \(0.732\pm 0.004\) & \(0.774\pm 0.005\) & \(0.810\pm 0.009\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Prediction performance all outcomes (i.e. including wild-type sequences).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{2}{c}{} & \multicolumn{2}{c}{Protospacer \& PAM} & \multicolumn{2}{c}{Protospacer \& PAM \& Overhang} \\ Libraries & Spearman & Pearson & Spearman & Pearson & Pearson \\ \hline SpKP-ABEmax & \(0.835\pm 0.007\) & \(0.981\pm 0.001\) & \(0.854\pm 0.006\) & \(0.983\pm 0.001\) & \(0.854\pm 0.003\) & \(0.983\pm 0.002\) \\ SpCap89-ABEmax & \(0.786\pm 0.003\) & \(0.978\pm 0.002\) & \(0.881\pm 0.001\) & \(0.989\pm 0.0005\) & \(0.891\pm 0.002\) & \(0.989\pm 0.001\) \\ SpG-ABEmax & \(0.841\pm 0.002\) & \(0.985\pm 0.0007\) & \(0.866\pm 0.004\) & \(0.989\pm 0.0003\) & \(0.878\pm 0.008\) & \(0.991\pm 0.0009\) \\ SpRY-ABEbe & \(0.776\pm 0.019\) & \(0.965\pm 0.001\) & \(0.779\pm 0.0036\) & \(0.968\pm 0.002\) & \(0.803\pm 0.008\) & \(0.967\pm 0.0003\) \\ SpCas9-ABEbe & \(0.762\pm 0.007\) & \(0.883\pm 0.005\) & \(0.857\pm 0.007\) & \(0.945\pm 0.0006\) & \(0.862\pm 0.003\) & \(0.945\pm 0.003\) \\ SpG-ABEbe & \(0.803\pm 0.005\) & \(0.963\pm 0.002\) & \(0.820\pm 0.005\) & \(0.974\pm 0.0009\) & \(0.819\pm 0.006\) & \(0.9771\pm 0.0008\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Pearson and Spearman correlation using one-stage Model across the three different reference sequence representations. In our experiment, we chose 5 neighboring nucleotides for both sides to represent the overhangs.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{2}{c}{} & \multicolumn{2}{c}{One-stage Model} & \multicolumn{2}{c}{Two-stage Model} \\ Libraries & Spearman & Pearson & Spearman & Pearson \\ \hline SpKP-ABEmax & \(0.851\pm 0.006\) & \(0.983\pm 0.001\) & \(0.873\pm 0.001\) & \(0.986\pm 0.001\) \\ SpCap-ABEmax & \(0.881\pm 0.006\) & \(0.989\pm 0.0005\) & \(0.879\pm 0.004\) & \(0.991\pm 0.001\) \\ SpG-ABEmax & \(0.865\pm 0.004\) & \(0.989\pm 0.0003\) & \(0.887\pm 0.003\) & \(0.991\pm 0.0006\) \\ SpK-ABEbe & \(0.779\pm 0.003\) & \(0.968\pm 0.002\) & \(0.862\pm 0.003\) & \(0.974\pm 0.001\) \\ SpCap-ABEbe & \(0.857\pm 0.007\) & \(0.945\pm 0.0006\) & \(0.856\pm 0.003\) & \(0.597\pm 0.002\) \\ SpG-ABEbe & \(0.820\pm 0.005\) & \(0.974\pm 0.0009\) & \(0.865\pm 0.004\) & \(0.978\pm 0.0008\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Prediction performance all outcomes (i.e. including wild-type sequences).
Then we extended their model to use both the protospacer and PAM as the reference sequence and compared it with our proposed multi-task model (trained on all six datasets using protospacer and PAM information as the target sequence).
Results in Table 7 show that our model consistently outperforms BE-DICT. Furthermore, considering computational efficiency during model training (on SpRY-ABE8e data using the protospacer and PAM reference sequence representation), BE-DICT takes on the order of a minute per epoch (wall time), while our single-task model accomplishes the same task in the order of seconds (about 15 seconds). Notably, the multi-task learning model trained jointly on all six datasets takes 21 seconds per epoch.
This highlights the benefits of avoiding the complex sequence-to-sequence architecture in favor of a streamlined encoder-encoder structure. This choice not only improves computational efficiency but also leads to improvements in predictive performance. Moreover, the introduction of a two-stage model and a multi-task framework amplifies these performance gains even further. We present additional results for comparisons with other baselines in Table 8 in the appendix.

To assess our model's performance against other state-of-the-art models, we conducted evaluations using the test sets provided by these models. Table 8 displays our findings, which include the three most recent models: BE-HIVE (Arbab et al., 2020), DeepABE (Song et al., 2020), and BE-DICT (Marquart et al., 2021), along with their respective test sets labeled as A. et al., S. et al., and M. et al.

The idea is to take the published trained models and evaluate their performance on these various test sets. For the three baseline models, we refer to the results reported in the BE-DICT paper. As for our model, to ensure fairness in comparison, we used our one-stage model trained on the SpG-ABEmax library, since most baselines, except DeepABE, do not incorporate the PAM as input. The results correspond to two scenarios: 1) considering all possible outcomes, and 2) considering only non-wild-type outcomes. The results for the non-wild-type outcomes correspond to the model predictions when only non-wild-type outcomes are considered. In the case of non-wild-type outcome prediction, we note that the other models were trained exclusively on non-wild-type outcomes, with outcomes per sequence being renormalized. Our one-stage model, however, was trained on data encompassing all outcomes, so we report non-wild-type results with outcomes renormalized for a fair comparison.
## 5 Conclusion
Our work provides a detailed assessment of the modeling approaches for base editor outcome prediction. Through the development of a unified model, we transcend the limitations of single-editor models and pave the way for more versatile and comprehensive tools. By combining self-attention mechanisms and multi-task learning, we capture the nuances of editing outcomes across various editors, enhancing the accuracy and applicability of our predictions.
As the first machine learning-focused paper in the domain of base editor outcome prediction, our work represents a stepping stone toward a more systematic and informed modeling approach to genome editing. We explored the different modeling decisions from one-stage to two-stage models, and from single-task to multi-task learning. We evaluated the different sequence representations and benchmarked our best model with one of the main models developed for base editing outcome prediction. We believe that further work studying systematically the different modeling decisions for genome editing will help guide researchers toward more promising editing strategies that in turn will bring advancements in gene therapy and disease modeling.
For the future, given the current absence of standardized and systematic benchmark datasets in the field, we aim to bridge this gap by creating standard benchmark datasets, establishing baseline models, and proposing better performance metrics. This initiative will provide the machine-learning community with a solid foundation for testing a wide array of innovative ideas and approaches.
## Acknowledgements
We thank G. Schwank, K. Marquart, L. Kissling and S. Janjuha for input on the CRISPR-Cas and base editing technology and for data sharing and preprocessing. This work was supported by the URPP 'Human Reproduction Reloaded' and 'University Research Priority Programs'.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline \multicolumn{2}{c}{} & \multicolumn{2}{c}{BE-DICT} & \multicolumn{2}{c}{Ours} \\ reference sequence & Libraries & Spearman & Pearson & Spearman & Pearson \\ \hline protospacer & SpRY-ABE8e & 0.801 & 0.943 & 0.835 & 0.981 \\ & SpRY-ABE8e & 0.746 & 0.861 & 0.776 & 0.965 \\ & SpRY-ABE8e & 0.804 & 0.951 & 0.870 & 0.987 \\ & SpRY-ABE8e & 0.762 & 0.850 & 0.860 & 0.975 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Performance comparison with the baselines
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{All Outcomes} & \multicolumn{3}{c}{Non wild-types} \\ Datasets & A.et al & S. et al & M. et al & A.et al & S. et al & M. et al \\ \hline BEDICT & 0.96 & 0.94 & 0.86 & 0.81 & 0.90 & 0.82 \\ DeepABE & 0.86 & 0.93 & 0.8 & 0.86 & 0.96 & 0.84 \\ BE-HIVE & 0.71 & 0.88 & 0.74 & 0.92 & 0.93 & 0.81 \\ Our model & 0.972 & 0.974 & 0.972 & 0.939 & 0.945 & 0.953 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Model performance on the test set from the different published studies. Columns represent test sets, rows represent models used | 人間遺伝病は、ポイント変異が主な原因で発生するケースが多く、精密なゲノム編集技術の必要性を強調しています。これらのうち、ベースエドITINGは、単一ヌクレオチドレベルで標的的に変化を起こせることが特徴であり、臨床応用が阻害されています。これは低い編集効率と意図しない変異によるものです。そのため、実験室での試験とエラーの膨大な試行が必要になります。このプロセスを加速させるために、私たちは、ある特定のゲノムターゲット配列に対する編集結果の確率を予測できる、注意ベースの2段階の機械学習モデルを提案しました。さらに、複数のベースエディター(つまり変異)を同時に学習できるように、多タスク学習のスケEMAを提案しました。このモデルの予測は、複数のデータセットとベースエディター変異に関する実証実験結果との高い相関関係を示しました。これらの結果は、モデルがベースエッティング |
2307.01607 | Rational Variant of Hartogs' Theorem on Separate Holomorphicity | Given an uncountable algebraically closed field $K$, we proved that if
partially defined function $f\colon K \times \dots \times K \dashrightarrow K$
defined on a Zariski open subset of the $n$-fold Cartesian product $K \times
\dots \times K$ is rational in each coordinate whilst other coordinates are
held constant, then $f$ is itself a rational function in $n$-variables. | Hanwen Liu | 2023-07-04T09:44:58 | http://arxiv.org/abs/2307.01607v2 | # Rational Variant of Hartogs' Theorem
###### Abstract
Given an uncountable algebraically closed field \(K\), we prove that if a partially defined function \(f\colon K\times\cdots\times K\dashrightarrow K\), defined on a Zariski open subset of the \(n\)-fold Cartesian product \(K\times\cdots\times K\), is rational in each coordinate while the other coordinates are held constant, then \(f\) is itself a rational function in \(n\) variables.
## 1 Introduction and Backgrounds
We say _partial function_ to mean partially defined function. For any partial function \(f\colon A\dashrightarrow B\), we shall denote by \(\operatorname{dom}f\) its _domain of definition_. For simplicity we introduce the following notation.
**Definition 1.1**.: _Let \(A_{1},\ldots,A_{n},B\) be arbitrary sets and \(f\colon A_{1}\times\cdots\times A_{n}\dashrightarrow B\) a partial function. For each fixed \((a_{1},\ldots,a_{n})\in A_{1}\times\cdots\times A_{n}\) and \(i\in\{1,\ldots,n\}\), define \(f^{a_{i+1},\ldots,a_{n}}_{a_{1},\ldots,a_{i-1}}\colon A_{i}\dashrightarrow B\) to be the partial function satisfying the following conditions:_

1. \(\operatorname{dom}f^{a_{i+1},\ldots,a_{n}}_{a_{1},\ldots,a_{i-1}}=\{a\in A_{i}\mid(a_{1},\ldots,a_{i-1},a,a_{i+1},\ldots,a_{n})\in\operatorname{dom}f\}\)_,_

2. \(f^{a_{i+1},\ldots,a_{n}}_{a_{1},\ldots,a_{i-1}}(a)=f\left(a_{1},\ldots,a_{i-1},a,a_{i+1},\ldots,a_{n}\right)\) _for all_ \((a_{1},\ldots,a_{i-1},a,a_{i+1},\ldots,a_{n})\in\operatorname{dom}f\)_._
**Example 1.2**.: _Consider the complex polynomial function \(f\colon(z,w)\mapsto z^{3}+zw^{3}\) defined on \(\mathbb{C}\times\mathbb{C}\). For any \(a\in\mathbb{C}\), according to Definition 1.1 we have that \(f^{a}\) is the entire function \(z\mapsto z^{3}+a^{3}z\), and \(f_{a}\) is the entire function \(w\mapsto aw^{3}+a^{3}\)._
Throughout this article, varieties and regular mappings are in the sense of FAC. We shall recall the definition of a _rational map_.
**Definition 1.3**.: _Let \(X\) and \(Y\) be varieties over an algebraically closed field \(k\), a partial function \(f\colon X\dashrightarrow Y\) is termed a rational map, if \(\operatorname{dom}f\) is a Zariski dense open subset of \(X\) and \(f|_{\operatorname{dom}f}\) is a morphism of varieties over \(k\)._
**Definition 1.4**.: _Let \(X\) be a variety over an algebraically closed field \(k\), a rational map \(f\colon X\dashrightarrow k\) is also said to be a rational function on \(X\)._
The main purpose of this note is to prove the following statement.
**Theorem 1.5**.: _Let \(k\) be an algebraically closed field of at least continuum cardinality and \(f\colon k^{n}\dashrightarrow k\) be a partial function defined on a nonempty Zariski open subset of \(k^{n}\), if \(f^{a_{i+1},\ldots,a_{n}}_{a_{1},\ldots,a_{i-1}}\colon k\dashrightarrow k\) is a rational function for each \((a_{1},\ldots,a_{n})\in\operatorname{dom}f\) and \(i\in\{1,\ldots,n\}\), then \(f\) is a rational function._
## 2 Failure of the theorem in the case of countable cardinality
In this section, by constructing an explicit counter-example, we show that the assertion in Theorem 1.5 does not hold for countable algebraically closed fields.
**Proposition 2.1**.: _Let \(k\) be a countable algebraically closed field, then there exists a function \(f\colon k\times k\to k\) such that \(f_{a}\) and \(f^{a}\) are polynomial functions for all \(a\in k\), but \(f\) is not a rational function._
Proof.: Since \(k\) is countably infinite, there exists a sequence \(\left(a_{n}\right)_{n=0}^{\infty}\) of elements in \(k\) such that the correspondence \(n\mapsto a_{n}\) is a bijection between \(\mathbb{N}\) and \(k\). Define the function \(f\colon k\times k\to k\) by \(f\left(a_{n},a_{m}\right):=\sum_{i=0}^{n+m}\prod_{l=0}^{i}\left(a_{n}-a_{l}\right)\left(a_{m}-a_{l}\right)\).
Fix any \(m\in\mathbb{N}\). Since every term with \(i\geqslant m\) contains the vanishing factor \(a_{m}-a_{m}\), by construction \(f^{a_{m}}(a)=\sum_{i=0}^{m-1}\prod_{l=0}^{i}\left(a-a_{l}\right)\left(a_{m}-a_{l}\right)\) for all \(a\in k\), and in particular, for \(m\geqslant 1\), \(f^{a_{m}}\) is a polynomial function of degree \(m\). By symmetry of variables, \(f_{a}\) is also a polynomial function for each \(a\in k\).
Suppose that \(f\) were a rational function; then, by Hilbert's Nullstellensatz (cf. Theorem 4.8 in [1]), \(f\) would be a polynomial function since \(\operatorname{dom}f=k^{2}\). But then \(\deg f\geqslant\deg f^{a_{n}}=n\to\infty\) as \(n\to\infty\), a contradiction.
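For illustration only, the following short SymPy sketch evaluates the slices \(f^{a_{m}}\) of this construction with the merely illustrative choice \(a_{n}=n\) over \(\mathbb{Q}\) (the proposition itself of course requires a countable algebraically closed field such as \(\overline{\mathbb{Q}}\)); the printed degrees grow linearly with \(m\), which is exactly the unbounded-degree phenomenon used above.

```python
import sympy as sp

z = sp.symbols("z")
a = list(range(10))   # a_n = n: distinct elements chosen only for illustration

def slice_f(m):
    """f^{a_m}(z) = sum_{i=0}^{m-1} prod_{l=0}^{i} (z - a_l) * (a_m - a_l)."""
    total = sp.Integer(0)
    for i in range(m):
        term = sp.Integer(1)
        for l in range(i + 1):
            term *= (z - a[l]) * (a[m] - a[l])
        total += term
    return sp.expand(total)

for m in range(1, 7):
    print(m, sp.Poly(slice_f(m), z).degree())   # the degree grows linearly with m
```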
## 3 Proof of the theorem in the case of continuum cardinality
In this section we prove that the assertion in Theorem 1.5 holds for algebraically closed fields of continuum cardinality. The absolute value of any complete non-Archimedean field is assumed tacitly to be non-trivial.
**Lemma 3.1**.: _Let \(X\) be a normal variety over an algebraically closed field \(k\) and let \(f\colon X\dasharrow k\) be a rational function, if for every point \(p\in X\setminus\operatorname{dom}f\) there exists an algebraic curve \(C\) on \(X\) passing through \(p\), such that \(C\bigcap\operatorname{dom}f\neq\varnothing\) and \(p\) is a removable singularity of the rational function \(\left.f\right|_{C}:C\dasharrow k\), then there exists a regular function \(h\colon X\to k\) such that \(\left.h\right|_{\operatorname{dom}f}\equiv f|_{\operatorname{dom}f}\)._
Proof.: Identify \(k\) with the standard affine chart \(\left\{\left[a:1\right]\in k\mathbb{P}^{1}\mid a\in k\right\}\) of \(\mathbb{P}^{1}:=k\mathbb{P}^{1}\). Since \(X\) is normal and \(\mathbb{P}^{1}\) is proper, there exist \(2\)-codimensional subvarieties \(Z_{1},\dots,Z_{n}\) of \(X\) and a rational map \(\varphi\colon X\dasharrow\mathbb{P}^{1}\) defined on \(X_{0}:=X\setminus\bigcup_{i=1}^{n}Z_{i}\) such that \(\operatorname{dom}f\subseteq X_{0}\) and \(\varphi(p)=\left[f(p):1\right]\) for all \(p\in\operatorname{dom}f\).
By assumption, for any \(p\in X_{0}\), there exist a Zariski open neighbourhood \(U\) of \(p\) in \(X_{0}\) and an algebraic curve \(C\) in \(U\) passing through \(p\), such that \(\left[1:0\right]\notin\varphi(C)\). Therefore \(\varphi\left(X_{0}\right)\subseteq k\) and hence there exists a rational function \(g\colon X\dasharrow k\), defined on \(X_{0}\), such that \(\varphi(p)=\left[g(p):1\right]\) for all \(p\in X_{0}\).
Since \(X\) is normal and \(X\setminus\operatorname{dom}g=\bigcup_{i=1}^{n}Z_{i}\) is of codimension at least \(2\), by Hartogs' extension theorem we obtain that \(Z_{1},\dots,Z_{n}\) are removable singularities of \(g\). Notice that by construction \(g|_{\operatorname{dom}f}\equiv f|_{\operatorname{dom}f}\), the statement is proved.
**Corollary 3.2**.: _Let \(X,Y\) be normal varieties over an algebraically closed field \(k\) and \(f\colon X\times Y\dasharrow k\) be a partial function defined on a nonempty Zariski open subset of \(X\times Y\), if \(f_{a},f^{b}\) are rational functions for all \((a,b)\in\operatorname{dom}f\) and there exists a rational function \(g\colon X\times Y\dasharrow k\) such that \(f(a,b)=g(a,b)\) for all \((a,b)\in\operatorname{dom}f\bigcap\operatorname{dom}g\), then \(f\) is a rational function._
Proof.: The desired result follows immediately from Lemma 3.1.
**Proposition 3.3**.: _Let \(X_{1},\dots,X_{n}\) be normal varieties over an algebraically closed field \(k\) and \(f\colon X_{1}\times\dots\times X_{n}\dasharrow k\) be a partial function defined on a nonempty Zariski open subset of \(X_{1}\times\dots\times X_{n}\), if \(f\) satisfies the following conditions:_
1. _for each_ \((a_{1},\ldots,a_{n})\in\operatorname{dom}f\) _and_ \(i\in\{1,\ldots,n\},\quad f^{a_{i+1},\ldots,a_{n}}_{a_{1},\ldots,a_{i-1}}\)_:_ \(X_{i}\dashrightarrow k\) _is a rational function,_
2. _there exists Zariski dense subset_ \(\Lambda_{i}\) _of_ \(X_{i}\) _for each_ \(i\in\{1,\ldots,n\}\) _and a rational function_ \(g\colon X_{1}\times\cdots\times X_{n}\dashrightarrow k\)_, such that_ \(f\left(a_{1},\ldots,a_{n}\right)=g\left(a_{1},\ldots,a_{n}\right)\) _for all_ \((a_{1},\ldots,a_{n})\in(\Lambda_{1}\times\cdots\times\Lambda_{n})\bigcap\)__\(\operatorname{dom}f\bigcap\operatorname{dom}g\)_,_
_then \(f\) is a rational function._
Proof.: By induction it suffices to prove the statement for \(n=2\).
By virtue of Corollary 3.2 we can assume w.l.o.g. that \(\operatorname{dom}f=\operatorname{dom}g\) and \(X_{1}=\{a\in X_{1}\mid\exists\,b\in X_{2}\colon(a,b)\in\operatorname{dom}f\}\).
For any \(b\in\Lambda_{2}\), since \(\Lambda_{1}\) is a Zariski dense subset of \(X_{1}\) and \(f^{b}(a)=g^{b}(a)\) for all \(a\in\Lambda_{1}\bigcap\operatorname{dom}f^{b}\), we obtain that \(f^{b}\equiv g^{b}\).
For any \(a\in X_{1}\), since \(\Lambda_{2}\) is a Zariski dense subset of \(X_{2}\) and \(f_{a}(b)=f^{b}(a)=g^{b}(a)=g_{a}(b)\) for all \(b\in\Lambda_{2}\bigcap\operatorname{dom}f_{a}\), we obtain that \(f_{a}\equiv g_{a}\). Therefore we have \(f\equiv g\).
**Lemma 3.4**.: _Let \(k[t]\) be the polynomial ring over an arbitrary field \(k\) and let \(\mathbb{F}\) be a subfield of \(k\), then \(k(t)\bigcap\mathbb{F}((t))=\mathbb{F}(t)\)._
Proof.: This is a direct consequence of Lemma 27.9 in [2].
**Proposition 3.5**.: _Let \(n\in\mathbb{N}\) and \(k\left[x_{0},\ldots,x_{n+1}\right]\) be the ring of polynomials in \(n+2\) variables over an arbitrary field \(k\), then \(k\left(x_{0},\ldots,x_{n+1}\right)=\bigcap_{i=0}^{n+1}k\left(\left(x_{0}, \ldots,x_{i-1},x_{i+1},\ldots,x_{n+1}\right)\right)(x_{i})\)._
Proof.: Define \(K:=k\left(x_{1},\ldots,x_{n}\right)\) and define \(L_{i}:=k\left(\left(x_{0},\ldots,x_{i-1},x_{i+1},\ldots,x_{n+1}\right)\right) (x_{i})\) for each \(i=0,\ldots,n+1\). For simplicity we denote \(x:=x_{0}\) and \(y:=x_{n+1}\).
Since \(L_{i}\subseteq k((y))\left(\left(x_{0},\ldots,x_{i-1},x_{i+1},\ldots,x_{n}\right)\right)(x_{i})\) for all \(i\in\{0,\ldots,n\}\), by induction we obtain \(\bigcap_{i=0}^{n}L_{i}\subseteq k((y))\left(x_{0},\ldots,x_{n}\right)\subseteq k\left(x_{1},\ldots,x_{n}\right)\left((y)\right)(x)=K((y))(x)\). Analogously we have \(\bigcap_{i=1}^{n+1}L_{i}\subseteq K((x))(y)\subseteq K(y)((x))\). By Lemma 3.4 we conclude that \(\bigcap_{i=0}^{n+1}L_{i}\subseteq K((y))(x)\bigcap K(y)((x))=(K(y))(x)=k\left(x_{0},\ldots,x_{n+1}\right)\).
Conversely, since \(k\left(x_{0},\ldots,x_{n+1}\right)\subseteq L_{i}\) for all \(i\in\{0,\ldots,n+1\}\), we obtain \(k\left(x_{0},\ldots,x_{n+1}\right)\subseteq\bigcap_{i=0}^{n+1}L_{i}\).
**Lemma 3.6**.: _Let \(k[t]\) be the polynomial ring over an arbitrary field \(k\) and let \(\sum_{i=0}^{\infty}a_{i}t^{i}\in k[[t]]\) be a formal power series with coefficients in \(k\), then \(\sum_{i=0}^{\infty}a_{i}t^{i}\in k(t)\) if and only if there exist \(l,m\in\mathbb{N}\) such that the matrices_
\[\left[\begin{array}{cccc}a_{n}&a_{n+1}&\ldots&a_{n+m}\\ a_{n+1}&a_{n+2}&\ldots&a_{n+m+1}\\ \vdots&\vdots&\ddots&\vdots\\ a_{n+m}&a_{n+m+1}&\ldots&a_{n+2m}\end{array}\right]\]
_are degenerate for all integers \(n\geqslant l\)._
Proof.: This is a direct consequence of Lemma 5 of Chapter V.5 in [3].
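As an illustration of this criterion, one can test it on explicit series; the following minimal Python sketch (exact rational arithmetic, with example series chosen here purely for illustration) checks that all \(2\times 2\) Hankel determinants of the rational series \(\sum_{i}2^{i}t^{i}=1/(1-2t)\) vanish, whereas those of \(\sum_{i}t^{i}/i!=\exp t\) do not.

```python
from fractions import Fraction
from math import factorial

def hankel_det(coeffs, n, m):
    """(m+1)x(m+1) Hankel determinant built from coeffs[n], ..., coeffs[n+2m],
    evaluated by exact fraction-valued Gaussian elimination."""
    a = [[Fraction(coeffs[n + i + j]) for j in range(m + 1)] for i in range(m + 1)]
    det = Fraction(1)
    for col in range(m + 1):
        pivot = next((r for r in range(col, m + 1) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det
        det *= a[col][col]
        for r in range(col + 1, m + 1):
            factor = a[r][col] / a[col][col]
            for c in range(col, m + 1):
                a[r][c] -= factor * a[col][c]
    return det

rational = [Fraction(2) ** i for i in range(30)]              # coefficients of 1/(1 - 2t)
exponential = [Fraction(1, factorial(i)) for i in range(30)]  # coefficients of exp(t)

print([hankel_det(rational, n, 1) for n in range(8)])     # all zero
print([hankel_det(exponential, n, 1) for n in range(4)])  # all nonzero
```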
**Proposition 3.7**.: _Let \(A:=k\left\langle x_{1},\ldots,x_{l}\right\rangle\) be the \(l\)-dimensional Tate algebra over a complete non-Archimedean field \(k\) and \(\sum_{i=0}^{\infty}h_{i}t^{i}\in A[[t]]\) be a formal power series with coefficients in \(A\), if \(\sum_{i=0}^{\infty}h_{i}\left(a_{1},\ldots,a_{l}\right)t^{i}\in k(t)\) for all \(a_{1},\ldots,a_{l}\in\{a\in k:|a|\leqslant 1\}\) then \(\sum_{i=0}^{\infty}h_{i}t^{i}\in\operatorname{Frac}(A[t])\)._
Proof.: Denote by \(\mathfrak{o}_{k}:=\{a\in k:|a|\leqslant 1\}\) the valuation ring of \(k\). For any \(n,m\in\mathbb{N}\) define the Hankel determinant by
\[H_{n}^{m}:=\left|\begin{array}{cccc}h_{n}&h_{n+1}&\ldots&h_{n+m}\\ h_{n+1}&h_{n+2}&\ldots&h_{n+m+1}\\ \vdots&\vdots&\ddots&\vdots\\ h_{n+m}&h_{n+m+1}&\ldots&h_{n+2m}\end{array}\right|\]
and define \(\Lambda_{n}^{m}:=\left\{\left(a_{1},\ldots,a_{l}\right)\in\mathfrak{o}_{k}^{l}\mid H_{n+i}^{m}\left(a_{1},\ldots,a_{l}\right)=0\ \forall i\in\mathbb{N}\right\}\). Since \(H_{n}^{m}\in A\) for all \(n,m\in\mathbb{N}\), we obtain that \(\Lambda_{n}^{m}\) are closed subsets of \(\mathfrak{o}_{k}^{l}\) for all \((n,m)\in\mathbb{N}^{2}\). Since \(\sum_{i=0}^{\infty}h_{i}\left(a_{1},\ldots,a_{l}\right)t^{i}\in k(t)\) for all
\(\left(a_{1},\ldots,a_{l}\right)\in\mathfrak{o}_{k}^{l}\), by Lemma 3.6 we have \(\mathfrak{o}_{k}^{l}=\bigcup_{n=0}^{\infty}\bigcup_{m=0}^{\infty}\Lambda_{n}^{m}\). Since \(\mathfrak{o}_{k}^{l}\) is a complete metric space, by Baire's category theorem there exist \(s,r\in\mathbb{N}\) such that \(\Lambda_{s}^{r}\) admits a nonempty interior. Therefore, for all \(i\in\mathbb{N}\), by construction \(H_{s+i}^{r}\) vanishes on a nonempty open subset of \(\mathfrak{o}_{k}^{l}\) and hence is identically zero by the analytic continuation principle for restricted power series. Again by Lemma 3.6, we conclude that \(\sum_{i=0}^{\infty}h_{i}t^{i}\in\left(\operatorname{Frac}A\right)(t)=\operatorname{Frac}(A[t])\).
**Proposition 3.8**.: _Let \(k\left[x_{1},\ldots,x_{l}\right]\) be the ring of polynomials in \(l\) variables over a complete non-Archimedean field \(k\) and let \(\sum_{i_{1}=0}^{\infty}\cdots\sum_{i_{l}=0}^{\infty}c_{i_{1}\ldots i_{l}}x_{1}^{i_{1}}\ldots x_{l}^{i_{l}}\in k\left\langle x_{1},\ldots,x_{l}\right\rangle\) be a power series converging on the \(l\)-fold cartesian product \(\mathfrak{o}_{k}^{l}\) of the valuation ring \(\mathfrak{o}_{k}\) of \(k\), if_
\[\sum_{i_{1}=0}^{\infty}\cdots\sum_{i_{l}=0}^{\infty}\left(c_{i_{1}\ldots i_{l}}a_{1}^{i_{1}}\ldots a_{n-1}^{i_{n-1}}a_{n+1}^{i_{n+1}}\ldots a_{l}^{i_{l}}\right)x_{n}^{i_{n}}\in k\left(x_{n}\right)\]
_for each \(\left(a_{1},\ldots,a_{l}\right)\in\mathfrak{o}_{k}^{l}\) and \(n\in\left\{1,\ldots,l\right\}\), then \(\sum_{i_{1}=0}^{\infty}\cdots\sum_{i_{l}=0}^{\infty}c_{i_{1}\ldots i_{l}}x_{1}^{i_{1}}\ldots x_{l}^{i_{l}}\in k\left(x_{1},\ldots,x_{l}\right)\)._
Proof.: If \(l=1\) then the statement holds trivially. For the subsequent proof we shall assume \(l\geqslant 2\). By Proposition 3.7 we have that \(\sum_{i_{1}=0}^{\infty}\cdots\sum_{i_{l}=0}^{\infty}c_{i_{1}\ldots i_{l}}x_{1}^{i_{1}}\ldots x_{l}^{i_{l}}\in\operatorname{Frac}\left(k\left\langle x_{1},\ldots,x_{n-1},x_{n+1},\ldots,x_{l}\right\rangle\left[x_{n}\right]\right)\subseteq k\left(\left(x_{1},\ldots,x_{n-1},x_{n+1},\ldots,x_{l}\right)\right)\left(x_{n}\right)\) for all \(n=1,\ldots,l\). The desired result then follows from Proposition 3.5.
**Theorem 3.9**.: _Let \(k\left[x_{1},\ldots,x_{n}\right]\) be the ring of polynomials in \(n\) variables over an algebraically closed complete non-Archimedean field \(k\) and let \(f\colon\mathbb{D}^{n}\to k\) be an arbitrary function defined on the \(n\)-fold cartesian product of the unit disc \(\mathbb{D}:=\left\{a\in k:\left|a\right|<1\right\}\) in \(k\), if for each \(\left(a_{1},\ldots,a_{n}\right)\in\mathbb{D}^{n}\) and each \(j\in\left\{1,\ldots,n\right\}\) there exists a power series \(\sum_{i=0}^{\infty}b_{i}x_{j}^{i}\in k\left\{x_{j}\right\}\) converging in \(\mathbb{D}\) such that \(f\left(a_{1},\ldots,a_{j-1},a,a_{j+1},\ldots,a_{n}\right)=\sum_{i=0}^{\infty}b_{i}a^{i}\) for all \(a\in\mathbb{D}\), then there exists a power series \(\sum_{i_{1}=0}^{\infty}\cdots\sum_{i_{n}=0}^{\infty}c_{i_{1}\ldots i_{n}}x_{1}^{i_{1}}\ldots x_{n}^{i_{n}}\in k\left\{x_{1},\ldots,x_{n}\right\}\) converging in \(\mathbb{D}^{n}\) such that \(f\left(a_{1},\ldots,a_{n}\right)=\sum_{i_{1}=0}^{\infty}\cdots\sum_{i_{n}=0}^{\infty}c_{i_{1}\ldots i_{n}}a_{1}^{i_{1}}\ldots a_{n}^{i_{n}}\) for all \(\left(a_{1},\ldots,a_{n}\right)\in\mathbb{D}^{n}\)._
Proof.: This is a direct consequence of Stawski's theorem in [4].
**Lemma 3.10**.: _Let \(K\) be a complete non-Archimedean field and let \(\mathcal{O}_{K}\) be its valuation ring, then_
* _the cardinality of_ \(\mathcal{O}_{K}\) _is at least continuum,_
* \(\mathcal{O}_{K}\) _is a Zariski dense subset of_ \(K\)_._
Proof.: Since the metric of \(\mathcal{O}_{K}\) is by assumption nontrivial, by the general theory of topological groups, we conclude that \(\mathcal{O}_{K}\) admits no isolated points. Therefore, \(\mathcal{O}_{K}\) is a perfect complete metrizable space, and hence is of at least continuum cardinality by Proposition 6.6.5 in [5].
The second assertion is a direct consequence of the first.
**Theorem 3.11**.: _Let \(K\) be an algebraically closed complete non-Archimedean field and let \(f\colon K^{n}\dashrightarrow K\) be a partial function defined on a nonempty Zariski open subset of \(K^{n}\), if \(f_{a_{1},\ldots,a_{i-1}}^{a_{i+1},\ldots,a_{n}}\colon K\dashrightarrow K\) is a rational function for every \(\left(a_{1},\ldots,a_{n}\right)\in\operatorname{dom}f\) and \(i\in\left\{1,\ldots,n\right\}\), then \(f\) is a rational function._
Proof.: Denote by \(\mathcal{O}_{K}\) the valuation ring of \(K\) and denote by \(K\left[x_{1},\ldots,x_{n}\right]\) the ring of polynomials in \(n\) variables over \(K\). Up to a homothety we can assume w.l.o.g. that \(\mathcal{O}_{K}^{\oplus n}\subseteq\operatorname{dom}f\). By Theorem 3.9 there exists \(\sum_{i_{1}=0}^{\infty}\cdots\sum_{i_{n}=0}^{\infty}c_{i_{1}\ldots i_{n}}x_{1}^{i _{1}}\ldots x_{n}^{i_{n}}\in K\left\langle x_{1},\ldots,x_{n}\right\rangle\) such that \(f\left(a_{1},\ldots,a_{n}\right)=\sum_{i_{1}=0}^{\infty}\cdots\sum_{i_{n}=0}^{ \infty}c_{i_{1}\ldots i_{n}}a_{1}^{i_{1}}\ldots a_{n}^{i_{n}}\) for all \(a_{1},\ldots,a_{n}\in\mathcal{O}_{K}\). By Proposition 3.8 and the assumption on \(f\), we obtain that \(\sum_{i_{1}=0}^{\infty}\cdots\sum_{i_{n}=0}^{\infty}c_{i_{1}\ldots i_{n}}x_{1}^{i _{1}}\ldots x_{n}^{i_{n}}\in K\left(x_{1},\ldots,x_{n}\right)\). By Lemma 3.10 and Proposition 3.3 we conclude that \(f\) is a rational function.
**Corollary 3.12**.: _Let \(k\) be an algebraically closed field of continuum cardinality and \(f\colon k^{n}\dashrightarrow k\) be a partial function defined on a nonempty Zariski open subset of \(k^{n}\), if \(f_{a_{1},\ldots,a_{i-1}}^{a_{i+1},\ldots,a_{n}}\colon k\dashrightarrow k\) is a rational function for every \(\left(a_{1},\ldots,a_{n}\right)\in\operatorname{dom}f\) and \(i\in\left\{1,\ldots,n\right\}\), then \(f\) is a rational function._
Proof.: By Theorem 3.11 it suffices to prove that \(k\) admits a complete non-Archimedean absolute value. Indeed, by the model theory of algebraically closed fields, if \(\operatorname{char}k=p>0\) then \(k\) is isomorphic to the completion of an algebraic closure of the local field \(\mathbb{F}_{p}((t))\), and if \(\operatorname{char}k=0\) then \(k\) is isomorphic to the \(p\)-adic Tate field \(\mathbb{C}_{p}\).
## 4 Proof of the theorem in the case of cardinality beyond continuum
In this section we prove that the assertion in Theorem 1.5 holds for algebraically closed fields of cardinality exceeding continuum.
For any algebraically closed field \(k\) and rational function \(f\colon k\dasharrow k\), as usual we denote by \(\deg f\) the mapping degree of \(f\) and denote by \(\operatorname{ord}_{\infty}f\) the order of \(f\) at infinity.
**Lemma 4.1**.: _Let \(X\) be an algebraic subset of the \(n\)-dimensional affine space over an algebraically closed field \(k\) and let \(K\) be an algebraically closed subfield of \(k\), then \(X\bigcap K^{n}\) is a Zariski dense subset of \(X\)._
Proof.: Denote \(m:=\dim_{k}X\) then by Noether's normalization lemma there exists a \(k\)-linear transform \(T\colon k^{n}\longrightarrow k^{m}\) such that \(f:=\left.T\right|_{X}\) is a surjective finite morphism. In particular, the \(k\)-linearity of \(T\) yields \(f^{-1}\left(K^{m}\right)=X\bigcap K^{n}\). Since \(K\) is infinite as it is algebraically closed, and the Zariski topology on the affine line \(k^{1}\) is cofinite, we obtain that \(K^{m}\) is a Zariski dense subset of \(k^{m}\).
Since \(k^{m}\) is normal, we have that \(f\) is an open mapping. Take any nonempty Zariski open subset \(U\) of \(X\), then \(f(U)\bigcap K^{m}\neq\varnothing\). Since \(f\) is surjective, we have \(U\bigcap\left(X\bigcap K^{n}\right)=f^{-1}\left(f(U)\bigcap K^{m}\right)\neq\varnothing\).
**Proposition 4.2**.: _Let \(X\) be a variety over an uncountable algebraically closed field \(k\) and let \((X_{i})_{i=0}^{\infty}\) be a sequence of Zariski closed subsets of \(X\), if \(X\neq X_{i}\) for all \(i\in\mathbb{N}\) then \(X\neq\bigcup_{i=0}^{\infty}X_{i}\)._
Proof.: By passing to an affine patch of \(X\) and applying Noether's normalization lemma we can assume w.l.o.g. that \(X=k^{n}\). Apply induction on \(n\in\mathbb{N}\):
If \(n=0\) then the statement holds trivially. For \(n=1\) we have that \(X_{i}\) is finite for all \(i\in\mathbb{N}\) and hence \(\bigcup_{i=0}^{\infty}X_{i}\) is countable, while by assumption \(k^{1}\) is not.
Suppose that \(n\geqslant 2\). Since the Grassmannian \(\operatorname{Gr}\left(n-1,k^{n}\right)\cong\mathbb{P}_{n-1}(k)\) is uncountable, by Dirichlet's Schubfachprinzip there exists a hyperplane \(H\) in \(k^{n}\) such that \(H\neq X_{i}\bigcap H\) for all \(i\in\mathbb{N}\). By the induction hypothesis we have \(H\neq\bigcup_{i=0}^{\infty}X_{i}\bigcap H\). Therefore in particular \(k^{n}\neq\bigcup_{i=0}^{\infty}X_{i}\).
**Proposition 4.3**.: _Let \(X\) be a variety over an algebraically closed field \(k\) of at least continuum cardinality and let \((\Lambda_{i})_{i=0}^{\infty}\) be a sequence of subsets of \(X\), if \(X=\bigcup_{i=0}^{\infty}\Lambda_{i}\), then there exists an integer \(n\geqslant 0\) and a Zariski dense subset \(\Lambda\) of \(X\), such that \(\Lambda\subseteq\Lambda_{n}\) and \(\Lambda\) is of at most continuum cardinality._
Proof.: By passing to a Zariski dense open subset of \(X\), we can assume w.l.o.g. that \(X\) is a subvariety of affine space \(k^{n}\). By the downward Löwenheim-Skolem theorem, there exists an algebraically closed subfield \(K\) of \(k\), such that \(K\) is of continuum cardinality. By virtue of Lemma 4.1, we can assume w.l.o.g. that the cardinality of \(k\) is continuum. Since \(k\) is uncountable and algebraically closed, by Proposition 4.2 there exists \(n\in\mathbb{N}\) such that \(\Lambda_{n}\) is not Zariski nowhere-dense in \(X\). Since \(X\) is irreducible, we obtain that \(\Lambda_{n}\) is a Zariski dense subset of \(X\). Since \(\Lambda_{n}\subseteq X\subseteq k^{n}\), and \(k^{n}\) is equinumerous to \(k\), we conclude that the cardinality of \(\Lambda_{n}\) is at most continuum.
**Lemma 4.4**.: _Let \(a,a_{0},\ldots,a_{l}\) be \(l+2\) elements of an algebraically closed field \(k\) and \(P,Q:k\to k\) be polynomial functions, if \(\deg P=n\) and \(\deg Q=m=l-n\), then the determinant_
\[\Delta=\left|\begin{array}{cccccccc}1&a&\ldots&a^{m}&0&0&\ldots&0\\ P\left(a_{0}\right)&P\left(a_{0}\right)a_{0}&\ldots&P\left(a_{0}\right)a_{0}^{m} &Q\left(a_{0}\right)&Q\left(a_{0}\right)a_{0}&\ldots&Q\left(a_{0}\right)a_{0} ^{n}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ P\left(a_{l}\right)&P\left(a_{l}\right)a_{l}&\ldots&P\left(a_{l}\right)a_{l}^{m }&Q\left(a_{l}\right)&Q\left(a_{l}\right)a_{l}&\ldots&Q\left(a_{l}\right)a_{ l}^{n}\end{array}\right|\]
_is equal to \(Q(a)\cdot\operatorname{res}(P,Q)\prod_{i=0}^{l-1}\prod_{j=i+1}^{l}\left(a_{j}-a _{i}\right)\)._
Proof.: Define the following matrices
\[A=\left[\begin{array}{cccccc}Q\left(a_{0}\right)&Q\left(a_{0}\right)a_{0}& \ldots&Q\left(a_{0}\right)a_{0}^{n}\\ Q\left(a_{1}\right)&Q\left(a_{1}\right)a_{1}&\ldots&Q\left(a_{1}\right)a_{1}^{n} \\ \vdots&\vdots&\ddots&\vdots\\ Q\left(a_{n}\right)&Q\left(a_{n}\right)a_{n}&\ldots&Q\left(a_{n}\right)a_{n}^{ n}\end{array}\right]\]
\[B=\left[\begin{array}{cccccc}P\left(a_{0}\right)\left(a_{0}-a\right)&P \left(a_{0}\right)\left(a_{0}-a\right)a_{0}&\ldots&P\left(a_{0}\right)\left(a_ {0}-a\right)a_{0}^{m-1}&Q\left(a_{0}\right)&Q\left(a_{0}\right)a_{0}&\ldots&Q \left(a_{0}\right)a_{0}^{n}\\ P\left(a_{1}\right)\left(a_{1}-a\right)&P\left(a_{1}\right)\left(a_{1}-a\right) a_{1}&\ldots&P\left(a_{1}\right)\left(a_{1}-a\right)a_{1}^{m-1}&Q\left(a_{1} \right)&Q\left(a_{1}\right)a_{1}&\ldots&Q\left(a_{1}\right)a_{1}^{n}\\ \vdots&&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ P\left(a_{l}\right)\left(a_{l}-a\right)&P\left(a_{l}\right)\left(a_{l}-a\right) a_{l}&\ldots&P\left(a_{l}\right)\left(a_{l}-a\right)a_{l}^{m-1}&Q\left(a_{l} \right)&Q\left(a_{l}\right)a_{l}&\ldots&Q\left(a_{l}\right)a_{l}^{n}\end{array}\right]\]
If \(m=0\) i.e. \(Q\) is a constant function, then \(\Delta=\det A=Q\left(a_{0}\right)^{n+1}\prod_{i=0}^{n-1}\prod_{j=i+1}^{n} \left(a_{j}-a_{i}\right)=Q(a)\cdot\operatorname{res}(P,Q)\prod_{i=0}^{l-1} \prod_{j=i+1}^{l}\left(a_{j}-a_{i}\right)\). Suppose that \(m\geqslant 1\), then by applying elementary column transformations, we obtain that \(\Delta=\det B\). Define linear polynomial function \(L_{a}:k\to k\) by \(L_{a}(b)=b-a\), then by observation \(B\) is the product of the Vandermonde matrix \((a_{j}^{i})_{j=0,\ldots,l}^{i=0,\ldots,l}\) with the Sylvester matrix of \(L_{a}P\) and \(Q\). By the universal property of the resultant, we have \(\operatorname{res}(L_{a}P,Q)=\operatorname{res}(P,Q)\operatorname{res}\left(L_ {a},Q\right)=Q(a)\cdot\operatorname{res}(P,Q)\). Therefore we conclude that \(\det B=Q(a)\cdot\operatorname{res}(P,Q)\prod_{i=0}^{l-1}\prod_{j=i+1}^{l} \left(a_{j}-a_{i}\right)\).
**Proposition 4.5**.: _Let \(k\) be an algebraically closed field and \(f\colon k\dashrightarrow k\) be a nonzero rational function, then the following properties hold:_
1. \(n:=\deg f+\min\left\{0,\operatorname{ord}_{\infty}f\right\}\) _and_ \(m:=\deg f-\max\left\{0,\operatorname{ord}_{\infty}f\right\}\) _are non-negative integers,_
2. _for any_ \(l+2\) _elements_ \(a,a_{0},\ldots,a_{l}\) _of_ \(\operatorname{dom}f\)_, if_ \(a_{0},\ldots,a_{l}\) _are pairwise distinct and_ \(l=n+m\)_, then the determinants_ \[\alpha=\left|\begin{array}{cccccc}1&a&\ldots&a^{n}&0&0&\ldots&0\\ 1&a_{0}&\ldots&a_{0}^{n}&f\left(a_{0}\right)&f\left(a_{0}\right)a_{0}&\ldots&f \left(a_{0}\right)a_{0}^{m}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 1&a_{l}&\ldots&a_{l}^{n}&f\left(a_{l}\right)&f\left(a_{l}\right)a_{l}&\ldots&f \left(a_{l}\right)a_{l}^{m}\end{array}\right|\] \[\beta=\left|\begin{array}{cccccc}1&a&\ldots&a^{n}&0&0&\ldots&0 \\ f\left(a_{0}\right)&f\left(a_{0}\right)a_{0}&\ldots&f\left(a_{0}\right)a_{0}^{m }&1&a_{0}&\ldots&a_{0}^{n}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ f\left(a_{l}\right)&f\left(a_{l}\right)a_{l}&\ldots&f\left(a_{l}\right)a_{l}^{m }&1&a_{l}&\ldots&a_{l}^{n}\end{array}\right|\] _satisfy that_ \(\beta\neq 0\) _and_ \(f(a)=(-1)^{nm}\alpha/\beta\)_._
Proof.: By the assumption on \(f\), there exist coprime polynomial functions \(P,Q\colon k\to k\) such that \(Q(a)\neq 0\) and \(f(a)=\dfrac{P(a)}{Q(a)}\) for all \(a\in\operatorname{dom}f\). By definition we have \(n=\deg P\geqslant 0\) and \(m=\deg Q\geqslant 0\).
By Lemma 4.4, we have \(\alpha\cdot\prod_{i=0}^{l}Q\left(a_{i}\right)=P(a)\cdot\operatorname{res}(Q,P) \prod_{i=0}^{l-1}\prod_{j=i+1}^{l}\left(a_{j}-a_{i}\right)\) and \(\beta\cdot\prod_{i=0}^{l}Q\left(a_{i}\right)=Q(a)\cdot\operatorname{res}(Q,P) \prod_{i=0}^{l-1}\prod_{j=i+1}^{l}\left(a_{j}-a_{i}\right)\). Since \(a_{0},\ldots,a_{l}\) are distinct and \(P\) is coprime to \(Q\), we obtain that \(\prod_{i=0}^{l-1}\prod_{j=i+1}^{l}\left(a_{j}-a_{i}\right)\neq 0\) and \(\operatorname{res}(P,Q)\neq 0\). Since \(a,a_{0},\ldots,a_{l}\in\operatorname{dom}f\), we have that \(Q(a)\neq 0\), and \(Q\left(a_{i}\right)\neq 0\) for all \(i=0,\ldots,l\). Therefore \(\beta\neq 0\) and \(\dfrac{\alpha}{\beta}=\dfrac{\operatorname{res}(Q,P)}{\operatorname{res}(P,Q)} \dfrac{P(a)}{Q(a)}=(-1)^{nm}f(a)\).
**Theorem 4.6**.: _Let \(X\) be a normal variety over an algebraically closed field \(k\) of cardinality exceeding continuum and let \(f\colon X\times k^{1}\dashrightarrow k\) be a partial function defined on a nonempty Zariski open subset of \(X\times k^{1}\), if \(f_{a}\) and \(f^{b}\) are rational functions for all \((a,b)\in\operatorname{dom}f\), then \(f\) is a rational function._
Proof.: If \(f|_{\operatorname{dom}f}\equiv 0\) then the statement holds trivially. For the subsequent proof we shall assume that \(f\) is not constant zero on its domain of definition. Therefore up to a translation we can assume w.l.o.g. that \(f^{0}\colon X\dashrightarrow k\) is a nonzero rational function. By virtue of Corollary 3.2, we can assume w.l.o.g that \(X=\left\{a\in\operatorname{dom}f^{0}\mid f(a,0)\neq 0\right\}\). In particular, we have that \(f_{a}\colon k\dashrightarrow k\) is a nonzero rational function for each \(a\in X\).
For any \(d\in\mathbb{N}\) and \(e\in\mathbb{Z}\) define \(\Lambda_{d}^{e}:=\{a\in X\mid\deg f_{a}=d,\,\operatorname{ord}_{\infty}\,(f_{a})=e\}\), then \(X=\coprod_{e\in\mathbb{Z}}\coprod_{d=0}^{\infty}\Lambda_{d}^{e}\). By Proposition 4.3 there exists \((d,e)\in\mathbb{N}\times\mathbb{Z}\) and a Zariski dense subset \(\Lambda\) of \(X\), such that \(\deg f_{a}=d\) and \(\operatorname{ord}_{\infty}\,(f_{a})=e\) for all \(a\in\Lambda\), and the cardinality of \(\Lambda\) is at most continuum. Denote \(n:=d+\min\{0,e\}\), \(m:=d-\max\{0,e\}\) and \(l:=n+m\), then by Proposition 4.5 we have \(n,m,l\in\mathbb{N}\).
Since \(\{\operatorname{dom}f_{a}\}_{a\in\Lambda}\) is a family of at most continuum many cofinite subsets of \(k^{1}\) and the cardinality of \(k\) exceeds continuum, we conclude that \(S:=\bigcap_{a\in\Lambda}\operatorname{dom}f_{a}\) is infinite, and hence Zariski dense in \(k^{1}\). In particular, there exist distinct elements \(b_{0},\dots,b_{l}\in S\). Again by virtue of Corollary 3.2, we can assume w.l.o.g. that \(X=\bigcap_{i=0}^{l}\operatorname{dom}f^{b_{i}}\).
Define the regular functions \(\phi\colon X\times k^{1}\to k\) and \(\psi\colon X\times k^{1}\to k\) by
\[\phi(a,b):=\left|\begin{array}{cccccccc}1&b&\dots&b^{n}&0&0&\dots&0\\ 1&b_{0}&\dots&b_{0}^{n}&f\,(a,b_{0})&f\,(a,b_{0})\,b_{0}&\dots&f\,(a,b_{0})\,b_{0}^{m}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 1&b_{l}&\dots&b_{l}^{n}&f\,(a,b_{l})&f\,(a,b_{l})\,b_{l}&\dots&f\,(a,b_{l})\,b_{l}^{m}\end{array}\right|\]
\[\psi(a,b):=\left|\begin{array}{cccccccc}1&b&\dots&b^{n}&0&0&\dots&0\\ f\,(a,b_{0})&f\,(a,b_{0})\,b_{0}&\dots&f\,(a,b_{0})\,b_{0}^{m}&1&b_{0}&\dots&b_{0}^{n}\\ \vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\ddots&\vdots\\ f\,(a,b_{l})&f\,(a,b_{l})\,b_{l}&\dots&f\,(a,b_{l})\,b_{l}^{m}&1&b_{l}&\dots&b_{l}^{n}\end{array}\right|\]
Take any \(a\in\Lambda\), then by Proposition 4.5 we have \(\psi_{a}(b)\neq 0\) and \(f_{a}(b)=(-1)^{nm}\phi_{a}(b)/\psi_{a}(b)\) for all \(b\in S\subseteq\operatorname{dom}f_{a}\).
Define the rational function \(g\colon X\times k\dashrightarrow k\) by \(g(a,b):=(-1)^{nm}\frac{\phi(a,b)}{\psi(a,b)}\) on \(\operatorname{dom}g:=\{(a,b)\in\operatorname{dom}f\mid\psi_{a}(b)\neq 0\}\), then \(\Lambda\times S\subseteq\operatorname{dom}g\) and by construction \(f(a,b)=g(a,b)\) for all \((a,b)\in\Lambda\times S\). Recalling that \(\Lambda\) and \(S\) are Zariski dense subsets of \(X\) and \(k^{1}\), respectively, the desired result follows from Proposition 3.3.
**Corollary 4.7**.: _Let \(k\) be an algebraically closed field of cardinality exceeding continuum, and let \(f\colon k^{n}\dashrightarrow k\) be a partial function defined on a nonempty Zariski open subset of \(k^{n}\), if \(f^{a_{i+1},\dots,a_{n}}_{a_{1},\dots,a_{i-1}}\colon k\dashrightarrow k\) is a rational function for every \((a_{1},\dots,a_{n})\in\operatorname{dom}f\) and \(i\in\{1,\dots,n\}\), then \(f\) is a rational function._
Proof.: Recall Theorem 4.6 and apply mathematical induction on \(n\in\mathbb{N}\).
| ```japanese
Let $K$ be an uncountable algebraically closed field and let $f$ be a partial function defined on a nonempty Zariski open subset of $K \times \dots \times K$ such that $f$ is a rational function of each coordinate when the remaining coordinates are held fixed. Then $f$ is a rational function of all $n$ variables.
``` |
2303.02821 | Susceptibility of a single photon wave packet | The explicit compact expression for the susceptibility tensor of a single
photon wave packet on the photon mass-shell is derived. It is assumed that the
probe photon is hard, the test photon is soft, and their total energy is below
the electron-positron pair creation threshold. It turns out that a single
photon wave packet can be regarded as a birefringent gyrotropic dispersive
medium in the process of light-by-light scattering. The explicit expression for
the inclusive probability to record the probe photon in the process of
light-by-light scattering is obtained in the first nontrivial order of
perturbation theory where the interference effect of the free passed and
scattered parts of the photon wave function dominates. This effect is of order
$\alpha^2$ in contrast to the standard contribution to the light-by-light
scattering cross-section which is of order $\alpha^4$. The possible nontrivial
shapes of the wave functions of probe and test photons are taken into account.
The evolution of the Stokes parameters of a probe photon is described. The
change of the Stokes parameters is rather large for hard probe photons and
sufficiently intense beams of soft test photons. | P. O. Kazinski, T. V. Solovyev | 2023-03-06T01:25:21 | http://arxiv.org/abs/2303.02821v3 | # Susceptibility of a single photon wave packet
###### Abstract
The explicit compact expression for the susceptibility tensor of a single photon wave packet on the photon mass-shell is derived. It is assumed that the probe photon is hard, the test photon is soft, and their total energy is below the electron-positron pair creation threshold. It turns out that a single photon wave packet can be regarded as a birefringent gyrotropic dispersive medium in the process of light-by-light scattering. The explicit expression for the inclusive probability to record the probe photon in the process of light-by-light scattering is obtained in the first nontrivial order of perturbation theory where the interference effect of the free passed and scattered parts of the photon wave function dominates. This effect is of order \(\alpha^{2}\) in contrast to the standard contribution to the light-by-light scattering cross-section which is of order \(\alpha^{4}\). The possible nontrivial shapes of the wave functions of probe and test photons are taken into account. The evolution of the Stokes parameters of a probe photon is described. The change of the Stokes parameters is rather large for hard probe photons and sufficiently intense beams of soft test photons.
## 1 Introduction
The study of the properties inherent to elementary particles such as mass, spin, charges, magnetic and dipole moments, and others is one of the fundamental problems of physics. It was shown in the paper [1] that another such characteristic of particles is their susceptibility. Staying in line with traditions of classical physics, it appears at first sight that the susceptibility is a property of a group of particles or of particles with nontrivial internal structure. Nevertheless, as was shown in [1], the susceptibility tensor can be defined, evaluated, and measured experimentally for the wave packet of a single electron. In the present paper, we continue the investigation of susceptibilities of elementary particles and find the susceptibility tensor for the wave packet of a single photon on the photon mass-shell.
The simplest way to calculate the susceptibility tensor of a single photon wave packet could be in the use of the Heisenberg-Euler Lagrangian [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. This is the most common method to describe the light-by-light scattering process that allows one to obtain the effective susceptibility of a beam of photons or of a macroscopic electromagnetic field. However, in applying this procedure to derivation of the susceptibility tensor of a single photon, it is not immediately clear what should be taken as a background field since the average values of the electromagnetic field operator over Fock states vanish. Another drawback of this approach is that it implies the total energy of the probe and test photons is much less than the electron-positron pair creation threshold and it is not applicable near this threshold. It is known (see, e.g., [14]) that the light-by-light cross section strongly depends on the energies of scattered particles and it rapidly increases near the electron-positron pair creation threshold. The second nonperturbative method to find the susceptibility tensor is to employ the exact expression for the photon polarization tensor on a strong plane wave electromagnetic background [15, 16, 17, 18, 19]. This approach is restricted to plane electromagnetic test waves and, as in the case of the Heisenberg-Euler effective action, is not immediately applicable to a wave packet of a single test photon. Therefore, in the present paper we stick to the standard perturbative approach for description of light-by-light scattering [20, 21, 22, 23, 24] fully taking into account the shapes of the wave functions of probe and test photons.
By now there are papers where the influence of profiles of the wave packets of scattered photons on various aspects of the light-by-light scattering was studied [6, 7, 8, 9, 10, 11, 12, 13, 17, 18, 19, 25]. Nevertheless, as far as we know, the expression for the susceptibility tensor of a single photon wave packet and of a beam of photons of a general
profile has not been found in a closed and concise form. In the present paper we fill this gap. Furthermore, we derive the explicit expression for the inclusive probability to record a probe photon in the light-by-light scattering taking into account the interference of the free passed part of the probe photon wave function with its scattered part. This interference effect stems from a change of the probe photon wave function in scattering on an effective medium with susceptibility tensor of the test photon or of a beam of such photons. The inclusive probability depends on the nontrivial structure of the states of probe and test photons. Under certain approximations, the expression for this probability implies a simple equation for evolution of the Stokes parameters of the probe photon that generalizes the relations obtained in [26, 27, 28, 29, 30]. It turns out that the evolution of the Stokes parameters of the probe photon depends severely on the shape of its wave packet, in particular, on the presence of imaginary part of the density matrix of its state in the momentum space. Unpolarized states of probe photons possessing the density matrix with nonzero imaginary part become polarized as a result of scattering on test photons. The magnitude of the interference effect and, respectively, a change of the Stokes parameters can be quite substantial for scattering of the hard probe photon on the intense laser beam when the Mandelstam variable \(s\approx 4m^{2}\), where \(m\) is the electron mass. This effect can be observed on the existing and planned experimental facilities where the profile of the wave packet of a probe photon and its polarization can be controlled [31, 32, 33, 34, 35].
The paper is organized as follows. In Sec. 2, the general formula for the inclusive probability to record a probe photon scattered by a test photon is given. Section 3 is devoted to derivation of the concise explicit expression for the susceptibility tensor of a single photon wave packet. Here we also provide the estimates for the order of magnitude of this quantity in different regimes. In the next Sec. 4, we simplify the general expression for the inclusive probability and describe the evolution of the Stokes parameters of the probe photon. In Conclusion, we summarize the results. Some calculations arising in evaluating the inclusive probability are removed to Appendix A. In Appendix B, we generalize the expression for the susceptibility tensor of a single electron wave packet obtained in [1] to a nonstationary case.
We follow the notation adopted in [1]. The Greek indices \(\alpha\), \(\beta\), \(\bar{\alpha}\), \(\bar{\beta}\), \(\ldots\) denote the quantum numbers of particle states. The Greek \(\mu\) is the space-time index taking the values \(\overline{0,3}\) and the Latin \(i\), \(j\) are the spatial indices. The Greek \(\lambda=\pm 1\) specifies the circular polarization, whereas \(l,l^{\prime}=\{1,2\}\) are for the linear polarization. The summation (integration) over repeated indices is always understood unless otherwise stated. We also suppose that the quantum states of particles are normalized to unity in some sufficiently large volume \(V\). The complex conjugation is denoted by the bar over the symbol. Furthermore, wherever it does not lead to misunderstanding, we use the matrix notation. For example,
\[\bar{a}a\equiv\bar{a}_{\alpha}a_{\alpha}\equiv\sum_{\alpha}\bar{a}_{\alpha}a _{\alpha},\qquad\bar{d}Dd\equiv\bar{d}_{\bar{\alpha}}D_{\bar{\alpha}\alpha}d_ {\alpha}=\sum_{\bar{\alpha},\alpha}\bar{d}_{\bar{\alpha}}D_{\bar{\alpha} \alpha}d_{\alpha},\qquad\mbox{etc.} \tag{1}\]
The operators acting in the Fock space are denoted by letters with carets. We use the system of units such that \(\hbar=c=1\) and \(e^{2}=4\pi\alpha\), where \(\alpha\) is the fine structure constant. The Minkowski metric is taken with the mostly minus signature.
## 2 General formulas
Consider the process of an elastic scattering of a photon by a photon in the leading nontrivial order of perturbation theory. As the initial state of photons at \(t=t_{1}\rightarrow-\infty\), we take the coherent state defined by the density matrix
\[\hat{R}_{ph}=|d\rangle\langle\bar{d}|e^{-\bar{d}d}, \tag{2}\]
where \(d_{\alpha}\) is the complex amplitude of the coherent state at the instant of time \(t_{1}\). We suppose that the quantum numbers \(\alpha\) contain the particle energy and
\[d_{\alpha}=s_{\alpha}+h_{\alpha}, \tag{3}\]
where \(s_{\alpha}\) describes the state of the laser beam comprised of low energy photons and \(h_{\alpha}\) determines the state of hard probe photons. Furthermore, we assume that the total energy of any two photons from the state \(\hat{R}_{ph}\) is not enough to create an electron-positron pair, i.e., \(s<4m^{2}\). The initial state of the whole system takes the form
\[\hat{R}=\hat{R}_{ph}\otimes|0\rangle_{e^{-}}\langle 0|_{e^{-}}\otimes|0 \rangle_{e^{+}}\langle 0|_{e^{+}}, \tag{4}\]
where \(|0\rangle_{e^{-}}\) is the vacuum state of electrons and \(|0\rangle_{e^{+}}\) is the vacuum state of positrons.
In order to define the quantum measurement in the final state at \(t=t_{2}\rightarrow+\infty\), we introduce the projectors
\[\hat{\tilde{\Pi}}_{D}:=1-\hat{\Pi}_{D},\qquad\hat{\Pi}_{D}:=:\exp(-\hat{c}^{ \dagger}D\hat{c}):, \tag{5}\]
where \(D=D^{\dagger}\) is the projector in the one-particle Hilbert space. The projector \(\hat{\tilde{\Pi}}_{D}\) singles out the states in the Fock space that contain at least one photon with quantum numbers specified by the projector \(D\) due to the fact that
\[(D\hat{c})_{\alpha}\hat{\Pi}_{D}=\hat{\Pi}_{D}(\hat{c}^{\dagger}D)_{\alpha}=0. \tag{6}\]
Then the inclusive probability to record a photon by the detector at the instant of time \(t_{2}\) reads
\[P_{D}=\mbox{Sp}(\hat{R}\hat{U}_{t_{1},t_{2}}\hat{\tilde{\Pi}}_{D}\hat{U}_{t_{2 },t_{1}})=\mbox{Sp}(\hat{R}(t_{1})\hat{S}_{t_{1},t_{2}}\hat{\tilde{\Pi}}_{D(t_ {2})}\hat{S}_{t_{2},t_{1}}), \tag{7}\]
where
\[D_{\bar{\alpha}\alpha}(t_{2})=D_{\bar{\alpha}\alpha}e^{i(k_{0\bar{\alpha}}-k_{0\alpha})t_{2}}, \tag{8}\]
and \(\hat{R}(t_{1})\) has the form (2), (4), where one should substitute
\[d_{\alpha}\to d_{\alpha}(t)\Big{|}_{t=0}=d_{\alpha}e^{it_{1}k_{0 \alpha}}. \tag{9}\]
In expression (7), we have also introduced the standard notation for the evolution operator \(\hat{U}_{t_{2},t_{1}}\) and the \(S\)-operator.
Further we assume that \(D_{\bar{\alpha}\alpha}\) is diagonal with respect to the photon energy and, consequently, \(D_{\bar{\alpha}\alpha}(t_{2})=D_{\bar{\alpha}\alpha}\). Moreover, it is convenient to specify the form of the complex amplitude \(d_{\alpha}\) at \(t=0\) and not at the initial instant of time \(t_{1}\). Then \(d_{\alpha}\) taken at the initial instant of time is found by reversing formula (9). Henceforth, for brevity, we denote the complex amplitude of the coherent state at the instant of time \(t=0\) as \(d_{\alpha}\). Bearing this in mind and taking the limits \(t_{2}\rightarrow+\infty\), \(t_{1}\rightarrow-\infty\), we obtain
\[P_{D}=\mbox{Sp}(\hat{R}\hat{S}^{\dagger}\hat{\tilde{\Pi}}_{D}\hat{S}), \tag{10}\]
where \(\hat{S}\) is the operator of the \(S\)-matrix.
When the aforementioned restrictions on the energies of photons in the initial state are satisfied, only the process of light-by-light scattering may happen in the leading order of perturbation theory. Then the \(S\)-matrix becomes
\[\hat{S}=1+\hat{C}+\cdots, \tag{11}\]
where the operator
\[\hat{C}=\hat{c}^{\dagger}_{\bar{\alpha}}\hat{c}^{\dagger}_{\bar{\beta}}C_{\bar{\alpha}\bar{\beta}\alpha\beta}\hat{c}_{\alpha}\hat{c}_{\beta} \tag{12}\]
describes the light-by-light scattering in the leading order of perturbation theory. In virtue of unitarity of the \(S\)-matrix,
\[\hat{C}^{\dagger}=-\hat{C}, \tag{13}\]
in the given order of perturbation theory and domain of parameters. Therefore,
\[C_{\bar{\alpha}\bar{\beta}\alpha\beta}=C_{\bar{\beta}\bar{\alpha}\alpha\beta}=C_{\bar{\alpha}\bar{\beta}\beta\alpha}=-C^{*}_{\alpha\beta\bar{\alpha}\bar{\beta}}. \tag{14}\]
At the same order of perturbation theory,
\[P_{D}=\mbox{Sp}(\hat{R}_{ph}\hat{\tilde{\Pi}}_{D})+\big{[}\,\mbox{Sp}(\hat{R} _{ph}\hat{\tilde{\Pi}}_{D}\hat{C})+c.c.\big{]}+\cdots. \tag{15}\]
The traces of operators appearing in this expression are readily evaluated (see Appendix A)
\[\begin{split}\mbox{Sp}(\hat{R}_{ph}\hat{\tilde{\Pi}}_{D})&=1-e^{-\bar{d}Dd},\\ \mbox{Sp}(\hat{R}_{ph}\hat{\tilde{\Pi}}_{D}\hat{C})&=\big{[}\bar{d}_{\bar{\alpha}}\bar{d}_{\bar{\beta}}-(\bar{d}\tilde{D})_{\bar{\alpha}}(\bar{d}\tilde{D})_{\bar{\beta}}e^{-\bar{d}Dd}\big{]}C_{\bar{\alpha}\bar{\beta}\alpha\beta}d_{\alpha}d_{\beta},\end{split} \tag{16}\]
where \(\tilde{D}_{\bar{\alpha}\alpha}:=\delta_{\bar{\alpha}\alpha}-D_{\bar{\alpha}\alpha}\).
We assume that
\[(Ds)_{\alpha}=0, \tag{17}\]
i.e., the detector does not record soft photons \(s_{\alpha}\). In this case,
\[(\bar{d}\tilde{D})_{\alpha}=\bar{s}_{\alpha}+(\bar{h}\tilde{D})_{\alpha},\qquad\bar{d}Dd=\bar{h}Dh. \tag{18}\]
Furthermore, we suppose that the state of hard photons is close to a one-particle Fock state and so we seek the leading nontrivial contribution to (15) in the limit \(h_{\alpha}\to 0\). Then
\[P_{D}=\bar{h}Dh+\Big{\{}\big{[}(\bar{h}Dh)\bar{s}_{\bar{\alpha}}\bar{s}_{\bar{\beta}}s_{\alpha}s_{\beta}+\big{(}\bar{h}_{\bar{\alpha}}\bar{h}_{\bar{\beta}}-(\bar{h}\tilde{D})_{\bar{\alpha}}(\bar{h}\tilde{D})_{\bar{\beta}}+2\bar{s}_{\bar{\alpha}}(\bar{h}D)_{\bar{\beta}}\big{)}s_{\alpha}s_{\beta}+4\bar{s}_{\bar{\alpha}}(\bar{h}D)_{\bar{\beta}}s_{\alpha}h_{\beta}\big{]}C_{\bar{\alpha}\bar{\beta}\alpha\beta}+c.c.\Big{\}}+\cdots. \tag{19}\]
It follows from the property (14) that the first term in the square brackets is purely imaginary. Hence its contribution is equal to zero. The next terms enclosed in the parentheses vanish due to the energy conservation law and the assumption that the energies of photons in the state \(s_{\alpha}\) are small in comparison with the energies of photons in the state \(h_{\alpha}\). Then within the order of perturbation theory we consider, one deduces
\[P_{D}=\bar{h}_{t}Dh_{t},\qquad(h_{t})_{\bar{\beta}}=h_{\bar{\beta}}+\Phi_{\bar{\beta}\beta}h_{\beta}, \tag{20}\]
where
\[\Phi_{\bar{\beta}\beta}:=4\bar{s}_{\bar{\alpha}}C_{\bar{\alpha}\bar{\beta} \alpha\beta}s_{\alpha}. \tag{21}\]
Formula (20) says that the detector records photons in the state \((h_{t})_{\bar{\beta}}\) that results from scattering of photons in the state \(h_{\beta}\) by the photons in the laser beam described by the state \(s_{\alpha}\). In the case of a mixed initial state of probe photons with the density matrix \(\rho_{\beta\beta^{\prime}}\), the inclusive probability (20) is written as
\[P_{D}=D_{\bar{\beta}^{\prime}\bar{\beta}}(\delta_{\bar{\beta}\beta}+\Phi_{\bar{\beta}\beta})(\delta_{\bar{\beta}^{\prime}\beta^{\prime}}+\bar{\Phi}_{\bar{\beta}^{\prime}\beta^{\prime}})\rho_{\beta\beta^{\prime}}. \tag{22}\]
Notice that formulas (15), (19), (20), and (22) do not contain the standard contribution defining the differential cross-section of light-by-light scattering because it is of a higher order with respect to the coupling constant. The standard contribution becomes the leading one in the domain of quantum numbers \(\bar{\beta}\) where \((Dh)_{\bar{\beta}}\) is negligible.
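To make the structure of (22) concrete, the following minimal numerical sketch evaluates it for a toy finite set of modes; the dimension, the density matrix, the kernel \(\Phi\), and the projector \(D\) are illustrative assumptions rather than quantities taken from the text, with \(\Phi\) chosen anti-Hermitian in accordance with (13) and (14).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6                                   # toy number of probe-photon modes (assumption)

# Hermitian, positive, unit-trace density matrix rho of the probe photon state.
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Anti-Hermitian kernel Phi of small magnitude, cf. the unitarity property (13).
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Phi = 0.01 * (B - B.conj().T)

# Projector D onto the modes registered by the detector (here the first three modes).
D = np.diag([1.0, 1.0, 1.0, 0.0, 0.0, 0.0]).astype(complex)

# Inclusive probability (22): P_D = Tr[(1 + Phi) rho (1 + Phi)^dagger D].
one = np.eye(N)
P_free = np.trace(D @ rho).real                                     # no scattering
P_D = np.trace(D @ (one + Phi) @ rho @ (one + Phi).conj().T).real   # with scattering
print(P_free, P_D)
```

The difference between the two printed numbers is linear in \(\Phi\) at leading order, which is precisely the interference of the free passed and scattered parts of the probe photon wave function.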
## 3 Susceptibility of a photon
The above general formulas allow one to deduce the susceptibility of a single photon wave packet on the mass-shell and to find the inclusive probability to record a photon scattered by other photon or by the laser beam of photons. The amplitude of light-by-light scattering is given in [14, 20, 21, 22, 23, 24]. In our notation,
\[\begin{gathered}\alpha=(\lambda_{1},\mathbf{k}_{1}),\qquad\beta=( \lambda_{2},\mathbf{k}_{2}),\qquad\bar{\alpha}=(\lambda_{3},\mathbf{k}_{3}), \qquad\bar{\beta}=(\lambda_{4},\mathbf{k}_{4}),\\ \sum_{\beta}=\sum_{\lambda_{2}}\int\frac{Vd\mathbf{k}_{2}}{(2\pi) ^{3}},\qquad h_{\beta}=\sqrt{\frac{(2\pi)^{3}}{V}}h_{\lambda_{2}}(\mathbf{k}_ {2}).\end{gathered} \tag{23}\]
The normalization condition takes the form
\[\sum_{\alpha}\bar{h}_{\alpha}h_{\alpha}=\sum_{\lambda}\int d\mathbf{k}|h_{ \lambda}(\mathbf{k})|^{2}=1,\qquad\sum_{\lambda}\int d\mathbf{k}|s_{\lambda}( \mathbf{k})|^{2}=N_{s}, \tag{24}\]
where \(N_{s}\) is the average number of photons in the beam \(s_{\alpha}\). The circular polarization vectors are defined as [20, 21, 22, 23, 24]
\[\mathbf{e}^{(\lambda)}(\mathbf{k})=\frac{1}{\sqrt{2}}(\mathbf{e}_{1}(\mathbf{k })+i\lambda\mathbf{e}_{2}(\mathbf{k})), \tag{25}\]
where \(\lambda=\pm 1\), the linear polarization vector \(\mathbf{e}_{1}(\mathbf{k})\) is perpendicular to the reaction plane, the linear polarization vector \(\mathbf{e}_{2}(\mathbf{k})\) lies in the reaction plane, and \(\big{\{}\mathbf{e}_{1}(\mathbf{k}),\mathbf{e}_{2}(\mathbf{k}),\mathbf{k}\big{\}}\) constitute a right-handed triple.
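As a quick check of the conventions (25), the following Python lines verify that \(\bar{\bf e}^{(\lambda)}({\bf k})\cdot{\bf e}^{(\lambda^{\prime})}({\bf k})=\delta_{\lambda\lambda^{\prime}}\) and \({\bf e}^{(\lambda)}({\bf k})\cdot{\bf k}=0\); the specific wave vector and the construction of the transverse pair \({\bf e}_{1}\), \({\bf e}_{2}\) are illustrative assumptions only.

```python
import numpy as np

k = np.array([0.3, -0.4, 1.2])                  # illustrative wave vector
n = k / np.linalg.norm(k)
e1 = np.cross(np.array([0.0, 0.0, 1.0]), n)     # some unit vector orthogonal to n
e1 /= np.linalg.norm(e1)
e2 = np.cross(n, e1)                            # {e1, e2, n} is a right-handed triple

def e_circ(lam):                                # circular polarization vectors, Eq. (25)
    return (e1 + 1j * lam * e2) / np.sqrt(2)

for lam in (+1, -1):
    for lamp in (+1, -1):
        print(lam, lamp, np.round(np.vdot(e_circ(lam), e_circ(lamp)), 12))
print(np.round(np.dot(e_circ(+1), n), 12), np.round(np.dot(e_circ(-1), n), 12))
```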
In this case,
\[\Phi_{\bar{\beta}\beta}=\frac{\pi i}{2V}\sum_{\lambda_{1},\lambda_{3}}\int d \mathbf{k}_{1}d\mathbf{k}_{3}\delta(k_{3}+k_{4}-k_{1}-k_{2})\bar{s}_{\lambda_{ 3}}(\mathbf{k}_{3})s_{\lambda_{1}}(\mathbf{k}_{1})\frac{M_{\lambda_{3}\lambda_{ 4}\lambda_{1}\lambda_{2}}(s,t,u)}{\sqrt{k_{0}(\mathbf{k}_{1})k_{0}(\mathbf{k}_ {2})k_{0}(\mathbf{k}_{3})k_{0}(\mathbf{k}_{4})}}, \tag{26}\]
where
\[\begin{gathered} s=(k_{1}+k_{2})^{2}=(k_{3}+k_{4})^{2}=2k_{1}k_{2} =2k_{3}k_{4},\qquad t=(k_{1}-k_{3})^{2}=(k_{2}-k_{4})^{2}=-2k_{1}k_{3}=-2k_{2} k_{4},\\ u=(k_{1}-k_{4})^{2}=(k_{2}-k_{3})^{2}=-2k_{1}k_{4}=-2k_{2}k_{3}. \end{gathered} \tag{27}\]
In particular,
\[s=k_{3}^{0}k_{4}^{0}({\bf n}_{3}-{\bf n}_{4})^{2}, \tag{28}\]
where \({\bf n}_{3,4}:={\bf k}_{3,4}/|{\bf k}_{3,4}|\). It is clear that \(s+t+u=0\). Integrating over the spatial momenta \({\bf k}_{1}\) in (26), we arrive at
\[\Phi_{\bar{\beta}\beta}=\frac{\pi i}{2V}\sum_{\lambda_{1},\lambda_{3}}\int d{ \bf k}_{3}\delta(k_{3}^{0}+k_{4}^{0}-k_{1}^{0}-k_{2}^{0})\bar{s}_{\lambda_{3}}( {\bf k}_{3})s_{\lambda_{1}}({\bf k}_{1})\frac{M_{\lambda_{3}\lambda_{4}\lambda_ {1}\lambda_{2}}(s,t,u)}{\sqrt{|{\bf k}_{1}||{\bf k}_{2}||{\bf k}_{3}||{\bf k}_{ 4}|}}\Big{|}_{{\bf k}_{1}={\bf k}_{3}+{\bf k}_{4}-{\bf k}_{2}}. \tag{29}\]
Introduce the notation,
\[s_{\lambda}({\bf k};x^{0}):=e^{-ik_{0}({\bf k})x^{0}}s_{\lambda}({\bf k}), \tag{30}\]
and write the delta function expressing the energy conservation law as a Fourier transform. Then we have
\[\Phi_{\bar{\beta}\beta}=i\sum_{\lambda_{1},\lambda_{3}}\int\!\frac{d{\bf k}_{ 3}dx^{0}}{4V}e^{i(k_{4}^{0}-k_{2}^{0})x^{0}}\bar{s}_{\lambda_{3}}({\bf k}_{3}; x^{0})s_{\lambda_{1}}({\bf k}_{1};x^{0})\frac{M_{\lambda_{3}\lambda_{4} \lambda_{1}\lambda_{2}}(s,t,u)}{\sqrt{|{\bf k}_{1}||{\bf k}_{2}||{\bf k}_{3}||{ \bf k}_{4}|}}\Big{|}_{{\bf k}_{1}={\bf k}_{3}+{\bf k}_{4}-{\bf k}_{2}}. \tag{31}\]
In order to find the susceptibility tensor of a single photon wave packet on the mass-shell, we compare the scattering amplitude of the hard probe photon \((h_{t})_{\bar{\beta}}\) with the amplitude of scattering by a medium with a certain susceptibility tensor \(\chi_{ij}\) in the first Born approximation. Let the medium possess the susceptibility tensor
\[\chi_{ij}\big{(}\frac{x+y}{2},x-y\big{)}, \tag{32}\]
where \(x=(x^{0},{\bf x})\). The dependence of \(\chi_{ij}\) on \((x+y)/2\) is supposed to be slow. The second argument of \(\chi_{ij}\) characterizes the frequency and spatial dispersions, and \(\chi_{ij}\) is a rapidly varying function of this argument. Then, in the first Born approximation, the amplitude of scattering of a photon by the medium with such a susceptibility tensor becomes (see, e.g., [36, 37])
\[S_{\gamma^{\prime}\gamma}=\delta_{\gamma^{\prime}\gamma}+i\frac{k_{0\gamma^{\prime}}^{1/2}k_{0\gamma}^{1/2}}{2V}\int\!d^{4}x\bar{e}_{i}^{(\lambda^{\prime})}({\bf k}^{\prime})\chi_{ij}(x;K)e_{j}^{(\lambda)}({\bf k})e^{i(k_{0\gamma^{\prime}}-k_{0\gamma})x^{0}-i({\bf k}^{\prime}-{\bf k}){\bf x}}, \tag{33}\]
where \(K_{\mu}:=(k_{\mu}^{\prime}+k_{\mu})/2\) and
\[\chi_{ij}(x;K):=\int\!d^{4}ze^{iK_{\mu}z^{\mu}}\chi_{ij}(x,z). \tag{34}\]
It is useful to write (33) as
\[S_{\gamma^{\prime}\gamma}=\delta_{\gamma^{\prime}\gamma}+i\frac{k_{0\gamma^{\prime}}^{1/2}k_{0\gamma}^{1/2}}{2V}\int\!dx^{0}\bar{e}_{i}^{(\lambda^{\prime})}({\bf k}^{\prime})\tilde{\chi}_{ij}(x^{0},\Delta{\bf k};K)e_{j}^{(\lambda)}({\bf k})e^{i(k_{0\gamma^{\prime}}-k_{0\gamma})x^{0}}, \tag{35}\]
where \(\Delta{\bf k}:={\bf k}^{\prime}-{\bf k}\) and we have introduced the notation for the Fourier transform of the susceptibility tensor with respect to the slowly varying spatial argument. Comparing (31) with (35), we obtain
\[\tilde{\chi}_{ij}(x^{0},\Delta{\bf k};K)=\sum_{\lambda_{1},\lambda_{3}}\int\!\frac{d{\bf k}_{3}\bar{s}_{\lambda_{3}}({\bf k}_{3};x^{0})s_{\lambda_{1}}({\bf k}_{3}+\Delta{\bf k};x^{0})}{2|{\bf k}_{4}||{\bf k}_{2}||{\bf k}_{3}|^{1/2}|{\bf k}_{3}+\Delta{\bf k}|^{1/2}}M_{\lambda_{3}\lambda_{4}\lambda_{1}\lambda_{2}}e_{i}^{(\lambda_{4})}({\bf k}_{4})\bar{e}_{j}^{(\lambda_{2})}({\bf k}_{2}), \tag{36}\]
where
\[{\bf k}_{2}={\bf K}-\frac{\Delta{\bf k}}{2},\qquad{\bf k}_{4}={\bf K}+\frac{ \Delta{\bf k}}{2}. \tag{37}\]
Formula (36) gives the general expression for the on-shell susceptibility tensor of photons in the state \(s_{\alpha}\).
Let us simplify expression (36). Recall that [20, 21, 22, 23, 24]
\[M_{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}=M_{-\lambda_{1},-\lambda_{2},- \lambda_{3},-\lambda_{4}},\qquad M_{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4 }}=M_{\lambda_{3}\lambda_{4}\lambda_{1}\lambda_{2}},\qquad M_{\lambda_{1} \lambda_{2}\lambda_{3}\lambda_{4}}=M_{\lambda_{2}\lambda_{1}\lambda_{4}\lambda_{3}}. \tag{38}\]
In the limit of a small momentum transfer, \(|t|\ll 4m^{2}\), \(|t|\ll s\), the nonvanishing independent amplitudes are written as
\[M_{++++}(s)=M_{+-+-}(-s)=8\alpha^{2}f(s),\qquad M_{++--}(s)=-8\alpha^{2}g(s), \tag{39}\]
where
\[f(s) =-\Big{[}1+\Big{(}2-\frac{4}{s^{\prime}}\Big{)}B(s^{\prime})+ \Big{(}-4+\frac{4}{s^{\prime}}\Big{)}B(-s^{\prime})+\Big{(}\frac{4}{s^{\prime}}- \frac{8}{s^{\prime 2}}\Big{)}T(s^{\prime})+\Big{(}2-\frac{4}{s^{\prime}}-\frac{8}{s^{ \prime 2}}\Big{)}T(-s^{\prime})\Big{]}_{s^{\prime}\to s/m^{2}}, \tag{40}\] \[g(s) =-\Big{[}1+\frac{4}{s^{\prime}}B(s^{\prime})-\frac{4}{s^{\prime}}B(- s^{\prime})+\frac{8}{s^{\prime 2}}T(s^{\prime})+\frac{8}{s^{\prime 2}}T(-s^{\prime})\Big{]}_{s^{\prime}\to s/m^{2}},\]
\[B(s)=\sqrt{1-\frac{4}{s}}\,{\rm arsh}\,\frac{\sqrt{-s}}{2}-1=\sqrt{\frac{4}{s}-1} \arcsin\frac{\sqrt{s}}{2}-1,\qquad T(s)={\rm arsh}^{2}\,\frac{\sqrt{-s}}{2}=- \arcsin^{2}\,\frac{\sqrt{s}}{2}, \tag{41}\]
where the principal branches of multivalued functions are taken and \(s\to s+i0.\) For \(s,\)\(|t|,\)\(|u|\) much less than \(4m^{2},\) the independent amplitudes become
\[\begin{split} M_{++++}&=\frac{11\alpha^{2}}{45m^{ 4}}s^{2},\qquad M_{+-+-}=\frac{11\alpha^{2}}{45m^{4}}u^{2},\\ M_{+--+}&=\frac{11\alpha^{2}}{45m^{4}}t^{2},\qquad M _{++--}=-\frac{\alpha^{2}}{15m^{4}}(s^{2}+t^{2}+u^{2}),\qquad M_{++-}=0.\end{split} \tag{42}\]
In the general case, the explicit expressions for \(M_{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}\) are presented in [14, 20, 21, 22, 23, 24].
In order to proceed, we assume that the states \(s_{\alpha}\) and \(h_{\beta}\) are such that
\[|\Delta{\bf k}|\ll|{\bf k}_{3}|,\qquad|\Delta{\bf k}|\ll|{\bf k}_{4}|, \tag{43}\]
where \(\Delta{\bf k}:={\bf k}_{4}-{\bf k}_{2}.\) In fact, condition (43) means that the dispersion of momenta in the wave packet of a hard probe photon is much less than the average energy of modes in the state \(s_{\alpha}.\) The second condition in (43) follows from the first one inasmuch as, by assumption, the energy of the photon \(h_{\beta}\) is much higher than the energies of the modes in the state \(s_{\alpha}.\) In this case, \(s\approx|u|\ll 4m^{2}\) and \(|t|\ll s\) and so one can use formulas (39) for the invariant scattering amplitudes. In the leading order in \(\Delta{\bf k},\) we can discard \(\Delta{\bf k}\) in the integrand of (36) save the argument of the function \(s_{\lambda_{1}}({\bf k}_{3}+\Delta{\bf k}).\) In the general case, this function can vary rapidly even for a small deviation, \(\Delta{\bf k},\) of its argument. Then
\[s=|{\bf k}_{3}||{\bf K}|({\bf n}_{3}-{\bf n})^{2},\qquad{\bf n}:={\bf K}/|{\bf K }|. \tag{44}\]
Denoting concisely
\[s_{\lambda_{3}\lambda_{1}}:=\bar{s}_{\lambda_{3}}({\bf k}_{3};x^{0})s_{ \lambda_{1}}({\bf k}_{3}+\Delta{\bf k};x^{0}), \tag{45}\]
we obtain
\[\begin{split}\sum_{\lambda_{1},\lambda_{3}}s_{\lambda_{3}\lambda_{1}}M_{\lambda_{3}\lambda_{4}\lambda_{1}\lambda_{2}}e_{i}^{(\lambda_{4})}({\bf K})\bar{e}_{j}^{(\lambda_{2})}({\bf K})=& 8\alpha^{2}\Big{[}f_{s}(s)(s_{++}+s_{--})+f_{a}(s)(s_{++}-s_{--})\sigma_{2}-\\ &-g(s)\frac{s_{+-}+s_{-+}}{2}\sigma_{3}+g(s)\frac{s_{+-}-s_{-+}}{2i}\sigma_{1}\Big{]}_{ll^{\prime}}(e_{l})_{i}({\bf K})(e_{l^{\prime}})_{j}({\bf K})=\\ =& 8\alpha^{2}\Big{[}f_{s}(s)(s_{11}+s_{22})+if_{a}(s)(s_{21}-s_{12})\sigma_{2}-\\ &-g(s)\frac{s_{11}-s_{22}}{2}\sigma_{3}+g(s)\frac{s_{12}+s_{21}}{2}\sigma_{1}\Big{]}_{ll^{\prime}}(e_{l})_{i}({\bf K})(e_{l^{\prime}})_{j}({\bf K})=\\ =& 8\alpha^{2}\Big{[}f_{s}(s)(s^{\dagger}\tilde{s})+f_{a}(s)(s^{\dagger}\sigma_{2}\tilde{s})\sigma_{2}-\\ &-\frac{g(s)}{2}\big{(}(s^{\dagger}\sigma_{3}\tilde{s})\sigma_{3}-(s^{\dagger}\sigma_{1}\tilde{s})\sigma_{1}\big{)}\Big{]}_{ll^{\prime}}(e_{l})_{i}({\bf K})(e_{l^{\prime}})_{j}({\bf K}),\end{split} \tag{46}\]
where the basis of linear polarization vectors has been used and, in the last equality, we have rewritten the foregoing expression with the aid of the sigma matrices; here \(\tilde{s}_{l}:=s_{l}({\bf k}_{3}+\Delta{\bf k};x^{0})\) and
\[f_{s}(s):=[f(s)+f(-s)]/2,\qquad f_{a}(s):=[f(s)-f(-s)]/2. \tag{47}\]
Notice that \(f_{s}(s),\)\(f_{a}(s),\) and \(g(s)\) are monotonically increasing functions for \(s\in[0,4m^{2}]\) and are nonnegative on this interval. Moreover,
\[f_{s}(s)=\frac{11s^{2}}{360m^{4}}+\frac{13s^{4}}{21600m^{8}}+\cdots,\qquad f_{ a}(s)=\frac{s^{3}}{630m^{6}}+\frac{s^{5}}{17325m^{10}}+\cdots,\qquad g(s)= \frac{s^{2}}{60m^{4}}+\frac{s^{4}}{1890m^{8}}+\cdots, \tag{48}\]
and
\[\begin{split} f_{s}(4m^{2})=\frac{1}{2}\,{\rm arsh}^{2}\,1+\frac{ 3\pi^{2}}{8}-3\approx 1.0895,\qquad f_{a}(4m^{2})=3\sqrt{2}\,{\rm arsh}\,1-{\rm arsh}^{ 2}\,1-\frac{\pi^{2}}{4}\approx 0.495,\\ g(4m^{2})=\sqrt{2}\,{\rm arsh}\,1-\frac{1}{2}\,{\rm arsh}^{2}\,1+ \frac{\pi^{2}}{8}-1\approx 1.0917.\end{split} \tag{49}\]
The plots of the functions \(f_{s}(s),\)\(f_{a}(s),\) and \(g(s)\) are presented in Fig. 1.
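The threshold values (49) and the leading terms of the expansions (48) are easy to reproduce numerically from (40) and (41); the Python sketch below, written in units \(m=1\), is only an illustration of those formulas.

```python
import numpy as np

# Eq. (41) for 0 < s' <= 4 (real forms); B(-s'), T(-s') follow from the arsh expressions.
def B_pos(sp): return np.sqrt(4/sp - 1) * np.arcsin(np.sqrt(sp)/2) - 1
def B_neg(sp): return np.sqrt(4/sp + 1) * np.arcsinh(np.sqrt(sp)/2) - 1
def T_pos(sp): return -np.arcsin(np.sqrt(sp)/2)**2
def T_neg(sp): return np.arcsinh(np.sqrt(sp)/2)**2

def f_plus(sp):   # f(s) of Eq. (40), s' = s/m^2 > 0
    return -(1 + (2 - 4/sp)*B_pos(sp) + (-4 + 4/sp)*B_neg(sp)
             + (4/sp - 8/sp**2)*T_pos(sp) + (2 - 4/sp - 8/sp**2)*T_neg(sp))

def f_minus(sp):  # f(-s), i.e. Eq. (40) with s' replaced by -s'
    return -(1 + (2 + 4/sp)*B_neg(sp) + (-4 - 4/sp)*B_pos(sp)
             + (-4/sp - 8/sp**2)*T_neg(sp) + (2 + 4/sp - 8/sp**2)*T_pos(sp))

def g_of(sp):     # g(s) of Eq. (40)
    return -(1 + (4/sp)*B_pos(sp) - (4/sp)*B_neg(sp)
             + (8/sp**2)*(T_pos(sp) + T_neg(sp)))

f_s = lambda sp: (f_plus(sp) + f_minus(sp)) / 2    # Eq. (47)
f_a = lambda sp: (f_plus(sp) - f_minus(sp)) / 2

print(f_s(4.0), f_a(4.0), g_of(4.0))   # ~1.0895, 0.495, 1.0917, cf. Eq. (49)
sp = 0.1                               # small s': compare with the leading terms of Eq. (48)
print(f_s(sp), 11*sp**2/360)
print(f_a(sp), sp**3/630)
print(g_of(sp), sp**2/60)
```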
Introduce the relativistic coordinate representation of the complex amplitudes \(s_{\alpha}\) as
\[s_{\lambda}(x):=\int\frac{d{\bf k}}{\sqrt{(2\pi)^{3}2|{\bf k}|}}e^{i{\bf k}{\bf x }}s_{\lambda}({\bf k};x^{0}), \tag{50}\]
and
\[\begin{array}{l}\psi_{s,\lambda}(x):=f_{s}^{1/2}(s)s_{\lambda}(x)=\int\frac{d{\bf k}_{3}f_{s}^{1/2}(s)}{\sqrt{(2\pi)^{3}2|{\bf k}_{3}|}}e^{i{\bf k}_{3}{\bf x}}s_{\lambda}({\bf k}_{3};x^{0}),\\ \psi_{a,\lambda}(x):=f_{a}^{1/2}(s)s_{\lambda}(x)=\int\frac{d{\bf k}_{3}f_{a}^{1/2}(s)}{\sqrt{(2\pi)^{3}2|{\bf k}_{3}|}}e^{i{\bf k}_{3}{\bf x}}s_{\lambda}({\bf k}_{3};x^{0}),\\ \psi_{g,\lambda}(x):=g^{1/2}(s)s_{\lambda}(x)=\int\frac{d{\bf k}_{3}g^{1/2}(s)}{\sqrt{(2\pi)^{3}2|{\bf k}_{3}|}}e^{i{\bf k}_{3}{\bf x}}s_{\lambda}({\bf k}_{3};x^{0}),\end{array} \tag{51}\]
where \(f_{s}^{1/2}(s)\), \(f_{a}^{1/2}(s)\), and \(g^{1/2}(s)\) acting on \(s_{\lambda}(x)\) are understood as pseudodifferential operators with \(k_{3}^{i}=-i\partial/\partial x^{i}\). In that case, the susceptibility tensor can be cast into the form
\[\chi_{ij}(x;K)=\frac{8\alpha^{2}}{{\bf K}^{2}}\Big{[}(\psi_{s}^{\dagger}\psi_{s})\delta_{ij}^{\perp}-i(\psi_{a}^{\dagger}\sigma_{2}\psi_{a})\varepsilon_{ijk}n_{k}-\frac{1}{2}\big{(}(\psi_{g}^{\dagger}\sigma_{3}\psi_{g})\sigma_{3}-(\psi_{g}^{\dagger}\sigma_{1}\psi_{g})\sigma_{1}\big{)}_{ll^{\prime}}(e_{l})_{i}({\bf K})(e_{l^{\prime}})_{j}({\bf K})\Big{]}. \tag{52}\]
The last term in the square brackets can be simplified so that
\[\chi_{ij}(x;K)=\frac{8\alpha^{2}}{{\bf K}^{2}}\Big{\{}\big{[}(\psi_{s}^{ \dagger}\psi_{s})+\frac{1}{2}|\psi_{g,+}\psi_{g,-}|\big{]}\delta_{ij}^{\perp}- i(\psi_{a}^{\dagger}\sigma_{2}\psi_{a})\varepsilon_{ijk}n_{k}-|\psi_{g,+}\psi_{g,-}| e_{i}^{\varphi}({\bf K})e_{j}^{\varphi}({\bf K})\Big{\}}, \tag{53}\]
where \(\delta_{ij}^{\perp}:=\delta_{ij}-n_{i}n_{j}\) and \(e_{i}^{\varphi}\) is the polarization vector \(e_{i1}\) rotated by an angle of \(\varphi=-\arg(\psi_{g+}\psi_{g-})/2\) in the plane spanned by the vectors \(\{{\bf e}_{1},{\bf e}_{2}\}\). The susceptibility of a single photon wave packet is obtained when one retains the leading term in formulas (2), (20) for \(s_{\alpha}\to 0\). It is clear from these formulas that expression (53) also holds for the wave packet of a single photon, where \(s_{\lambda}({\bf k})\) should be interpreted as a single photon wave function.
The susceptibility tensor (53) corresponds to a birefringent gyrotropic dispersive medium. As is seen from asymptotics (48), the term related to gyrotropy is suppressed for \(s\ll 4m^{2}\), in particular, it is absent in the approach based on the Heisenberg-Euler Lagrangian. For infinitely small \(|{\bf K}|\), gyrotropy vanishes and the whole expression (53) tends to a finite nonzero limit. Furthermore, gyrotropy disappears in the case when \(s_{1}=0\) or \(s_{2}=0\). The last term in (53) vanishes for \(s_{+}=0\) or \(s_{-}=0\). In that case, the wave packet of a single photon is purely gyrotropic. Notice that for \(s\approx 4m^{2}\) the contribution of the term responsible for gyrotropy of the wave packet is of the same order as the main contribution to the susceptibility tensor standing at \(\delta_{ij}^{\perp}\).
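For completeness, formula (53) is straightforward to evaluate once the amplitudes \(\psi_{s}\), \(\psi_{a}\), and \(\psi_{g,\pm}\) at a given point are specified. The following Python sketch assembles the \(3\times 3\) tensor for arbitrarily chosen illustrative values of these amplitudes (they are assumptions, not data from the text) and checks that the result is Hermitian and transverse to \({\bf K}\).

```python
import numpy as np

alpha = 1 / 137.036

def chi_tensor(K, psi_s, psi_a, psi_g_circ, e1, e2):
    """3x3 susceptibility tensor of Eq. (53).
    psi_s, psi_a: 2-component complex amplitudes in the linear basis {e1, e2};
    psi_g_circ = (psi_g_plus, psi_g_minus) in the circular basis (25)."""
    n = K / np.linalg.norm(K)
    K2 = np.dot(K, K)
    delta_perp = np.eye(3) - np.outer(n, n)
    eps_n = np.array([[0, n[2], -n[1]],
                      [-n[2], 0, n[0]],
                      [n[1], -n[0], 0]])              # epsilon_{ijk} n_k
    sigma2 = np.array([[0, -1j], [1j, 0]])
    scal = np.vdot(psi_s, psi_s).real                 # psi_s^dagger psi_s
    gyro = np.vdot(psi_a, sigma2 @ psi_a).real        # psi_a^dagger sigma_2 psi_a
    gp, gm = psi_g_circ
    mod_g = abs(gp * gm)
    phi = -np.angle(gp * gm) / 2
    e_phi = np.cos(phi) * e1 + np.sin(phi) * e2       # e_1 rotated by phi in the {e1, e2} plane
    return (8 * alpha**2 / K2) * ((scal + mod_g / 2) * delta_perp
                                  - 1j * gyro * eps_n
                                  - mod_g * np.outer(e_phi, e_phi))

# Toy example: probe momentum along z, illustrative amplitudes.
K = np.array([0.0, 0.0, 1.0])
e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
chi = chi_tensor(K, psi_s=np.array([0.3 + 0.1j, 0.2j]),
                 psi_a=np.array([0.1, 0.05 + 0.02j]),
                 psi_g_circ=(0.2 + 0.1j, 0.1), e1=e1, e2=e2)
print(np.allclose(chi, chi.conj().T))   # Hermitian, as expected for (53)
print(chi @ K)                          # transverse: chi_ij K_j = 0
```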
Let us estimate the magnitude of the susceptibility (53). By the order of magnitude,
\[\chi_{ij}\sim\frac{8\alpha^{2}}{{\bf K}^{2}}f_{s}(s){\bf A}^{2}=\frac{2\alpha }{\pi}f_{s}(s)\frac{m^{2}}{{\bf K}^{2}}K_{u}^{2}, \tag{54}\]
where \({\bf A}(x)\) is the electromagnetic potential in the Coulomb gauge and
\[K_{u}^{2}:=e^{2}{\bf A}^{2}/m^{2} \tag{55}\]
Figure 1: The dependence of \(f_{s}(s)\), \(f_{a}(s)\), and \(g(s)\) on \(s^{\prime}=s/m^{2}\). The solid line is \(f_{s}(s)\), the dashed line is \(f_{a}(s)\), and the dashed dotted line is \(g(s)\).
is the undulator strength parameter characterizing the applicability of the standard perturbation theory [38, 39]. If \(K_{u}\ll 1\), then the perturbation theory is applicable, while for \(K_{u}\gtrsim 1\) the background field has to be taken into account nonperturbatively. If the energy of the hard probe photon is close to the electron-positron pair creation threshold, \(|{\bf K}|\sim 2m\), then \(f_{s}(s)\sim 1\) and
\[\chi_{ij}\sim\frac{\alpha}{2\pi}K_{u}^{2}. \tag{56}\]
For \(s\ll 4m^{2}\), we have \(f_{s}(s)\sim 11s^{2}/360m^{4}\) and
\[\chi_{ij}\sim\frac{\alpha}{16\pi}\frac{{\bf k}_{3}^{2}}{m^{2}}K_{u}^{2}. \tag{57}\]
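For orientation, the estimates (56) and (57) can be evaluated for sample parameters. The minimal sketch below uses illustrative values of \(K_{u}\) and of the soft-photon energy \(|{\bf k}_{3}|\) that are not taken from the text.

```python
# Order-of-magnitude evaluation of Eqs. (56) and (57) for sample parameters.
import math

alpha = 1 / 137.036
m_eV = 0.511e6          # electron mass in eV
k3_eV = 1.0             # sample soft-photon energy in eV

for K_u in (0.1, 1.0):
    chi_threshold = alpha / (2 * math.pi) * K_u**2                    # Eq. (56), |K| ~ 2m
    chi_soft = alpha / (16 * math.pi) * (k3_eV / m_eV)**2 * K_u**2    # Eq. (57), s << 4m^2
    print(f"K_u={K_u}: chi~{chi_threshold:.1e} (threshold), {chi_soft:.1e} (s << 4m^2)")
```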
In order to find the magnitude of the susceptibility of a single photon wave packet, one can employ the estimate
\[{\bf A}^{2}\sim n_{s}/|{\bf k}_{3}|, \tag{58}\]
where \(n_{s}\) is the photon number density at a given point. By the order of magnitude, \(n_{s}\sim\sigma_{s}^{3}\), where \(\sigma_{s}\) is the dispersion of momenta in the wave packet of a soft photon \(s_{\alpha}\). Then
\[\chi_{ij}\sim 8\alpha^{2}f_{s}(s)\frac{\sigma_{s}^{2}}{{\bf K}^{2}|{\bf k}_{3 }|}. \tag{59}\]
As for the probe photon with \(|{\bf K}|\sim 2m\), we obtain
\[\chi_{ij}\sim 2\alpha^{2}\frac{\sigma_{s}^{3}}{m^{2}|{\bf k}_{3}|}\lesssim 2 \alpha^{2}\frac{{\bf k}_{3}^{2}}{m^{2}}, \tag{60}\]
where we have taken \(\sigma_{s}\approx|{\bf k}_{3}|\) for the upper estimate. If \(s\ll 4m^{2}\), then
\[\chi_{ij}\sim\frac{\alpha^{2}}{4}\frac{|{\bf k}_{3}|\sigma_{s}^{2}}{m^{4}} \lesssim\frac{\alpha^{2}}{4}\frac{{\bf k}_{3}^{4}}{m^{4}}. \tag{61}\]
For example, the quantity on the right-hand side of (61) is equal to \(1.95\times 10^{-28}\) for the photon in the state \(s_{\alpha}\) with the energy 1 eV.
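This value can be reproduced directly; the sketch below assumes only \(\alpha=1/137.036\) and \(m=0.511\) MeV.

```python
# Cross-check of the value quoted after Eq. (61): chi ~ (alpha^2/4)(|k_3|/m)^4 for a 1 eV photon.
alpha = 1 / 137.036
m_eV = 0.511e6    # electron mass in eV
k3_eV = 1.0       # soft-photon energy in eV

chi = (alpha**2 / 4) * (k3_eV / m_eV)**4
print(f"chi ~ {chi:.3g}")   # ~1.95e-28, as quoted in the text
```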
## 4 Inclusive probability
Let us find the explicit expression for the inclusive probability (22). From (29) we have
\[\Phi_{\bar{\beta}\beta}h_{\beta}=i\sqrt{\frac{(2\pi)^{3}}{V}}\sum_{\lambda_{1},\lambda_{2},\lambda_{3}}\int\frac{d{\bf k}_{3}d{\bf k}_{2}}{4(2\pi)^{2}} \delta(k_{3}^{0}+k_{4}^{0}-k_{1}^{0}-k_{2}^{0})\bar{s}_{\lambda_{3}}({\bf k}_ {3})s_{\lambda_{1}}({\bf k}_{1})\frac{M_{\lambda_{3}\lambda_{4}\lambda_{1} \lambda_{2}}(s,t,u)h_{\lambda_{2}}({\bf k}_{2})}{\sqrt{|{\bf k}_{1}||{\bf k}_{ 2}||{\bf k}_{3}||{\bf k}_{4}|}}\Big{|}_{{\bf k}_{1}={\bf k}_{3}+\Delta{\bf k}}. \tag{62}\]
To reveal the main features of expression (22), we assume that \(|\Delta{\bf k}|\) not only satisfies the conditions (43) but is much less than the typical scale of variation of the complex amplitude \(s_{\lambda}({\bf k})\). In the coordinate space, this condition means that the typical scale of variation of the wave function or of the average electromagnetic field of soft photons in the state \(s_{\alpha}\) is much less than the diameter of the region of localization of the wave function of the hard probe photon \(h_{\beta}\). Notice that, in the plane-wave limit for the state \(h_{\beta}\), where \(|\Delta{\bf k}|\to 0\), all the above conditions are satisfied.
Then one can neglect the dependence on \(\Delta{\bf k}\) and put \({\bf k}_{1}\approx{\bf k}_{3}\) and \({\bf k}_{2}\approx{\bf k}_{4}\) in all the functions appearing in the integrand of (62) apart from the delta function and \(h_{\lambda_{2}}({\bf k}_{2})=h_{\lambda_{2}}({\bf k}_{4}-\Delta{\bf k})\). As regards the argument of the delta function, we have
\[k_{3}^{0}+k_{4}^{0}-k_{1}^{0}-k_{2}^{0}\approx({\bf n}_{4}-{\bf n}_{3})\Delta{ \bf k}. \tag{63}\]
Introduce the splitting
\[{\bf k}_{2}={\bf k}_{2||}+{\bf k}_{2\perp}, \tag{64}\]
where
\[{\bf k}_{2||}:=({\bf n}_{4}-{\bf n}_{3})\frac{({\bf k}_{2}({\bf n}_{4}-{\bf n }_{3}))}{({\bf n}_{4}-{\bf n}_{3})^{2}}, \tag{65}\]
and analogously for other vectors. Integrating the delta function in (62), we come to
\[\Phi_{\bar{\beta}\beta}h_{\beta}=i\sqrt{\frac{(2\pi)^{3}}{V}}\sum_{\lambda_{1},\lambda_{2},\lambda_{3}}\int\frac{d{\bf k}_{3}}{(2\pi)^{2}}\frac{\bar{s}_{\lambda_{3}}({\bf k}_{3})s_{\lambda_{1}}({\bf k}_{3})M_{\lambda_{3}\lambda_{4}\lambda_{1}\lambda_{2}}\tilde{h}_{\lambda_{2}}({\bf k}_{4\parallel})}{4|{\bf n}_{4}-{\bf n}_{3}||{\bf k}_{3}||{\bf k}_{4}|}, \tag{66}\]
where
\[\tilde{h}_{\lambda_{2}}({\bf k}_{4\parallel}):=\int\!d{\bf k}_{4\perp}h_{\lambda_{ 2}}({\bf k}_{4\parallel},{\bf k}_{4\perp}). \tag{67}\]
To simplify further the expression (66), we suppose that the complex amplitude \(s_{\alpha}\) is such that the dispersion of the vector \({\bf n}_{3}\) in this state is small, i.e., this state of photons is paraxial. Setting \({\bf n}_{3}={\bf n}_{30}\), where \({\bf n}_{30}\) is the average value of \({\bf n}_{3}\) in the state \(s_{\alpha}\), and using the approximate expressions for the invariant scattering amplitudes (39), the wave function of the hard probe photon after scattering (20) is given by
\[(h_{t})_{\tilde{\beta}}=\sqrt{\frac{(2\pi)^{3}}{V}}\big{[}h_{\lambda_{4}}({\bf k }_{4})+i\varkappa\sum_{\lambda_{2}}(\xi_{0}+\mathbf{\xi\sigma})_{ \lambda_{4}\lambda_{2}}\tilde{h}_{\lambda_{2}}({\bf k}_{4\parallel})\big{]}, \tag{68}\]
where
\[\varkappa = \frac{\alpha^{2}}{2\pi^{2}|{\bf k}_{4\parallel}|{\bf n}_{4}-{\bf n }_{30}|},\] \[\xi_{0} = \int\frac{d{\bf k}_{3}}{|{\bf k}_{3}|}f_{s}(s)s^{\dagger}({\bf k} _{3})s({\bf k}_{3}),\] \[\xi_{1} = -\int\frac{d{\bf k}_{3}}{2|{\bf k}_{3}|}g(s)s^{\dagger}({\bf k}_{ 3})\sigma_{1}s({\bf k}_{3}), \tag{69}\] \[\xi_{2} = \int\frac{d{\bf k}_{3}}{2|{\bf k}_{3}|}g(s)s^{\dagger}({\bf k}_{ 3})\sigma_{2}s({\bf k}_{3}),\] \[\xi_{3} = -\int\frac{d{\bf k}_{3}}{|{\bf k}_{3}|}f_{a}(s)s^{\dagger}({\bf k} _{3})\sigma_{3}s({\bf k}_{3}),\]
and \(s=|{\bf k}_{3}||{\bf k}_{4}|({\bf n}_{4}-{\bf n}_{30})^{2}\). Let us stress that expressions (69) are written in the chiral basis. As for the basis of linear polarization vectors \({\bf e}_{1,2}\), the corresponding expressions take the form
\[\xi_{0}^{l} = \int\frac{d{\bf k}_{3}}{|{\bf k}_{3}|}f_{s}(s)s^{\dagger}({\bf k} _{3})s({\bf k}_{3}),\] \[\xi_{1}^{l} = \int\frac{d{\bf k}_{3}}{2|{\bf k}_{3}|}g(s)s^{\dagger}({\bf k}_{3 })\sigma_{1}s({\bf k}_{3}), \tag{70}\] \[\xi_{2}^{l} = \int\frac{d{\bf k}_{3}}{|{\bf k}_{3}|}f_{a}(s)s^{\dagger}({\bf k} _{3})\sigma_{2}s({\bf k}_{3}),\] \[\xi_{3}^{l} = -\int\frac{d{\bf k}_{3}}{2|{\bf k}_{3}|}g(s)s^{\dagger}({\bf k}_{ 3})\sigma_{3}s({\bf k}_{3}),\]
where \(s_{l}({\bf k}_{3})\) are also given in the basis of linear polarization vectors. In particular, if \(s_{1}=0\) or \(s_{2}=0\), then \(\xi_{2}^{l}=0\). If \(s_{+}=0\) or \(s_{-}=0\), then \(\xi_{1}^{l}=\xi_{3}^{l}=0\).
The formulas above are easily generalized to the case where the initial state of the probe photon is a mixed one with the density matrix
\[\rho_{\beta\beta^{\prime}}=\frac{(2\pi)^{3}}{V}\frac{(1+\mathbf{\zeta }({\bf k}_{2};{\bf k}^{\prime}_{2})\mathbf{\sigma})_{\lambda_{2} \lambda_{2}^{\prime}}}{2}\rho({\bf k}_{2};{\bf k}^{\prime}_{2}). \tag{71}\]
Supposing that \(\rho({\bf k}_{2},{\bf k}^{\prime}_{2})\) is different from zero only in a small vicinity of the diagonal, we can write
\[\rho_{\beta\beta^{\prime}}\approx\frac{(2\pi)^{3}}{V}\frac{(1+\mathbf{ \zeta}\mathbf{\sigma})_{\lambda_{2}\lambda_{2}^{\prime}}}{2}\rho({ \bf k}_{2};{\bf k}^{\prime}_{2}), \tag{72}\]
where \(\mathbf{\zeta}:=\mathbf{\zeta}({\bf k}_{2};{\bf k}_{2})\). We also assume that the detector records plane-wave photons with the momentum \({\bf k}_{4}\). In this case, the expression standing at the projector \(D_{\beta^{\prime}\bar{\beta}}\) in formula (22) for the inclusive probability becomes
\[\frac{(2\pi)^{3}}{V}\frac{1}{2}\big{\{}\rho(1+\mathbf{\zeta\sigma})+i \varkappa[(\xi_{0}+\mathbf{\xi\sigma})(1+\mathbf{\zeta\sigma}) \tilde{\rho}-c.c.]\big{\}}_{\lambda_{4}\lambda_{4}^{\prime}}, \tag{73}\]
where \(\rho:=\rho({\bf k}_{4\parallel};{\bf k}_{4})\) and
\[\tilde{\rho}=\tilde{\rho}({\bf k}_{4\parallel};{\bf k}_{4})=\int\!d{\bf k}_{4 \perp}\rho({\bf k}_{4\parallel},{\bf k}_{4\perp};{\bf k}^{\prime}_{4})\Big{|} _{{\bf k}^{\prime}_{4}={\bf k}_{4}}. \tag{74}\]
Let the detector record the hard probe photons in some spin state specified by the projector \(D^{(s)}_{\lambda^{\prime}_{4}\lambda_{4}}\). Then the inclusive probability (22) to record a hard photon in this state is written as
\[dP_{D}=\frac{1}{2}\sum_{\lambda_{4},\lambda^{\prime}_{4}}D^{(s)}_{\lambda^{ \prime}_{4}\lambda_{4}}\Big{\{}\rho-2\varkappa(\xi_{0}+\mathbf{\xi \zeta})\,{\rm Im}\,\tilde{\rho}+\big{[}\rho\mathbf{\zeta}-2\varkappa( \mathbf{\xi}+\xi_{0}\mathbf{\zeta})\,{\rm Im}\,\tilde{\rho}-2 \varkappa[\mathbf{\xi},\mathbf{\zeta}]\,{\rm Re}\,\tilde{\rho} \big{]}\mathbf{\sigma}\Big{\}}_{\lambda_{4}\lambda^{\prime}_{4}}d{\bf k }_{4}. \tag{75}\]
Recall that this expression is obtained in the leading order of perturbation theory and describes the interference of a free passed wave with its scattered part. This expression is valid only in the parameter domain where the overlap of the interfering waves is substantial. The correction to the trivial (free) contribution to the inclusive probability turns out to be of the order \(\alpha^{2}\) rather than \(\alpha^{4}\) as for the standard expression for the light-by-light scattering cross-section [26]. Moreover, in deriving expression (75), it has been assumed that the wave packet of the probe photon is sufficiently narrow in the momentum space, i.e., \(|\Delta{\bf k}|\) obeys conditions (43) and is much less than the typical scale of variation of the wave function of soft photons \(s_{\alpha}\). The complex amplitude \(s_{\alpha}\) describing the state of soft photons has been supposed to be paraxial.
Consider some particular cases of general formula (75). If the probe photons are naturally polarized, viz., \(\boldsymbol{\zeta}=0\), then under the above assumptions formula (75) implies
\[dP_{D}=\frac{1}{2}\sum_{\lambda_{4},\lambda^{\prime}_{4}}D^{(s)}_{\lambda^{ \prime}_{4}\lambda_{4}}\big{[}\rho-2\varkappa(\xi_{0}+\boldsymbol{\xi} \boldsymbol{\sigma})\operatorname{Im}\tilde{\rho}\big{]}_{\lambda_{4}\lambda^ {\prime}_{4}}d{\bf k}_{4}. \tag{76}\]
In this case, the nontrivial contributions to the inclusive probability stem from the imaginary part of the density matrix of the hard probe photon in the momentum space. The hard photons, being initially in the state (72) with \(\boldsymbol{\zeta}=0\), become polarized with the Stokes vector proportional to the vector \(\boldsymbol{\xi}\). In general, the presence of the imaginary contribution to the density matrix of a probe photon gives rise to the following transform of the Stokes parameters:
\[\zeta^{0}\to\zeta^{0\prime}=\zeta^{0}-2\varkappa(\xi_{0}+\boldsymbol{\xi}\boldsymbol{\zeta})\frac{\operatorname{Im}\tilde{\rho}}{\rho},\qquad\boldsymbol{\zeta}\to\boldsymbol{\zeta}^{\prime}=\boldsymbol{\zeta}-2\varkappa(\boldsymbol{\xi}+\xi_{0}\boldsymbol{\zeta})\frac{\operatorname{Im}\tilde{\rho}}{\rho}. \tag{77}\]
The imaginary contributions to the density matrix are absent for a usual narrow (in the momentum space) Gaussian wave packet. However, the imaginary part of \(\tilde{\rho}\) may appear due to a nontrivial structure of the wave packet. For example, such imaginary contributions exist for twisted and Airy states, coherent superpositions of several Gaussians, and others (see, e.g., [40, 41, 42]). If \(\operatorname{Im}\tilde{\rho}=0\), then
\[dP_{D}=\frac{1}{2}\sum_{\lambda_{4},\lambda^{\prime}_{4}}D^{(s)}_{\lambda^{ \prime}_{4}\lambda_{4}}\big{[}\rho+\big{(}\rho\boldsymbol{\zeta}-2\varkappa [\boldsymbol{\xi},\boldsymbol{\zeta}]\operatorname{Re}\tilde{\rho}\big{)} \boldsymbol{\sigma}\big{]}_{\lambda_{4}\lambda^{\prime}_{4}}d{\bf k}_{4}. \tag{78}\]
As a result of interaction with photons in the state \(s_{\alpha}\), the Stokes vector of the probe photon is changed in accordance with the rule (cf. [26, 27, 28, 29, 30])
\[\boldsymbol{\zeta}\to\boldsymbol{\zeta}^{\prime}=\boldsymbol{\zeta}-2 \varkappa[\boldsymbol{\xi},\boldsymbol{\zeta}]\frac{\operatorname{Re}\tilde{ \rho}}{\rho}. \tag{79}\]
As we see, in this case the Stokes vector \(\boldsymbol{\zeta}\) precesses around the vector \(\boldsymbol{\xi}\). The polarization degree of a hard probe photon, \(|\boldsymbol{\zeta}|\), is conserved [26, 27, 28, 29, 30] up to the terms of higher order in the coupling constant. The precession frequency depends substantially on the form of the density matrix of the probe photon. In the general case described by formula (75), the Stokes vector undergoes simultaneous transforms given by (77) and (79).
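The transforms (77) and (79) can be illustrated with a short numerical sketch. All inputs below (the vectors \(\boldsymbol{\xi}\) and \(\boldsymbol{\zeta}\), \(\varkappa\), and the ratios \({\rm Im}\,\tilde{\rho}/\rho\) and \({\rm Re}\,\tilde{\rho}/\rho\)) are arbitrary sample numbers, and the two maps are combined additively at leading order in \(\varkappa\). The example also checks that the pure precession (79) preserves the polarization degree \(|\boldsymbol{\zeta}|\) up to terms of higher order.

```python
# Leading-order Stokes-parameter transforms of Eqs. (77) and (79); sample values only.
import numpy as np

def stokes_update(zeta0, zeta, xi0, xi, kappa, im_ratio, re_ratio):
    """Combined maps (77) and (79): im_ratio = Im(rho~)/rho, re_ratio = Re(rho~)/rho."""
    zeta0_new = zeta0 - 2 * kappa * (xi0 + xi @ zeta) * im_ratio
    zeta_new = (zeta
                - 2 * kappa * (xi + xi0 * zeta) * im_ratio     # Eq. (77)
                - 2 * kappa * np.cross(xi, zeta) * re_ratio)   # Eq. (79)
    return zeta0_new, zeta_new

zeta = np.array([0.3, -0.2, 0.5])   # sample initial Stokes vector
xi   = np.array([0.1, 0.4, -0.3])   # sample effective-medium vector
kappa = 1e-3

# Pure precession, Im(rho~) = 0: the polarization degree |zeta| is conserved to O(kappa)
_, zeta_p = stokes_update(1.0, zeta, 1.0, xi, kappa, im_ratio=0.0, re_ratio=1.0)
print(np.linalg.norm(zeta), np.linalg.norm(zeta_p))  # differ only at O(kappa^2)
```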
Let us estimate the relative magnitude of the quantum corrections in (75), (77), and (79). By the order of magnitude, the relative value of this correction equals \(2\varkappa\xi_{0}(\sigma^{h}_{\perp})^{2}\), where \(\sigma^{h}_{\perp}\) is the dispersion of the transverse momentum component in the wave packet of the probe photon \(h_{\beta}\) and \(\varkappa\sim\alpha^{2}/2\pi^{2}|{\bf k}_{4}|\). Thus,
\[2\varkappa\xi_{0}(\sigma^{h}_{\perp})^{2}\sim\frac{\alpha^{2}}{\pi^{2}}f_{s}( s)\frac{E_{s}(\sigma^{h}_{\perp})^{2}}{{\bf k}_{3}^{2}|{\bf k}_{4}|}, \tag{80}\]
where \(E_{s}\) is the average energy of photons in the state \(s_{\alpha}\). For the photons from the state \(s_{\alpha}\) to participate in the reaction, their wave functions must overlap with the wave function of the probe photon. Therefore, \(E_{s}\sim w_{s}/[\sigma^{h}_{\parallel}(\sigma^{h}_{\perp})^{2}]\), where \(w_{s}\) is the energy density of photons in the state \(s_{\alpha}\) and \(\sigma^{h}_{\parallel}\) is the dispersion of the longitudinal momentum component in the state of the probe photon (72). As a result,
\[2\varkappa\xi_{0}(\sigma^{h}_{\perp})^{2}\sim\frac{\alpha^{2}}{\pi^{2}}\frac{ f_{s}(s)w_{s}}{{\bf k}_{3}^{2}|{\bf k}_{4}|\sigma^{h}_{\parallel}}. \tag{81}\]
In the case when the probe photon energy is near the electron-positron pair creation threshold, \(|{\bf k}_{4}|\sim 2m\), we have \(f_{s}(s)\sim 1\) and
\[2\varkappa\xi_{0}(\sigma^{h}_{\perp})^{2}\sim\frac{\alpha^{2}}{2\pi^{2}}\frac{w_{s}}{m{\bf k}_{3}^{2}\sigma^{h}_{\parallel}}=2.5\times 10^{-7}\Big{[}w_{s}\frac{\operatorname{cm}^{3}}{\operatorname{J}}\Big{]}\frac{\operatorname{eV}^{3}}{{\bf k}_{3}^{2}\sigma^{h}_{\parallel}}. \tag{82}\]
In particular, for the laser beam with intensity \(I=10^{18}\) W/cm\({}^{2}\)[39] and, correspondingly, the energy density \(w_{s}=I/c=3.33\times 10^{7}\) J/cm\({}^{3}\), we obtain
\[2\varkappa\xi_{0}(\sigma_{\perp}^{h})^{2}\sim 8.5\,\frac{\mathrm{eV}^{3}}{\mathbf{k}_{3}^{2}\sigma_{\parallel}^{h}}. \tag{83}\]
If the energy of a soft photon \(|\mathbf{k}_{3}|\) and the dispersion \(\sigma_{\parallel}^{h}\) are such that this quantity is of order unity or larger, then one needs to take into account multiple scattering of the hard probe photon on the photons in the state \(s_{\alpha}\). As is seen from the estimate (83), the contribution of the quantum correction can be rather large for reasonable values of the parameters of the beam \(s_{\alpha}\) and of the state of the probe photon. For \(s\ll 4m^{2}\), carrying out the same estimates, we arrive at
\[2\varkappa\xi_{0}(\sigma_{\perp}^{h})^{2}\sim\frac{11\alpha^{2}}{360\pi^{2}}\frac{w_{s}|\mathbf{k}_{4}|}{m^{4}\sigma_{\parallel}^{h}}=1.2\times 10^{-25}\frac{|\mathbf{k}_{4}|}{\sigma_{\parallel}^{h}}\Big{[}w_{s}\frac{\mathrm{cm}^{3}}{\mathrm{J}}\Big{]}. \tag{84}\]
As for the wave packet of a single photon in the state \(s_{\alpha}\), we have \(w_{s}\sim|\mathbf{k}_{3}|n_{s}\sim|\mathbf{k}_{3}|\sigma_{s}^{3}\), where \(\sigma_{s}\) is the momentum dispersion in the wave packet \(s_{\alpha}\). Hence, for \(s\approx 4m^{2}\), we deduce
\[2\varkappa\xi_{0}(\sigma_{\perp}^{h})^{2}\sim 5.3\times 10^{-12}\Big{[}\frac{ \sigma_{s}^{3}\mathrm{eV}^{-1}}{|\mathbf{k}_{3}|\sigma_{\parallel}^{h}}\Big{]}. \tag{85}\]
For \(s\ll 4m^{2}\), we come to
\[2\varkappa\xi_{0}(\sigma_{\perp}^{h})^{2}\sim 2.4\times 10^{-30}\frac{|\mathbf{k}_{4}|}{\sigma_{\parallel}^{h}}\Big{[}\frac{|\mathbf{k}_{3}|\sigma_{s}^{3}}{\mathrm{eV}^{4}}\Big{]}. \tag{86}\]
As expected, the interference effect caused by scattering of a photon by a photon is very small in this case.
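The numerical coefficients quoted in Eqs. (82)-(86) can be checked with elementary unit conversions. The sketch below assumes \(\alpha=1/137.036\), \(m=0.511\) MeV, and \(\hbar c=197.327\) eV nm for converting J/cm\({}^{3}\) into natural units, and uses the prefactor \(11\alpha^{2}/360\pi^{2}m^{4}\) appearing in Eq. (84).

```python
# Cross-check of the numerical coefficients quoted in Eqs. (82)-(86).
import math

alpha = 1 / 137.036
m_eV = 0.511e6                       # electron mass, eV
hbarc_eV_cm = 197.327e-7             # hbar*c in eV*cm
eV_per_J = 1 / 1.602e-19

# 1 J/cm^3 expressed in eV^4 (natural units): multiply eV/cm^3 by (hbar*c)^3
J_cm3_to_eV4 = eV_per_J * hbarc_eV_cm**3

c82 = alpha**2 / (2 * math.pi**2 * m_eV) * J_cm3_to_eV4
print(f"Eq. (82): {c82:.2g}")                      # ~2.5e-7

w_s = 1e18 / 2.998e10                              # I/c for I = 1e18 W/cm^2, in J/cm^3
print(f"Eq. (83): {c82 * w_s:.2g}")                # ~8.4, consistent with the 8.5 quoted in the text

c85 = alpha**2 / (2 * math.pi**2 * m_eV)
print(f"Eq. (85): {c85:.2g} eV^-1")                # ~5.3e-12

c86 = 11 * alpha**2 / (360 * math.pi**2 * m_eV**4)
print(f"Eq. (86): {c86:.2g} eV^-4")                # ~2.4e-30
print(f"Eq. (84): {c86 * J_cm3_to_eV4:.2g}")       # ~1.2e-25
```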
## 5 Conclusion
Let us sum up the results. We have considered the interference effect in photon-photon scattering where the freely passed part of the wave function interferes with its scattered part. The forms of the wave packets of the probe photon and of the test photons have been fully taken into account. We have restricted our considerations to the case where the probe photon is hard and its state is described by some density matrix, whereas the test photons are soft and are prepared in some one-particle or coherent states. Only the leading contributions of perturbation theory to the inclusive probability to record the probe photon have been retained. In the case we have considered, these contributions stand at the zeroth and second powers of the fine structure constant \(\alpha\) [26] and, in fact, describe the evolution of the probe photon wave function traversing an effective dispersive medium represented by the soft test photons. Notice that the standard leading contribution to the cross-section of light-by-light scattering is of order \(\alpha^{4}\), and we have not taken this contribution into account. Moreover, we have supposed that the total energy of the probe and test photons is below the electron-positron pair creation threshold, i.e., the above-mentioned medium is transparent.
If the wave function of the hard probe photon is sufficiently narrow in the momentum space, then it is reasonable to employ the small recoil approximation in considering the interference effect. Using this approximation, we have obtained the general and rather compact expression (53) for the on-shell susceptibility tensor of a beam of photons and of a single photon wave packet. This tensor describes a birefringent gyrotropic dispersive medium. At small probe photon momenta, it takes a finite nonzero value and gyrotropy disappears. As the probe photon momentum increases, the components of the susceptibility tensor grow rapidly, and at the electron-positron pair creation threshold gyrotropy becomes of the same order of magnitude as the other contributions to the susceptibility tensor. We have found the estimates for the order of magnitude of the susceptibility tensor in different regimes. Furthermore, using the formalism developed in Sec. 3, in Appendix B we have generalized the expression for the susceptibility tensor of a single electron wave packet derived in [1] to the nonstationary case.
Assuming that the recoil momentum is much less than the typical scale of variation of the test photon wave functions in the momentum space and that the state of test photons is paraxial, we have simplified the general expression for the inclusive probability to record a probe photon to formula (75). This formula shows that, in passing through the effective medium, the Stokes parameters of the probe photon change and this effect is rather large for a strong beam of test photons near the electron-positron pair creation threshold [26]. We have found formulas (77) and (79) for the evolution of the Stokes parameters that generalize the analogous
expressions obtained in [26, 27, 28, 29, 30] for plane waves. It appears that the evolution of the Stokes vector depends strongly on the form of the probe photon wave packet. We have provided estimates for the order of magnitude of this effect in various regimes. The estimate (83) shows that this effect can be observed at existing and planned facilities in experiments where the polarization of the hard probe photon and its wave packet profile can be controlled [31, 32, 33, 34, 35].
Acknowledgments. The reported study was supported by the Russian Ministry of Education and Science under contract FSWM-2020-0033.
## Appendix A Traces
In deriving the general expression for the inclusive probability to record a photon, it is necessary to evaluate the traces of operators (16). Let us present here some details of these calculations. As regards the first trace, we obtain
\[\mathrm{Sp}(\hat{R}_{ph}\hat{\Pi}_{D})=\langle\bar{d}|(1-:\exp(-\hat{c}^{ \dagger}D\hat{c}):)|d\rangle e^{-\bar{d}d}=1-e^{-\bar{d}Dd}, \tag{87}\]
where we have used the fact that the coherent state,
\[|d\rangle=e^{d\hat{c}^{\dagger}}|0\rangle, \tag{88}\]
is an eigenvector for the annihilation operator \(\hat{c}_{\beta}\) with the eigenvalue \(d_{\beta}\). As far as the second trace is concerned, we have
\[\mathrm{Sp}(\hat{R}_{ph}\hat{\Pi}_{D}\hat{C}) =e^{-\bar{d}d}\langle\bar{d}|(1-:\exp(-\hat{c}^{\dagger}D\hat{c}):)\hat{c}^{\dagger}_{\bar{\alpha}}\hat{c}^{\dagger}_{\bar{\beta}}C_{\bar{\alpha}\bar{\beta}\alpha\beta}\hat{c}_{\alpha}\hat{c}_{\beta}|d\rangle= \tag{89}\] \[=e^{-\bar{d}d}d_{\alpha}d_{\beta}C_{\bar{\alpha}\bar{\beta}\alpha\beta}\frac{\delta}{\delta d_{\bar{\alpha}}}\frac{\delta}{\delta d_{\bar{\beta}}}\langle\bar{d}|(1-:\exp(-\hat{c}^{\dagger}D\hat{c}):)|d\rangle=\] \[=\big{[}\bar{d}_{\bar{\alpha}}\bar{d}_{\bar{\beta}}-(\bar{d}\tilde{D})_{\bar{\alpha}}(\bar{d}\tilde{D})_{\bar{\beta}}e^{-\bar{d}Dd}\big{]}C_{\bar{\alpha}\bar{\beta}\alpha\beta}d_{\alpha}d_{\beta}.\]
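The first trace identity (87) can also be verified numerically in the simplest setting. The sketch below takes a single bosonic mode with a sample amplitude \(d\) and a scalar \(D\), and relies on the standard identity \(:\!\exp(-D\hat{c}^{\dagger}\hat{c})\!:\,=(1-D)^{\hat{c}^{\dagger}\hat{c}}\) in a truncated Fock basis.

```python
# Single-mode numerical check of Eq. (87): Sp(R_ph Pi_D) = 1 - exp(-D |d|^2).
import numpy as np
from math import factorial

N = 40                  # Fock-space truncation
d = 0.7 + 0.3j          # sample coherent-state amplitude
D = 0.4                 # sample eigenvalue of the detector projector, single mode

n = np.arange(N)
coh = np.exp(-abs(d)**2 / 2) * d**n / np.sqrt([float(factorial(k)) for k in n])

# diagonal of Pi_D = 1 - :exp(-D c^dag c): = 1 - (1-D)^n in the Fock basis
Pi_D = 1.0 - (1.0 - D)**n

lhs = np.sum(np.abs(coh)**2 * Pi_D)       # trace over the coherent-state density matrix
rhs = 1.0 - np.exp(-D * abs(d)**2)        # right-hand side of Eq. (87)
print(lhs, rhs)                            # agree up to truncation error
```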
## Appendix B Susceptibility of a single electron wave packet
In the paper [1], the explicit expression for the on-shell susceptibility tensor of a single electron wave packet was obtained. In deriving this expression, certain approximations were made that, in particular, allowed one to consider the electron wave packet as some stationary medium. The last condition can be relaxed by conducting the calculations along the lines of Sec. 3. In this Appendix, we provide a brief derivation of the expression for the susceptibility tensor of an electron wave packet in the nonstationary case.
In the paper [1], formula (106) was derived for the matrix \(\Phi_{\bar{\beta}\beta}\) in the limit of a small recoil \(\Delta\mathbf{k}\). It reads
\[\Phi(\lambda^{\prime},\mathbf{k}^{\prime};\lambda,\mathbf{k})=-2\pi ie^{2} \frac{e^{(\lambda^{\prime})}_{i}(\mathbf{k}^{\prime})e^{(\lambda)}_{i}( \mathbf{k})}{2V\sqrt{k^{\prime}_{0}k_{0}}}\sum_{s}\int\frac{d\mathbf{p}}{E( \mathbf{p})}\delta(p_{0}+k_{0}-p^{\prime}_{0}-k^{\prime}_{0})\sum_{N=1}^{ \infty}\rho^{(N,1)}_{ss}(\mathbf{p},\mathbf{p}-\Delta\mathbf{k}). \tag{90}\]
Representing the delta function as
\[\delta(p_{0}+k_{0}-p^{\prime}_{0}-k^{\prime}_{0})=\int\frac{dx^{0}}{2\pi}e^{- i(p_{0}+k_{0}-p^{\prime}_{0}-k^{\prime}_{0})x^{0}}, \tag{91}\]
introducing the density matrix at the instant of time \(x^{0}\),
\[\rho(\mathbf{p},\mathbf{p}-\Delta\mathbf{k};x^{0}):=e^{-ik_{0}x^{0}}\rho( \mathbf{p},\mathbf{p}-\Delta\mathbf{k})e^{ik^{\prime}_{0}x^{0}}, \tag{92}\]
and the relativistic density matrix in the coordinate representation,
\[\rho(\mathbf{x},\mathbf{y};x^{0}):=\int\frac{d\mathbf{p}d\mathbf{p}^{\prime}m }{(2\pi)^{3}\sqrt{E(\mathbf{p})E(\mathbf{p}^{\prime})}}e^{i\mathbf{p}\mathbf{ x}-i\mathbf{p}^{\prime}\mathbf{y}}\rho(\mathbf{p},\mathbf{p}^{\prime};x^{0}), \tag{93}\]
it is not difficult to cast expression (90) into the form
\[\Phi(\lambda^{\prime},\mathbf{k}^{\prime};\lambda,\mathbf{k})=-ie^{2}\frac{e^ {(\lambda^{\prime})}_{i}(\mathbf{k}^{\prime})e^{(\lambda)}_{i}(\mathbf{k})}{2 mV\sqrt{k^{\prime}_{0}k_{0}}}\int d^{4}xe^{i(k^{\prime}-k)_{\mu}x^{\mu}}\rho( \mathbf{x},\mathbf{x};x^{0}), \tag{94}\]
where
\[\rho(\mathbf{x},\mathbf{x};x^{0}):=\sum_{s}\sum_{N=1}^{\infty}N\rho_{ss}^{(N,1)}( \mathbf{x},\mathbf{x};x^{0}). \tag{95}\]
Comparing the expression for \(\Phi_{\bar{\beta}\beta}\) with the amplitude of scattering by a dielectric medium (33), we conclude that, in the small recoil limit, the susceptibility tensor turns out to be
\[\chi_{ij}(x;\mathbf{K})=-\frac{4\pi\alpha\rho(\mathbf{x},\mathbf{x};x^{0})}{mK _{0}^{2}}\delta_{ij}. \tag{96}\]
This expression coincides with the susceptibility tensor of an electron plasma. Notice that formula (96) is also valid for a single electron wave packet [1].
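As a consistency check of the plasma form (96), one can verify that it reproduces the familiar plasma frequency. The sketch below assumes a sample electron density \(n_{e}=10^{20}\) cm\({}^{-3}\) and compares \(\omega_{p}^{2}=4\pi\alpha n_{e}/m\), read off from \(\chi=-\omega_{p}^{2}/K_{0}^{2}\), with the standard formula \(\omega_{p}\simeq 5.64\times 10^{4}\sqrt{n_{e}[\mathrm{cm}^{-3}]}\) rad/s.

```python
# Consistency check of Eq. (96): chi = -4*pi*alpha*n/(m*K0^2) is the plasma susceptibility.
import math

alpha = 1 / 137.036
m_eV = 0.511e6
hbarc_eV_cm = 197.327e-7
hbar_eV_s = 6.582e-16

n_cm3 = 1e20                                   # sample electron density, cm^-3
n_eV3 = n_cm3 * hbarc_eV_cm**3                 # density in natural units (eV^3)

omega_p_eV = math.sqrt(4 * math.pi * alpha * n_eV3 / m_eV)   # from Eq. (96)
omega_p_rad_s = omega_p_eV / hbar_eV_s
print(f"omega_p = {omega_p_eV:.3f} eV = {omega_p_rad_s:.3g} rad/s")
print(f"textbook: {5.64e4 * math.sqrt(n_cm3):.3g} rad/s")
```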
An explicit compact expression for the on-shell susceptibility tensor of a single-photon wave packet is derived.
The probe photon is taken to be hard and the test photons soft, and the total energy of the two photons is assumed to lie below the electron-positron pair creation threshold.
In the process of photon-photon interaction, the photon wave packet can be treated as a birefringent gyrotropic dispersive medium.
The inclusive probability to record the probe photon in the photon-photon interaction is found explicitly.
It is obtained in the first nontrivial order of perturbation theory, where the interference of the freely passing part of the photon wave function with its scattered part dominates.
This effect is of order $\alpha^2$, whereas the standard contribution to the photon-photon scattering cross-section is of order $\alpha^4$.